Bellman–Ford Algorithm
In the previous lessons, we explored Minimum Spanning Tree algorithms such as Prim’s and Kruskal’s, which focus on connecting all vertices with minimum total cost.
Now we shift our attention to a different but equally important problem: finding the shortest path.
In this lesson, we study the Bellman–Ford algorithm, a powerful algorithm designed to compute shortest paths even when negative edge weights are present.
The Shortest Path Problem
The shortest path problem asks a simple question:
What is the minimum cost required to travel from a source vertex to all other vertices in a graph?
This problem appears in many real-world systems, including navigation, routing, scheduling, and cost optimization.
Why Bellman–Ford Exists
You may wonder why we need Bellman–Ford when faster algorithms like Dijkstra exist.
The answer lies in one critical limitation:
Dijkstra’s algorithm is NOT guaranteed to produce correct results when negative edge weights are present.
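To see why, here is a small hypothetical graph (not the example used later in this lesson) where Dijkstra’s greedy choice goes wrong:
# Hypothetical graph: starting from A, Dijkstra finalizes B at distance 1,
# but the true shortest path A -> C -> B costs 5 + (-10) = -5.
edges_with_negative = [
    ('A', 'B', 1),
    ('A', 'C', 5),
    ('C', 'B', -10),
]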
Bellman–Ford solves this problem safely and can even detect negative weight cycles.
What Is a Negative Weight Cycle?
A negative weight cycle is a cycle in a graph whose total sum of edge weights is negative.
If such a cycle exists, shortest paths are undefined for any vertex reachable through the cycle, because you can keep looping and reducing the total cost forever.
Bellman–Ford is one of the few algorithms that can reliably detect this situation.
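As a small hypothetical illustration (separate from the example graph used later in this lesson), the three edges below form a cycle whose weights sum to -2:
# Hypothetical cycle X -> Y -> Z -> X with total weight 2 + (-5) + 1 = -2.
# Every trip around the loop lowers the total cost by another 2.
negative_cycle_edges = [
    ('X', 'Y', 2),
    ('Y', 'Z', -5),
    ('Z', 'X', 1),
]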
The Core Idea of Bellman–Ford
Bellman–Ford uses a technique called edge relaxation.
Instead of greedily picking the next best node, it repeatedly relaxes all edges until the shortest distances stabilize.
This makes the algorithm slower, but also more robust.
How Bellman–Ford Works
The algorithm follows these logical steps:
- Initialize distances from the source to all vertices as infinity
- Set the source distance to zero
- Relax all edges exactly (V − 1) times
- Check for negative weight cycles
Why (V − 1) times? Because a shortest path never needs to visit a vertex twice, so it can contain at most (V − 1) edges; after (V − 1) rounds of relaxation, every shortest distance has had a chance to settle.
Edge Relaxation Explained
Relaxation means checking whether a shorter path to a vertex exists through another vertex.
If the new path is cheaper, we update the distance.
This process gradually improves all distances until no better path exists.
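As a minimal sketch, a single relaxation step can be written as a small helper function (assuming distance is a dictionary holding the current best-known distance to each vertex):
def relax(u, v, weight, distance):
    # If reaching v through u is cheaper than the best route found so far,
    # record the improvement.
    if distance[u] + weight < distance[v]:
        distance[v] = distance[u] + weight
        return True   # the distance to v improved
    return False      # nothing changed
A common refinement, not required for correctness, is to stop early once a full pass over the edges produces no improvement.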
Example Graph
Consider the following weighted graph with a negative edge:
edges = [
    ('A', 'B', 4),
    ('A', 'C', 5),
    ('B', 'C', -3),
    ('C', 'D', 4),
    ('D', 'B', 2),
]
Bellman–Ford Algorithm Implementation
Below is a straightforward Python implementation of the Bellman–Ford algorithm.
def bellman_ford(vertices, edges, source):
    # Every vertex starts at infinity, except the source itself.
    distance = {v: float('inf') for v in vertices}
    distance[source] = 0

    # Relax every edge (V - 1) times; this is enough for all shortest
    # distances to stabilize when no negative cycle exists.
    for _ in range(len(vertices) - 1):
        for u, v, weight in edges:
            if distance[u] + weight < distance[v]:
                distance[v] = distance[u] + weight

    # One extra pass: if any edge can still be relaxed,
    # a negative weight cycle is reachable from the source.
    for u, v, weight in edges:
        if distance[u] + weight < distance[v]:
            return "Negative weight cycle detected"

    return distance

vertices = {'A', 'B', 'C', 'D'}
print(bellman_ford(vertices, edges, 'A'))
If a negative cycle exists, the algorithm clearly reports it.
Understanding the Output
The result gives the shortest distance from the source vertex to every other vertex.
If no negative cycle exists, these distances are guaranteed to be correct.
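For the example graph above with source 'A', this implementation should print distances equivalent to the following (key order may vary, since vertices is a set):
{'A': 0, 'B': 4, 'C': 1, 'D': 5}
Here B keeps its direct cost of 4, C drops to 1 by going through B, and D is reached through C at cost 5.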
Time Complexity
Bellman–Ford is slower than Dijkstra.
Time complexity:
O(V × E), where V is the number of vertices and E is the number of edges.
This is why it is used only when negative weights matter.
When Should You Use Bellman–Ford?
Bellman–Ford is the right choice when:
- The graph contains negative edge weights
- Negative cycle detection is required
- Correctness is more important than speed
In financial systems and cost arbitrage problems, Bellman–Ford is often preferred.
Real-World Example
Consider currency exchange rates.
Negative cycles can represent profitable arbitrage opportunities.
Bellman–Ford helps detect whether such loops exist in the system.
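One common way to model this (a sketch using made-up exchange rates and the bellman_ford function defined above) is to use -log(rate) as the edge weight: a loop whose rates multiply to more than 1 then becomes a cycle whose weights sum to a negative number, which Bellman–Ford detects.
import math

# Hypothetical exchange rates: rates[(X, Y)] = units of Y per unit of X.
rates = {
    ('USD', 'EUR'): 0.9,
    ('EUR', 'GBP'): 0.9,
    ('GBP', 'USD'): 1.3,   # 0.9 * 0.9 * 1.3 = 1.053 > 1, a profitable loop
}

currencies = {'USD', 'EUR', 'GBP'}
# -log turns multiplication of rates into addition of weights,
# so a product greater than 1 becomes a negative-weight cycle.
log_edges = [(u, v, -math.log(r)) for (u, v), r in rates.items()]

print(bellman_ford(currencies, log_edges, 'USD'))
# Reports the negative weight cycle for these made-up rates.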
Exercises
Exercise 1:
Why does Bellman–Ford relax edges (V − 1) times?
Exercise 2:
What happens if a negative cycle exists?
Exercise 3:
Can Dijkstra detect negative cycles?
Quick Quiz
Q1. Is Bellman–Ford a greedy algorithm?
Q2. What is Bellman–Ford best known for?
In the next lesson, we will move from graph algorithms to Binary Trees and Tree Traversals, building a strong foundation for hierarchical data structures.