We can generalize the single-source shortest path problem to find the shortest path between all pairs of vertices.
Worst case to beat
Run the Bellman-Ford algorithm from each vertex, treating it as a source. This gives us a runtime of $O(V^2 E)$, with worst case $O(V^4)$ since $E = O(V^2)$ in a dense graph. Therefore, $O(V^2 E)$ is the runtime we want to beat.
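This baseline can be sketched as follows. It is a minimal sketch, assuming edges are given as `(u, v, weight)` triples; the function names are my own and negative-cycle detection is omitted:

```python
from math import inf

def bellman_ford(vertices, edges, source):
    """Single-source shortest paths in O(V * E)."""
    dist = {v: inf for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):  # relax every edge V - 1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

def all_pairs_via_bellman_ford(vertices, edges):
    """Run Bellman-Ford once per source: O(V^2 * E) total."""
    return {s: bellman_ford(vertices, edges, s) for s in vertices}
```

Running it on a small directed graph returns a nested map `dist[source][target]`.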
Using DP
One approach is to use DP. Maybe decompose the path w.r.t. the last edge that we choose to include?
Naively
A $d$-function could look like

$$d(u, v, k) = \text{length of the shortest path from } u \text{ to } v \text{ using at most } k \text{ edges}$$

The base case occurs when we have $k = 0$. If $u = v$, then $d$ should return $0$; otherwise it's infinity, because we've used up all our edges and still haven't formed a path between them. If $u = v$ and $k > 0$, we should also return $0$, since the empty path already connects $u$ to itself.
A generalized $d$-function could look like

$$d(u, v, k) = \min_{(x, v) \in E} \left\{ d(u, x, k - 1) + w(x, v) \right\}$$

for all edges $(x, v) \in E$. Essentially, we are minimizing over the choice of last edge: we check each edge entering $v$ and pick the one that extends a shortest path from $u$ by the smallest total weight.
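The base case and recurrence above can be combined into a memoized sketch. This is one possible implementation under stated assumptions (edges as `(u, v, weight)` triples; the helper name `make_d` is hypothetical):

```python
from math import inf
from functools import lru_cache

def make_d(vertices, edges):
    """Build the memoized d(u, v, k) from the DP recurrence."""
    # in_edges[v] lists (x, w) for every edge (x, v) with weight w
    in_edges = {v: [] for v in vertices}
    for x, v, w in edges:
        in_edges[v].append((x, w))

    @lru_cache(maxsize=None)
    def d(u, v, k):
        if u == v:        # empty path suffices, regardless of k
            return 0
        if k == 0:        # edge budget exhausted, no path formed
            return inf
        # minimize over the last edge (x, v) entering v
        return min((d(u, x, k - 1) + w for x, w in in_edges[v]),
                   default=inf)

    return d
```

A caller would query `d(u, v, len(vertices) - 1)`, since a shortest simple path uses at most $|V| - 1$ edges.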
This leads to a bad runtime
We have $O(V^3)$ subproblems: $u$ and $v$ both come from $V$, and we use at most $|V| - 1$ edges between them. Next, we iterate over the in-neighbor set of $v$ to form the path $u \rightsquigarrow x \to v$, which is at most $O(V)$ work per subproblem. Assuming we cache results, our runtime is still $O(V^4)$, which is not any better than Bellman-Ford.
Another way of visualizing it is to consider the algorithm in a bottom-up approach. Iterate through $k = 1, \dots, |V| - 1$, iterate through each pair of vertices $(u, v)$, and consider each $x$ such that $(x, v) \in E$ exists. We have the summation:

$$\sum_{k=1}^{|V|-1} \sum_{u \in V} \sum_{v \in V} \deg_{in}(v) = O(V \cdot V \cdot E) = O(V^2 E)$$

This is equal to, worst case, $O(V^4)$.
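The bottom-up iteration just described can be sketched as follows (again a sketch under the same assumed `(u, v, weight)` edge format; the loops mirror the summation, with the inner loop over in-neighbors of $v$):

```python
from math import inf

def all_pairs_bottom_up(vertices, edges):
    """Layered DP: d[u][v] = shortest u->v path using at most k edges."""
    n = len(vertices)
    in_edges = {v: [] for v in vertices}
    for x, v, w in edges:
        in_edges[v].append((x, w))
    # k = 0 layer: only the empty path u -> u has finite length
    d = {u: {v: (0 if u == v else inf) for v in vertices} for u in vertices}
    for _ in range(n - 1):                # k = 1 .. V-1
        nxt = {u: dict(d[u]) for u in vertices}
        for u in vertices:                # V choices of u
            for v in vertices:            # V choices of v
                for x, w in in_edges[v]:  # deg_in(v) candidate last edges
                    if d[u][x] + w < nxt[u][v]:
                        nxt[u][v] = d[u][x] + w
        d = nxt
    return d
```

Summing the innermost loop over all $v$ gives $E$ per $(k, u)$ pair, matching the $O(V^2 E)$ bound.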
Cutting the time we spend at each subproblem
The issue above is that while we restrict the number of edges we can use, we are not restricting where those edges can lead (i.e., which vertices they may pass through). To reduce the runtime, we should only consider a subset of the vertices as intermediates at each iteration. This leads us to the Floyd-Warshall algorithm.
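For reference, a minimal sketch of that idea, restricting which vertices may appear as intermediates rather than how many edges are used, looks like this (standard Floyd-Warshall, $O(V^3)$; edge format assumed as before):

```python
from math import inf

def floyd_warshall(vertices, edges):
    # dist[u][v] after processing m: shortest u->v path whose intermediate
    # vertices all lie in the prefix of vertices processed so far
    dist = {u: {v: (0 if u == v else inf) for v in vertices} for u in vertices}
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    for m in vertices:            # newly allow m as an intermediate vertex
        for u in vertices:
            for v in vertices:
                if dist[u][m] + dist[m][v] < dist[u][v]:
                    dist[u][v] = dist[u][m] + dist[m][v]
    return dist
```

Because the inner work per intermediate vertex is $O(V^2)$ instead of $O(E)$ per edge-count layer, the total drops to $O(V^3)$.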