ANONYMOUS wrote:
> Hello again! For the lecture today I struggled quite a bit to understand, so I watched some youtube videos online to explain, and I wanted to ask if the formula for w'(u, v) can be written as in the image below instead of as in the lecture?
>
> Also is Johnson's Algorithm just Bellman-Ford once then reweight with it and finish with Dijkstra's ? Thanks so much for your help!
Firstly, the equation you gave is the same as the one in the lecture; we just use h(u) instead of d[u]. So I am not sure what you are asking? Perhaps you could rephrase the question? Secondly, yes, Johnson's algorithm is effectively: run Bellman-Ford once, reweight the graph, then run Dijkstra's from every vertex. The only tricky part is coming up with a good reweighting strategy.
Here is how I think about the reweighting method. This is different to how it was covered in the lecture and might help.
We want to reweight the graph in such a way that every path between any pair of vertices, say u and v, changes by the same amount, regardless of the number of edges in the path. Say d1(u,v) is the weight of one path from u to v in the original graph, and d2(u,v) is the weight of some other path from u to v. Also, say d1'(u,v) and d2'(u,v) are the weights of those same paths in the reweighted graph. We want it to be the case that d1(u,v) - d1'(u,v) = d2(u,v) - d2'(u,v). Basically, all paths between a pair of nodes change by the same amount.
To achieve the reweighting, we're going to use a "potential function", call it h(u). We reweight each edge as follows: w'(u,v) = w(u,v) + h(u) - h(v), where w'(u,v) is the new weight and w(u,v) is the old weight.
Regardless of how we define h(u), this has a nice property. The weight of a path (u, x1), (x1, x2), ..., (xn, v) in the reweighted graph becomes w'(u, x1) + w'(x1, x2) + ... + w'(xn, v). If you substitute in w'(u,v) = w(u,v) + h(u) - h(v), the sum becomes w(u, x1) + h(u) - h(x1) + w(x1, x2) + h(x1) - h(x2) + ... + w(xn, v) + h(xn) - h(v), which simplifies to w(u, x1) + w(x1, x2) + ... + w(xn, v) + h(u) - h(v). This is because -h(x1) + h(x1), -h(x2) + h(x2), and so on cancel, leaving only the original weights plus h(u) - h(v). This means that the weight of every path from u to v changed by the same amount: h(u) - h(v)! No matter what the intermediate edges were, all the h(x*) parts cancel!
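If it helps, here is a small sketch (not from the lecture, toy graph and potential values are my own) that checks the telescoping property numerically: with an arbitrary potential h, the weight of every u-to-v path shifts by exactly h(u) - h(v).

```python
# Toy graph as an edge-weight dict; vertices are strings.
w = {
    ('u', 'a'): 4, ('a', 'v'): -2,   # path 1: u -> a -> v, weight 2
    ('u', 'b'): 1, ('b', 'v'): 5,    # path 2: u -> b -> v, weight 6
}
# Any potential function demonstrates the telescoping argument.
h = {'u': 7, 'a': 3, 'b': -1, 'v': 2}

def reweight(w, h):
    """w'(x, y) = w(x, y) + h(x) - h(y) for every edge."""
    return {(x, y): wt + h[x] - h[y] for (x, y), wt in w.items()}

def path_weight(w, path):
    """Sum the edge weights along a path given as a vertex list."""
    return sum(w[(x, y)] for x, y in zip(path, path[1:]))

wp = reweight(w, h)
for path in [['u', 'a', 'v'], ['u', 'b', 'v']]:
    # Every path from u to v shifts by the same h('u') - h('v') = 5.
    print(path, path_weight(wp, path) - path_weight(w, path))
```

Both paths shift by 5 even though they have different weights and go through different intermediate vertices, so the shortest path in the reweighted graph is still the shortest path in the original.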
So, any node potential function h(u) will not change which path is the shortest path. All we need to do is pick one so that every w'(u,v) is non-negative, since Dijkstra's algorithm can only handle non-negative edge weights. If we pick h(u) = distance(s, u) for a special source node s (as explained in the slides), then every w'(u,v) will end up being non-negative. To see why, look at the definition: w'(u,v) = w(u,v) + h(u) - h(v). We need to show that h(v) <= h(u) + w(u,v), i.e. distance(s,v) <= distance(s,u) + w(u,v). This is just the triangle inequality: one way to reach v is to take the shortest path from s to u and then follow the edge (u,v), so the shortest path to v is either exactly distance(s,u) + w(u,v) or shorter, since it always has the option of choosing that route. This holds whether or not w(u,v) is negative, and rearranging gives w'(u,v) = w(u,v) + h(u) - h(v) >= 0.