Praised as one of the best textbooks on algorithms, CLRS nonetheless suffers from a certain degree of unwarranted formalism that makes it hard to follow on a portable device such as my Android phone. Here is one example: when explaining the topological sort algorithm, the authors introduce the notion of finishing time without defining it. In another case, the word back edge is used in the cycle-detection algorithm (a prerequisite for top-sort). Most of the time this is fine, since the definition can be inferred from context or is self-explanatory. But there are other cases, such as the edge-relaxation sub-algorithm, which is introduced at the very beginning of the graph chapters and then kept being reused throughout. This made the description of Dijkstra's algorithm less than self-contained in my opinion. Relaxation takes three parameters: two vertices u and v, and the weight function w. To relax means to assign v the distance min(d[v], d[u] + w(u, v)), i.e., to shorten v's tentative distance if the path through u is better. It seems to me that such a simple thing really can be explained on the fly, as opposed to being proxied by a subroutine.
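To make this concrete, here is a minimal sketch of the relaxation step; the dictionary layout and the names `dist` and `pred` are my own, not CLRS's notation:

```python
import math

def relax(u, v, w, dist, pred):
    """Edge relaxation: if going through u shortens the tentative
    distance to v, update v's distance and its predecessor."""
    if dist[u] + w[(u, v)] < dist[v]:
        dist[v] = dist[u] + w[(u, v)]
        pred[v] = u

# tiny usage example: one edge s -> v of weight 3
dist = {"s": 0, "v": math.inf}
pred = {"s": None, "v": None}
relax("s", "v", {("s", "v"): 3}, dist, pred)
# dist["v"] is now 3, pred["v"] is "s"
```

That is the entire subroutine; hardly worth deferring to a separate definition pages away.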

Complaints aside, struggling through the inter-related chapters of such a quantitative book on a tiny screen forced me to really remember the crux of each algorithm. One thing I had missed before is that the node with the smallest tentative distance is always the one selected after the previous node is marked off as distance-determined. After seeing the priority-queue construct in CLRS, I thought I would double-check Wikipedia, as I didn't remember the need to select the smallest among all the frontier nodes. I was wrong, of course.
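The point is easiest to see in code. Below is a sketch of Dijkstra with Python's binary heap standing in for the priority queue (the adjacency-list layout is my own); the heap is exactly what guarantees the smallest-tentative-distance node is finalized next:

```python
import heapq
import math

def dijkstra(adj, source):
    """Dijkstra's algorithm: repeatedly finalize the frontier node
    with the smallest tentative distance, then relax its out-edges."""
    dist = {v: math.inf for v in adj}
    dist[source] = 0
    done = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)     # smallest tentative distance
        if u in done:
            continue                   # stale entry; u already finalized
        done.add(u)
        for v, w in adj[u]:
            if d + w < dist[v]:        # relax the edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

adj = {"s": [("a", 1), ("b", 4)], "a": [("b", 2)], "b": []}
# dijkstra(adj, "s") finalizes s, then a (dist 1), then b (dist 3)
```

Lazy deletion (skipping stale heap entries) is used here instead of a decrease-key operation, since `heapq` does not support one.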

Next comes Bellman-Ford. The first step is to label the single source s as having distance 0, and all the rest distance ∞. CLRS seems to give a rather inefficient version of the algorithm, namely: repeat |V| − 1 times the following procedure: relax each directed edge against the weight function w. Then check whether there is a negative-weight cycle. The for loop certainly achieves the objective that every path of at most |V| − 1 edges is explored, but it seems quite wasteful: why not just branch out from the single source in a breadth-first manner? Let's think about how much saving that generates. The CLRS way costs |V||E| steps. The BFS way probably reduces that by a factor of 2, since we are not really doing BFS here, but a sort of sampling-with-replacement BFS, for |V| rounds. The saving only comes from triangularization, which is not much, and certainly not an order of magnitude less.
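The CLRS version just described can be sketched in a few lines; the edge-list representation and function names are my own:

```python
import math

def bellman_ford(vertices, edges, source):
    """CLRS-style Bellman-Ford: |V| - 1 sweeps relaxing every edge,
    then one extra sweep to detect a negative-weight cycle."""
    dist = {v: math.inf for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:   # relax edge (u, v)
                dist[v] = dist[u] + w
    for u, v, w in edges:               # still relaxable => negative cycle
        if dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle reachable from source")
    return dist

edges = [("s", "a", 4), ("s", "b", 5), ("a", "b", -2)]
# bellman_ford({"s", "a", "b"}, edges, "s") gives s: 0, a: 4, b: 2
```

Note that the sweeps blindly relax every edge regardless of whether its tail's distance changed, which is exactly the wastefulness complained about above.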

Another interesting graph-related problem, not covered in CLRS, is the following bipartite matching problem (aka the stable marriage problem): assume n students are to be matched with n advisors. Each member of either side has a preference list over the members of the opposite party. A good matching is defined so that the following does not occur: two students a, b are matched with two advisors c, d respectively, a prefers d to c, and d prefers a to b. Find a good matching. For selfish reasons, I don't want to reveal the source of this problem.
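The "good matching" condition can be checked mechanically, which helps pin down the definition; the data layout and names here are my own invention:

```python
def is_stable(match, student_pref, advisor_pref):
    """Check the no-blocking-pair condition: there must be no student s
    and advisor p such that s prefers p to his own advisor and p prefers
    s to her own student. `match` maps student -> advisor; preference
    lists run from most to least preferred."""
    student_of = {a: s for s, a in match.items()}
    for s, a in match.items():
        prefs = student_pref[s]
        # advisors that s likes strictly better than his current match
        for better in prefs[:prefs.index(a)]:
            t = student_of[better]
            if advisor_pref[better].index(s) < advisor_pref[better].index(t):
                return False   # (s, better) is a blocking pair
    return True

students = {"x": ["c", "d"], "y": ["c", "d"]}
advisors = {"c": ["x", "y"], "d": ["x", "y"]}
# {"x": "c", "y": "d"} is good; {"x": "d", "y": "c"} is blocked by (x, c)
```

With both sides agreeing on the ranking, the only good matching pairs everyone's first choices where possible, as the example shows.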

The solution algorithm is in my opinion much harder to come by than Dijkstra's (not to mention Bellman-Ford), though the utility is much less, I suppose. It belongs to game theory proper. First the students each propose to their most beloved prospective advisor; this means several students could go for a single prof. Then each prof who was chosen selects his favorite among the suitors. Call this round one. In round two, each rejected student proposes to his or her second-favorite prof, and each prof again chooses among this round's suitors plus the student he accepted in round one. This means a prof can sometimes trade up, i.e., give up his previous choice for someone better. Continue until no student remains unmatched; since each student goes down his list at most once, this takes at most n² proposals in total, where n is the number of students or profs. To show the algorithm is correct, first note that in the end everyone gets paired with someone: a student who has not yet found an advisor keeps going down the list, but a prof, once proposed to, holds some student forever after. So if our student were rejected by all n profs, all n profs would each hold a distinct student other than him, which is impossible with only n − 1 other students; thanks to monogamy, he or she must find an advisor before exhausting the list.
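The rounds described above can be sketched as follows; this is the student-proposing form of the procedure, with names and data layout of my own choosing:

```python
def gale_shapley(student_pref, advisor_pref):
    """Student-proposing matching: free students propose down their
    preference lists; an advisor holds her best suitor so far and
    trades up when someone better proposes."""
    rank = {a: {s: i for i, s in enumerate(prefs)}
            for a, prefs in advisor_pref.items()}
    next_choice = {s: 0 for s in student_pref}  # next advisor to try
    current = {}                # advisor -> tentatively held student
    free = list(student_pref)
    while free:
        s = free.pop()
        a = student_pref[s][next_choice[s]]
        next_choice[s] += 1
        if a not in current:
            current[a] = s                 # first suitor is held
        elif rank[a][s] < rank[a][current[a]]:
            free.append(current[a])        # advisor trades up
            current[a] = s
        else:
            free.append(s)                 # rejected; try the next advisor
    return {s: a for a, s in current.items()}

students = {"x": ["c", "d"], "y": ["c", "d"]}
advisors = {"c": ["x", "y"], "d": ["x", "y"]}
# gale_shapley(students, advisors) pairs x with c and y with d
```

The `while` loop makes the termination argument visible: `next_choice[s]` only ever increases, so each student proposes to each advisor at most once.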

Next, to show stability, suppose student s is paired with advisor a. For the sake of contradiction, say s actually prefers b to a, and b prefers s to t, the student b is paired with at the end. Then s would have proposed to b before proposing to a, and b, having once held s as a suitor, would never afterwards settle for someone less preferable than s; yet that must be the case if b eventually chose t, who is inferior to s. Clearly, if this argument were the starting point for coming up with the above algorithm, one would have to be a master at juggling several relations in one's head. Fortunately, with the algorithm already in place, it's easier to see through the stability argument.