My first serious encounter with the idea of transposition came through the famous random transposition card-shuffling model, pioneered by Diaconis and Shahshahani, which has been solved exactly in terms of convergence rate under a variety of metrics. The Kac model is another example, essentially generalizing the operation to a continuous state space. A more recent success story in the Markov chain business that incorporates transpositions is Aldous' famous spectral gap conjecture for the interchange process (closely related to the exclusion process). The model is simple: put n distinguishable pebbles on the vertices of a graph, and at each point in time pick a random edge and transpose the pebbles on its endpoints. The conjecture is that the relaxation time of this process in continuous time is exactly the same as that of the corresponding simple random walk on the graph.
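To make the dynamics concrete, here is a minimal sketch of the random transposition walk on the complete graph, the mean-field case of the process above (the function name and interface are my own, purely for illustration):

```python
import random

def random_transposition_walk(n, steps, seed=None):
    """Interchange process on the complete graph K_n: pebble k starts
    on vertex k; each step picks a uniformly random pair of vertices
    and transposes the pebbles sitting on them."""
    rng = random.Random(seed)
    pebbles = list(range(n))
    for _ in range(steps):
        # i == j is allowed, which makes the step lazy (nothing moves)
        i, j = rng.randrange(n), rng.randrange(n)
        pebbles[i], pebbles[j] = pebbles[j], pebbles[i]
    return pebbles  # the resulting permutation of {0, ..., n-1}
```

For the deck-of-cards case n = 52, the Diaconis–Shahshahani result says roughly (1/2) n log n such transpositions suffice to mix, with a cutoff at that time.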
In computer science, especially machine learning, which I have recently been involved in, there is a famous iterative algorithm for solving support vector machine problems called SMO (sequential minimal optimization), introduced by John Platt in 1998. Recall that the dual formulation of SVM requires solving a quadratic programming problem in n variables, one for each data point. The idea of SMO is to pick a pair of variables at each step and optimize the objective along the one-dimensional feasible segment that keeps all other variables constant; the linear equality constraint leaves exactly one degree of freedom once the rest are fixed, so a pair is the smallest working set possible. This is in fact more similar to Kac's model. Unfortunately I don't see a way to translate the mixing time results for the latter into any statement about SMO. My personal challenge, then, is to find other instances where random transposition, in either discrete or continuous form, plays a useful role, since it seems clear that many randomized algorithms whose object is to diffuse through an entire symmetric group should be decomposable into these pairwise operations.
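As a sketch of that pairwise structure, here is an SMO-style solver that picks its pair of variables uniformly at random, rather than by Platt's heuristic working-set rules; the function name and the toy linear-kernel setup are my own assumptions, not Platt's original code:

```python
import numpy as np

def smo_random_pairs(X, y, C=1.0, iters=500, seed=0):
    """Solve the SVM dual (linear kernel) by random pairwise updates:
    maximize sum(a) - 0.5 * sum_ij a_i a_j y_i y_j K_ij
    subject to 0 <= a_i <= C and sum(a * y) == 0."""
    rng = np.random.default_rng(seed)
    n = len(y)
    K = X @ X.T                      # linear kernel Gram matrix
    alpha = np.zeros(n)
    b = 0.0
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        f = (alpha * y) @ K + b      # decision values at all points
        Ei, Ej = f[i] - y[i], f[j] - y[j]
        eta = 2 * K[i, j] - K[i, i] - K[j, j]   # second derivative along the segment
        if eta >= 0:
            continue
        # box constraints restrict the pair to a segment [L, H]
        if y[i] != y[j]:
            L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
        else:
            L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
        if L == H:
            continue
        ai_old, aj_old = alpha[i], alpha[j]
        # unconstrained 1-D optimum, then clip to the segment
        alpha[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
        alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
        # refit the threshold from the two KKT-based estimates
        b1 = b - Ei - y[i] * (alpha[i] - ai_old) * K[i, i] \
               - y[j] * (alpha[j] - aj_old) * K[i, j]
        b2 = b - Ej - y[i] * (alpha[i] - ai_old) * K[i, j] \
               - y[j] * (alpha[j] - aj_old) * K[j, j]
        b = (b1 + b2) / 2
    w = (alpha * y) @ X              # recover the primal weight vector
    return w, b
```

On a small linearly separable dataset, a few hundred random-pair updates are enough to separate the classes, e.g. `w, b = smo_random_pairs(X, y)` followed by `np.sign(X @ w + b)`.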