Algebraic combinatorics lecture 6: RS(K), Dual RSK, and analytic definition of Schur functions

First, we have an easy result.

Proposition 1

\displaystyle  h_\mu = \sum_{\lambda \vdash n} K_{\lambda,\mu} s_\lambda \ \ \ \ \ (1)

where {K_{\lambda,\mu}} is the number of SSYT of shape {\lambda} and type {\mu}.

Proof: We know that

\displaystyle  \prod_{i,j} (1-x_i y_j)^{-1} = \sum_\lambda m_\lambda(x) h_\lambda(y) \stackrel{\rm RSK}{=} \sum_\lambda s_\lambda(x) s_\lambda(y) \ \ \ \ \ (2)

Equating the coefficients of {m_\lambda(x)} on both sides, we get {h_\lambda(y)} on the LHS and {\sum_\mu K_{\mu,\lambda} s_\mu(y)} on the RHS (since {s_\mu = \sum_\nu K_{\mu,\nu} m_\nu}). \Box
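To make the definition concrete, here is a minimal brute-force Kostka-number counter in Python (a sketch of my own, not from the lecture; all names are mine). It enumerates SSYT of a given shape and type directly, so it is only practical for small shapes.

def ssyt(shape, content):
    # Enumerate SSYT of the given partition `shape` whose entries are
    # 1..len(content), with the value v used content[v-1] times (the type).
    # Brute force with pruning, filling boxes row by row.
    def fill(boxes, rows, used):
        if not boxes:
            yield [tuple(r) for r in rows]
            return
        (i, j) = boxes[0]
        for v in range(1, len(content) + 1):
            if used[v] == content[v - 1]:
                continue                       # value v already used up
            if j > 0 and rows[i][j - 1] > v:
                continue                       # rows must weakly increase
            if i > 0 and rows[i - 1][j] >= v:
                continue                       # columns must strictly increase
            rows[i].append(v); used[v] += 1
            yield from fill(boxes[1:], rows, used)
            rows[i].pop(); used[v] -= 1
    boxes = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    yield from fill(boxes, [[] for _ in shape],
                    {v: 0 for v in range(1, len(content) + 1)})

def kostka(lam, mu):
    # K_{lam,mu}: the number of SSYT of shape lam and type mu.
    return sum(1 for _ in ssyt(lam, mu))

# K_{(2,1),(1,1,1)} = 2: the two standard Young tableaux of shape (2,1).
assert kostka((2, 1), (1, 1, 1)) == 2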

Thus we also have {\sum_\lambda K_{\lambda,\mu} K_{\lambda,\nu} = N_{\mu,\nu} := \langle h_\mu, h_\nu \rangle}, the number of {{\mathbb N}}-matrices with row sum {\mu} and column sum {\nu}.
It is known that computing {N_{\mu,\nu}} is a {\#{\rm P}}-complete problem (M. Dyer). But does that imply that computing {K_{\lambda,\mu}} is also {\#{\rm P}}-complete? Nobody knows.
Important special case: {K_{\lambda,1^n}}, the number of standard Young tableaux of shape {\lambda}, equals {f(\lambda)}, which counts many things, in particular the dimension of the irreducible representation of {S_n} corresponding to {\lambda} (as we mentioned in the previous lecture). Consider

\displaystyle  h_{1^n} = (h_1)^n = (x_1 + \ldots + x_m)^n =\sum_\lambda K_{\lambda,1^n} s_\lambda. \ \ \ \ \ (3)

Looking at the coefficient of {x_1 \cdots x_n} on both sides, we get {n!} on the LHS and {\sum_{\lambda \vdash n} f(\lambda)^2} on the RHS, because {s_\lambda = \sum_\mu K_{\lambda,\mu} m_\mu} (look at the {\mu = 1^n} term).
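Since {f(\lambda) = K_{\lambda,1^n}}, this identity is easy to check numerically with the kostka sketch above (the partitions helper below is again my own):

def partitions(n, largest=None):
    # Yield the partitions of n as weakly decreasing tuples.
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest or n), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

# Check n! = sum of f(lam)^2 over partitions of n, for n = 4.
n = 4
assert sum(kostka(lam, (1,) * n) ** 2 for lam in partitions(n)) == 24  # = 4!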

Also, for any finite group {G}, we have {|G| = \sum_\chi \chi(1)^2}, summing over the irreducible characters {\chi} of {G}.

The next topic is to remove the K from RSK (a joke, of course), i.e., to reduce to 2-line arrays without repeated values. This is achieved by standardizing the 2-line array with repeated values, and it is done in the most naive way (a code sketch follows the list):

  1. Replace the top row {i_1, \ldots, i_n} by {1, \ldots, n} (write {j_1, \ldots, j_n} for the bottom row).
  2. Suppose there were {c_1} ones in the bottom row; replace them, from left to right, by {1, \ldots, c_1}. If there were {c_2} twos, replace them from left to right by {c_1 + 1, \ldots, c_1 + c_2}, and so on: the most natural way to fill the bottom row with distinct natural numbers.
  3. Perform the Schensted algorithm on the standardized 2-line array to obtain {\hat{P}} and {\hat{Q}}.
  4. One can then recover the original SSYTs {P} and {Q} by replacing each {k} in {\hat{P}} and {\hat{Q}} by {j_k} and {i_k}, respectively.
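Here is a short sketch of the standardization step (my own code and naming; it assumes the two-line array is already sorted in the usual RSK order, i.e., top row weakly increasing with ties broken by the bottom row):

def standardize(top, bottom):
    # Step 1: the top row i_1,...,i_n simply becomes 1,...,n.
    n = len(top)
    new_top = list(range(1, n + 1))
    # Step 2: relabel the bottom row by 1,...,n, smallest values first,
    # equal values taken from left to right (Python's sort is stable).
    order = sorted(range(n), key=lambda k: bottom[k])
    new_bottom = [0] * n
    for rank, k in enumerate(order, start=1):
        new_bottom[k] = rank
    return new_top, new_bottom

# Example: the array 1 1 2 / 1 2 1 standardizes to 1 2 3 / 1 3 2.
assert standardize((1, 1, 2), (1, 2, 1)) == ([1, 2, 3], [1, 3, 2])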

This doesn’t seem that interesting.
Next we explore the symmetries of the RSK algorithm:

Proposition 2 Let the {{\mathbb N}}-matrix {A} correspond via RSK to the pair of SSYTs {(P,Q)}; then {A^T} corresponds to {(Q,P)}.

Persi didn’t give a proof in class because it wasn’t so inspiring, but I find the following geometric proof in Stanley worth mentioning; note that it only works for permutation matrices {A}, corresponding to 2-line arrays with no repeated values in either row. Proof: Consider the following procedure:

  1. Draw an {n \times n} grid.
  2. Replace the {1}s in the permutation matrix {A} by X’s in the corresponding positions of the grid.
  3. Label all the corners along the left and bottom edges of the grid (there are {2n+1} of them) with {\emptyset}.
  4. Inductively label the interior corners using four local rules, which we do not detail because the book didn’t address the well-definedness of these rules very well, embedded several claims such as {\lambda = \mu} necessarily without explanation, and also seems to contain a typo in the second-to-last entry of the top row of table (7.50) (it should be {22} instead of {211}).
  5. The labels in the last column form a sequence of partitions increasing under containment, each obtained from the previous by adding a single box (reading a label like {21} as the partition {(2,1)}, i.e., {1^1 2^1}). Insert {i} into the box where the {i}th partition differs from the {(i-1)}th. This gives a standard Young tableau; call it {P}. Similarly, call the SYT obtained from the top row {Q}. This indeed gives the RSK correspondence.

Now the symmetry of RSK with respect to transposition is immediate, as the procedure above is applied to the matrix {A} itself, and transposing {A} reflects the diagram, swapping the roles of the top row and the last column. \Box

This in particular implies that if {\omega \in S_n} corresponds to {(P,Q)}, then {\omega^{-1}} corresponds to {(Q,P)}, since the permutation matrix of {\omega^{-1}} is the transpose of that of {\omega}.
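Proposition 2 is easy to test numerically. Below is a minimal RSK row-insertion sketch together with the transpose check; the code and all names are my own, not from the lecture.

from bisect import bisect_right

def rsk(pairs):
    # Ordinary RSK on a two-line array given as lexicographically sorted
    # (i, j) pairs; returns the insertion/recording tableaux (P, Q) as rows.
    P, Q = [], []
    for i, j in pairs:
        r = 0
        while True:
            if r == len(P):                    # fell off the bottom: new row
                P.append([j]); Q.append([i])
                break
            pos = bisect_right(P[r], j)        # leftmost entry strictly > j
            if pos == len(P[r]):               # nothing to bump: j sits here
                P[r].append(j); Q[r].append(i)
                break
            P[r][pos], j = j, P[r][pos]        # bump, then insert into next row
            r += 1
    return P, Q

def matrix_to_pairs(A):
    # Two-line array of an N-matrix: a_{ij} copies of the column (i+1, j+1).
    return [(i + 1, j + 1) for i, row in enumerate(A)
            for j, a in enumerate(row) for _ in range(a)]

# If RSK(A) = (P, Q), then RSK(A^T) = (Q, P).
A = [[1, 0, 2], [0, 2, 0], [1, 1, 0]]
At = [list(col) for col in zip(*A)]
P, Q = rsk(matrix_to_pairs(A))
Pt, Qt = rsk(matrix_to_pairs(At))
assert P == Qt and Q == Pt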

Corollary 3

  1. {A} is symmetric if and only if {P = Q}; in that case {\rm{col}(A) = \rm{row}(A) = \rm{type}(P) = \rm{type}(Q)}.
  2. {\omega^2 =1 \in S_n} implies {P = Q}.


Corollary 4

\displaystyle  \frac{1}{\prod_i (1-x_i) \prod_{i < j} (1-x_i x_j)} = \sum_\lambda s_\lambda \ \ \ \ \ (4)

Proof: Expanding the LHS, we get

\displaystyle   \prod_i [\sum_{a_{ii} =0}^\infty x_i^{a_{ii}}] \prod_{i < j} [\sum_{a_{ij} =0}^\infty (x_i x_j)^{a_{ij}}] \ \ \ \ \ (5)

which we have suggestively separated into diagonal and off-diagonal terms.
The {x^\alpha} coefficient of the left hand side is thus the number of ways of choosing the {a_{ii}}’s and {a_{ij}}’s ({i < j}) such that, for each {i}, {\sum_{j < i} a_{ji} + a_{ii} + \sum_{j > i} a_{ij} = \alpha_i}; the first summand appears because each off-diagonal factor of (5) contributes to both {x_i} and {x_j}. Setting {a_{ji} := a_{ij}} for {i < j}, this is exactly the number of symmetric {{\mathbb N}}-matrices with row sum {\alpha}.
The right hand side gives the number of SSYT of type {\alpha}, because we are summing over all shapes {\lambda} (and {s_\lambda = \sum_\mu K_{\lambda,\mu} m_\mu}). Finally, by Corollary 3, symmetric {{\mathbb N}}-matrices with row sum {\alpha} are in bijection with SSYT of type {\alpha}, so these two numbers are equal. \Box
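As a small sanity check of this coefficient count, here is a brute-force comparison reusing kostka and partitions from the sketches above (symmetric_matrices_with_row_sum is my own helper):

from itertools import product

def symmetric_matrices_with_row_sum(alpha):
    # Count symmetric N-matrices whose i-th row sums to alpha[i],
    # by brute force over the upper triangle; only for tiny alpha.
    m, count = len(alpha), 0
    upper = [(i, j) for i in range(m) for j in range(i, m)]
    for vals in product(range(max(alpha) + 1), repeat=len(upper)):
        a = dict(zip(upper, vals))
        rows = [sum(a[(min(i, j), max(i, j))] for j in range(m))
                for i in range(m)]
        count += rows == list(alpha)
    return count

# Both sides of (4) at the coefficient x^alpha, alpha = (2, 1, 1): both are 5.
alpha = (2, 1, 1)
lhs = symmetric_matrices_with_row_sum(alpha)
rhs = sum(kostka(lam, alpha) for lam in partitions(sum(alpha)))
assert lhs == rhs == 5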

2. Dual RSK

Here we work with {0/1} matrices, in anticipation of the duality between bosonic and fermionic spaces, or that between the {h_\lambda}’s and the {e_\lambda}’s. As an example, from the following matrix

\displaystyle  \begin{array}{ccc} 1 & 0 & 1\\ 0 & 1 & 0\\ 1 & 0 & 1 \end{array} \ \ \ \ \ (6)

we obtain the two-line array

\displaystyle  \begin{array}{ccccc} 1 & 1 & 2 & 3 & 3\\ 1 & 3 & 2 & 1 & 3 \end{array} \ \ \ \ \ (7)

from which we perform an RSK-style insertion in which the inserted number bumps the smallest entry of the row that is greater than or equal to it (as opposed to the smallest entry strictly greater than it, as in ordinary RSK); thus equal entries bump each other:

\displaystyle  1 \Rightarrow \begin{array}{cc}1 & 3 \end{array} \Rightarrow \begin{array}{cc} 1 & 2\\ 3 &\end{array} \Rightarrow \begin{array}{cc} 1 & 2\\ 1 & \\ 3 & \end{array} \Rightarrow \begin{array}{ccc} 1 & 2 & 3\\ 1 & & \\ 3 & & \end{array} = P \ \ \ \ \ (8)

with corresponding {Q} tableau

\displaystyle  1 \Rightarrow \begin{array}{cc}1 & 1 \end{array} \Rightarrow \begin{array}{cc} 1 & 1\\ 2 &\end{array} \Rightarrow \begin{array}{cc} 1 & 1\\ 2 & \\ 3 & \end{array} \Rightarrow \begin{array}{ccc} 1 & 1 & 3\\ 2 & & \\ 3 & & \end{array} = Q \ \ \ \ \ (9)

Theorem 5 There is a 1-1 correspondence between {0/1} matrices {A} and pairs {(P,Q)} of tableaux of the same shape such that {P^t} and {Q} are SSYTs. Furthermore, {\rm{row}(A) = \rm{type}(Q)} and {\rm{col}(A) = \rm{type}(P)}.

The proof is analogous to the bosonic case, and authors usually omit it.

Corollary 6

\displaystyle  \prod_{ij} (1+ x_i y_j) = \sum_\lambda s_\lambda(x) s_{\lambda^t}(y) \ \ \ \ \ (10)

Proof: The same reasoning as before: the {x^\alpha y^\beta} coefficient on the left hand side counts the number of {0/1} matrices with row sum {\alpha} and column sum {\beta} (note there is no inverse power on the left, so each factor contributes at most one {x_i y_j}). The coefficient on the right hand side counts pairs of SSYTs whose shapes are transposes of each other, with type {\alpha} for one and {\beta} for the other. \Box

Corollary 7 Recall the involution {\omega: \Lambda \rightarrow \Lambda}, with {\omega(h_\lambda) = e_\lambda}. We have

\displaystyle  \omega(s_\lambda) = s_{\lambda^t}. \ \ \ \ \ (11)

Proof: We already proved several lectures ago that

\displaystyle  \prod_{i,j} (1-x_i y_j)^{-1} = \sum_\lambda m_\lambda(x) h_\lambda(y), \qquad \prod_{i,j} (1+x_i y_j) = \sum_\lambda m_\lambda(x) e_\lambda(y). \ \ \ \ \ (12)

By orthonormality of {s_\lambda}, we also know that

\displaystyle  \sum_\lambda s_\lambda(x) s_\lambda(y) = \prod_{ij} (1- x_i y_j)^{-1}. \ \ \ \ \ (13)

Now apply {\omega_y} ({\omega} acting on the {y} variables only) to (13). On the left it gives {\sum_\lambda s_\lambda(x)\, \omega(s_\lambda)(y)}; on the right, by (12), it turns {\sum_\lambda m_\lambda(x) h_\lambda(y)} into {\sum_\lambda m_\lambda(x) e_\lambda(y) = \prod_{i,j}(1+x_i y_j)}, which equals {\sum_\lambda s_\lambda(x) s_{\lambda^t}(y)} by the previous corollary. Comparing coefficients of the basis {s_\lambda(x)} finishes the proof. \Box

3. Classical definition of Schur functions

Let {\alpha = (\alpha_1, \alpha_2, \ldots)} be a composition with {\alpha_i = 0} for {i > n}, and let {\omega \in S_n} act on {x^\alpha} by

\displaystyle  \omega(x^\alpha) = x_1^{\alpha_{\omega(1)}} \ldots x_n^{\alpha_{\omega(n)}}. \ \ \ \ \ (14)

Define

\displaystyle  a_\alpha(x_1, \ldots, x_n) = \sum_\omega \omega(x^\alpha) \epsilon(\omega) = \det(x_i^{\alpha_j}) \ \ \ \ \ (15)

where {\epsilon} is the signature. Then we have

Theorem 8 Let {\delta = (n-1, n-2, \ldots, 1, 0)}, and let {\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_{l(\lambda)}, 0, \ldots, 0)} be a partition viewed as a composition of length {n}. Then

\displaystyle  s_\lambda(x_1, \ldots, x_n) = \frac{a_{\lambda + \delta}}{a_\delta}. \ \ \ \ \ (16)

This apparently was first known to Cauchy by fiddling. Note that {a_\delta = \prod_{i<j} (x_i - x_j)} is the Vandermonde determinant, and every {a_{\lambda+\delta}} is alternating, hence divisible by it, so the quotient is indeed a symmetric polynomial.
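Theorem 8 can be spot-checked at a rational point, comparing the determinant ratio with the tableau definition {s_\lambda = \sum_\alpha K_{\lambda,\alpha} x^\alpha} (reusing kostka from the first sketch; everything else here is my own exact-arithmetic scaffolding):

from fractions import Fraction
from itertools import permutations

def det(M):
    # Determinant by permutation expansion: exact, and fine for tiny n.
    n, total = len(M), Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        term = Fraction(sign)
        for i in range(n):
            term *= M[i][perm[i]]
        total += term
    return total

def schur_bialternant(lam, xs):
    # s_lam(xs) via (16): a_{lam+delta} / a_delta, with a_alpha as in (15).
    n = len(xs)
    lam = tuple(lam) + (0,) * (n - len(lam))
    delta = range(n - 1, -1, -1)
    a = lambda alpha: det([[Fraction(x) ** e for e in alpha] for x in xs])
    return a([l + d for l, d in zip(lam, delta)]) / a(list(delta))

def compositions(n, parts):
    # Weak compositions of n into the given number of parts.
    if parts == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, parts - 1):
            yield (first,) + rest

def schur_tableaux(lam, xs):
    # s_lam(xs) = sum over types alpha of K_{lam,alpha} * xs^alpha.
    total = Fraction(0)
    for alpha in compositions(sum(lam), len(xs)):
        term = Fraction(kostka(lam, alpha))
        for x, e in zip(xs, alpha):
            term *= Fraction(x) ** e
        total += term
    return total

# s_{(2,1)}(2, 3, 5) = 280 both ways.
assert schur_bialternant((2, 1), (2, 3, 5)) == schur_tableaux((2, 1), (2, 3, 5)) == 280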
