First it is useful to have the Hall inner product on $\Lambda$, the space of all symmetric functions in infinitely many variables. It is defined on the complete homogeneous and monomial bases by

$$\langle h_\lambda, m_\mu \rangle = \delta_{\lambda\mu}. \tag{13}$$
To show it’s symmetric, we check it on the basis $\{h_\lambda\}$. Then it follows from the fact that each transition coefficient $N_{\lambda\mu}$ from $\{h_\lambda\}$ to $\{m_\lambda\}$ is symmetric in $\lambda$ and $\mu$, because it counts the number of natural-number-valued matrices with row and column sums prescribed by $\lambda$ and $\mu$, and transposing such a matrix swaps the two. Whenever two sets of basis functions $\{u_\lambda\}$, $\{v_\lambda\}$ satisfy (13), i.e. $\langle u_\lambda, v_\mu \rangle = \delta_{\lambda\mu}$, we say they are dual bases. In that case, we have
$$f = \sum_\lambda \langle f, v_\lambda \rangle\, u_\lambda \qquad \text{for all } f \in \Lambda.$$
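The counting claim above can be checked by brute force: the number of $\mathbb{N}$-valued matrices with row sums $\lambda$ and column sums $\mu$ is symmetric in $(\lambda, \mu)$, since transposition exchanges rows and columns. A throwaway pure-Python sketch (the helper names `compositions` and `count_matrices` are mine):

```python
from itertools import product

def compositions(total, length):
    """All length-tuples of nonnegative integers summing to total."""
    if length == 1:
        return [(total,)]
    return [(first,) + rest
            for first in range(total + 1)
            for rest in compositions(total - first, length - 1)]

def count_matrices(row_sums, col_sums):
    """Number of natural-number matrices with prescribed row and column sums."""
    rows = [compositions(r, len(col_sums)) for r in row_sums]
    return sum(
        1
        for mat in product(*rows)
        if all(sum(col) == c for col, c in zip(zip(*mat), col_sums))
    )

# Symmetry of the transition coefficients, checked on all partitions of 4.
partitions_of_4 = [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
for lam in partitions_of_4:
    for mu in partitions_of_4:
        assert count_matrices(lam, mu) == count_matrices(mu, lam)

print(count_matrices((2, 1, 1), (2, 2)))  # → 4
```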
Proposition 4
$$\{u_\lambda\},\ \{v_\lambda\}\ \text{are dual bases} \iff \sum_\lambda u_\lambda(x)\, v_\lambda(y) = \prod_{i,j} \frac{1}{1 - x_i y_j}.$$
The proof is somewhat opaque, but straightforward. *Proof:* By change of basis, we have $u_\lambda = \sum_\rho a_{\lambda\rho}\, p_\rho$ and $v_\mu = \sum_\sigma b_{\mu\sigma}\, p_\sigma$, for some coefficients $a_{\lambda\rho}$ and $b_{\mu\sigma}$. We use Greek-letter indices because the partitions are not totally ordered a priori. The fact that $\{u_\lambda\}$ and $\{v_\mu\}$ are dual translates to $A Z B^T = I$, where $A = (a_{\lambda\rho})$, $B = (b_{\mu\sigma})$ and $Z = \operatorname{diag}(z_\rho)$. The fact that $\{p_\rho\}$ and $\{p_\rho / z_\rho\}$ are dual to each other is equivalent to $\langle p_\rho, p_\sigma \rangle = z_\rho\, \delta_{\rho\sigma}$. Now the proposition is true for the pair $p_\rho$ and $p_\rho / z_\rho$, and $A Z B^T = I$ iff $B^T A Z = I$ (a one-sided inverse of a square matrix is two-sided), i.e. iff $A^T B = Z^{-1}$; thus we have

$$\sum_\lambda u_\lambda(x)\, v_\lambda(y) = \sum_{\rho, \sigma} (A^T B)_{\rho\sigma}\, p_\rho(x)\, p_\sigma(y) = \sum_\rho \frac{p_\rho(x)\, p_\rho(y)}{z_\rho} = \prod_{i,j} \frac{1}{1 - x_i y_j}.$$
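Proposition 4 with the dual pair $\{p_\rho\}$, $\{p_\rho / z_\rho\}$ can be sanity-checked numerically: truncating the sum over partitions at a modest degree should already match the product side to machine precision for small alphabets. A minimal pure-Python sketch (the helpers `partitions`, `z` and `p` are my names):

```python
from math import factorial, prod

def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        return [()]
    out = []
    for k in range(min(n, max_part), 0, -1):
        out += [(k,) + rest for rest in partitions(n - k, k)]
    return out

def z(lam):
    """z_lambda = prod_i i^{m_i} m_i!, with m_i the multiplicity of i in lambda."""
    mult = {}
    for part in lam:
        mult[part] = mult.get(part, 0) + 1
    return prod(i ** m * factorial(m) for i, m in mult.items())

def p(lam, xs):
    """Power sum p_lambda evaluated on the finite alphabet xs."""
    return prod(sum(xi ** k for xi in xs) for k in lam)

x = (0.20, 0.10)
y = (0.15, 0.05)

# Truncated sum over all partitions of size < 25.
lhs = sum(p(lam, x) * p(lam, y) / z(lam)
          for n in range(0, 25) for lam in partitions(n))
rhs = prod(1.0 / (1.0 - xi * yj) for xi in x for yj in y)
print(lhs, rhs)  # agree to machine precision
```

Since every product $x_i y_j$ here is at most $0.03$, the discarded tail is far below double precision.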
Persi mentioned that it would be nice to have a more conceptual proof of the above. Luckily I found one, and will present it below. *Proof (second version):* We define two integral kernels by, for $x = (x_1, x_2, \dots)$, $y = (y_1, y_2, \dots)$:

$$K_{u,v}(x, y) = \sum_\lambda u_\lambda(x)\, v_\lambda(y), \qquad K_p(x, y) = \sum_\rho \frac{p_\rho(x)\, p_\rho(y)}{z_\rho}.$$

If we fix an orthonormal basis $\{e_\lambda\}$ of $\Lambda$ under the Hall inner product, it determines an isomorphism $\Lambda \cong \ell^2$ given by $f \mapsto (\langle f, e_\lambda \rangle)_\lambda$, and the operators $U\colon e_\lambda \mapsto u_\lambda$, $V\colon e_\lambda \mapsto v_\lambda$ can be written in terms of $\{e_\lambda\}$ as square matrices. The condition $\langle u_\lambda, v_\mu \rangle = \delta_{\lambda\mu}$ translates to $V^T U = I$. Hence we must have $U V^T = I$ by uniqueness of group inverse. But $U V^T$ can be represented as an integral kernel

$$(U V^T)(x, y) = \sum_\lambda u_\lambda(x)\, v_\lambda(y) = K_{u,v}(x, y),$$

and for $u_\lambda, v_\lambda$ coming from $p_\rho$, $p_\rho / z_\rho$, $K_{u,v}$ is computed to be $\prod_{i,j} (1 - x_i y_j)^{-1}$. So it must be the same for a general dual pair $\{u_\lambda\}, \{v_\lambda\}$ as well, i.e., $\prod_{i,j} (1 - x_i y_j)^{-1}$ is the identity kernel on $\Lambda$. As a consequence, we see that

$$\Big\langle f(y),\ \prod_{i,j} \frac{1}{1 - x_i y_j} \Big\rangle_y = f(x)$$
for all symmetric functions $f$. I checked this in a single variable using the Cauchy integral formula and it worked:

assuming $|x| < |y|$.

Using the fact that $\prod_{i,j} (1 - x_i y_j)^{-1} = \exp\big( \sum_{k \ge 1} p_k(x)\, p_k(y) / k \big)$, we have

Corollary 5

- The normalized power sum polynomials $p_\lambda / \sqrt{z_\lambda}$ form an orthonormal basis, whose existence implies that the Hall inner product is positive definite. Thus the $p_\lambda$'s are self-dual (up to the scalar $z_\lambda$).
- The involution $\omega$ is a self-adjoint isometry.

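The exponential form of the Cauchy kernel, $\prod_{i,j} (1 - x_i y_j)^{-1} = \exp\big( \sum_{k \ge 1} p_k(x) p_k(y) / k \big)$, which drives the corollary, is easy to spot-check numerically. A throwaway sketch (the helper `p_k` is my name):

```python
from math import exp, prod

def p_k(k, xs):
    """Power sum p_k evaluated on the finite alphabet xs."""
    return sum(xi ** k for xi in xs)

x = (0.3, 0.1)
y = (0.2, 0.05)

# Truncate the sum over k; terms decay geometrically since |x_i y_j| < 1.
lhs = exp(sum(p_k(k, x) * p_k(k, y) / k for k in range(1, 200)))
rhs = prod(1.0 / (1.0 - xi * yj) for xi in x for yj in y)
print(lhs, rhs)  # agree to machine precision
```

The identity itself is just $-\sum_{i,j} \log(1 - x_i y_j)$ expanded as a geometric-logarithm series and resummed over $k$ first.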
Again the second statement is checked on the $p_\lambda$'s, using $\omega(p_k) = (-1)^{k-1} p_k$. Secret: the Hall inner product actually comes from the inner product on $L^2(U(n))$. This is perhaps as natural as it could get. Recall that the set of conjugation-invariant functions on $U(n)$ consists of those $f$ with $f(U V U^{-1}) = f(V)$ for any $U$. They are necessarily symmetric functions of the eigenvalues of $V$, because being conjugation invariant implies they are functions of the eigenvalue vector $\theta$ of $V$, and since one can conjugate by permutation matrices, $f$ has to be symmetric in the components of $\theta$ as well. A FACT to be proved later:
$$\int_{U(n)} f(V)\, dV = \frac{1}{n!\, (2\pi)^n} \int_{[0, 2\pi]^n} f(e^{i\theta_1}, \dots, e^{i\theta_n}) \prod_{1 \le j < k \le n} \big| e^{i\theta_j} - e^{i\theta_k} \big|^2\, d\theta.$$
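As a numerical aside (mine, not in the notes): one can sample the Haar ensemble by QR-factorizing a complex Ginibre matrix with the standard phase correction, and check a classical moment, $\mathbb{E}\, |\operatorname{Tr} U|^2 = 1$ for Haar $U \in U(n)$, $n \ge 1$; the same number can be obtained from the eigenvalue density, though here we only exercise the sampler. A sketch assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    """Sample from Haar measure on U(n) via QR of a complex Ginibre matrix."""
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    # Fix the phase ambiguity of QR so the distribution is exactly Haar.
    d = np.diag(r)
    return q * (d / np.abs(d))

# Monte Carlo estimate of E|Tr U|^2, which equals 1 for every n >= 1.
n, samples = 4, 20000
est = np.mean([abs(np.trace(haar_unitary(n))) ** 2 for _ in range(samples)])
print(est)  # close to 1.0
```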
Thus we see the joint distribution of the eigenvalues associated with the Haar-distributed unitary matrix ensemble is given by the quadratic Vandermonde factor $\prod_{j < k} |e^{i\theta_j} - e^{i\theta_k}|^2$ in the integrand. Quadratic because the unitary ensemble corresponds to invariance parameter $\beta = 2$; for the orthogonal or symplectic ensembles, $\beta$ will be $1$ and $4$ respectively, but there are other changes to the joint distribution functions as well. Next we hasten to define Schur functions combinatorially, which are quite a mouthful. For each partition $\lambda$ one associates a tableau (pl. tableaux) of shape $\lambda$ (look up Wikipedia), then:

Definition 6. A semi-standard Young tableau (SSYT) is an assignment of the boxes of $\lambda$ with natural numbers (repetition allowed) that are weakly increasing from left to right in each row, and strictly increasing down each column. Also define $\lambda / \mu$ (pronounced "lambda minus mu") by removing the boxes of $\mu$ from $\lambda$, assuming $\mu \subseteq \lambda$. We can similarly define semi-standard Young tableaux associated with $\lambda / \mu$. Then the Schur function indexed by $\lambda$ is defined by

$$s_\lambda = \sum_{T} x^T,$$

where $T$ ranges over all SSYT of shape $\lambda$, $x^T = \prod_i x_i^{t_i}$, and $t_i$ is the number of boxes in $\lambda$ with label $i$ in $T$.

Facts:

- $s_\lambda$ is a symmetric function (should be totally not obvious).
- The functions $s_\lambda$, where $\lambda$ is a partition of anything (i.e. of any size) with at most $n$ parts, give the irreducible polynomial characters of $U(n)$. The restriction on the number of parts is clearly necessary because $U \in U(n)$ only has $n$ eigenvalues; for example, if $\lambda$ has more than $n$ parts, then there won't be any feasible SSYT admissible in the defining sum of $s_\lambda$ as a function of $n$ variables (the first column would have to strictly increase through more than $n$ labels), so $s_\lambda(x_1, \dots, x_n)$ vanishes identically.
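Both facts can be verified by brute force for small shapes: enumerate all admissible fillings and sum the monomials. A throwaway pure-Python sketch (the helpers `ssyt` and `schur` are my names):

```python
from itertools import product

def ssyt(shape, n):
    """All semistandard Young tableaux of the given shape with entries in 1..n."""
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    tableaux = []
    for entries in product(range(1, n + 1), repeat=len(cells)):
        t = dict(zip(cells, entries))
        rows_ok = all(t[(r, c)] <= t[(r, c + 1)]
                      for (r, c) in cells if (r, c + 1) in t)
        cols_ok = all(t[(r, c)] < t[(r + 1, c)]
                      for (r, c) in cells if (r + 1, c) in t)
        if rows_ok and cols_ok:
            tableaux.append(t)
    return tableaux

def schur(shape, xs):
    """s_lambda(x_1, ..., x_n) as the sum over SSYT of prod_i x_i^{#i's in T}."""
    total = 0.0
    for t in ssyt(shape, len(xs)):
        term = 1.0
        for label in t.values():
            term *= xs[label - 1]
        total += term
    return total

# Fact 1: s_lambda is symmetric; permuting the variables leaves it unchanged.
assert abs(schur((2, 1), (0.7, 0.3, 0.5)) - schur((2, 1), (0.5, 0.7, 0.3))) < 1e-12
# Fact 2: with more parts than variables there is no admissible SSYT, so s_lambda = 0.
assert ssyt((1, 1, 1, 1), 3) == []
print(len(ssyt((2, 1), 3)))  # → 8 tableaux of shape (2,1) with entries in {1,2,3}
```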