Eric Rains’ proof that the eigenvalues of a high power of a Haar-distributed random matrix are independent and uniformly distributed

One of my main research objectives has been to understand random walks on the compact simple Lie groups, most notably the special orthogonal groups. However, the unitary groups have much simpler structure in terms of their joint eigenvalue distributions. This is how I ran into a paper of Eric Rains’, High powers of random elements of compact Lie groups, published in Probability Theory and Related Fields. One of the central results of the paper is the calculation of the joint eigenvalue density of U^k, where k \ge n and U is a uniformly chosen element of the unitary group U(n). He needed the joint distribution of the kth powers of the eigenvalues in order to compute the second moment of the number of eigenvalues in a given arc (\theta - \alpha, \theta + \alpha) \subset \mathbb{T}^1 of the unit circle. The case k < n had been treated by Diaconis and Shahshahani in “On the eigenvalues of random matrices”. The surprising discovery in Eric’s paper is that for k \ge n, the joint distribution is exactly that of n points chosen independently and uniformly on the circle! This greatly facilitates probabilistic calculations involving the spectrum of high powers of random matrices. Here we give the full proof of this result.
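Before the proof, here is a quick numerical sanity check of the claim (my own illustration, not from Rains’ paper). If the eigenvalues \mu_1, \ldots, \mu_n of U^k are i.i.d. uniform on the circle, then E|\mathrm{tr}(U^k)|^2 = E|\sum_i \mu_i|^2 = n, since all the cross terms E[\mu_i \bar{\mu}_j] vanish. A minimal Python sketch, sampling Haar matrices by QR-factorizing a complex Ginibre matrix and fixing the phases:

import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    # QR of a complex Ginibre matrix; fixing the phases of R's diagonal
    # makes Q exactly Haar distributed on U(n).
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))  # multiply column j of q by the phase of r[j, j]

n, k, trials = 4, 5, 20000  # k >= n, so the theorem applies
# tr(U^k) is the sum of the kth powers of the eigenvalues of U.
est = np.mean([abs(np.trace(np.linalg.matrix_power(haar_unitary(n), k))) ** 2
               for _ in range(trials)])
print(est)  # should be close to n = 4

Of course this checks only one statistic; the theorem pins down the entire joint distribution.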

Our starting point is the joint eigenvalue density of a Haar-distributed random element of U(n):

\displaystyle p(\lambda_1,\ldots, \lambda_n) = \frac{1}{n!} \prod_{1 \le i < j \le n} |\lambda_i - \lambda_j|^2

Here \lambda_i \in \mathbb{T}^1 and |\lambda|^2 = \lambda \bar{\lambda}, where \bar{\lambda} denotes the complex conjugate of \lambda. This formula can be derived from the Weyl integration formula, which in turn is an advanced exercise in the change of variables formula, together with throwing away some sets of measure 0. Excellent references for its derivation are Daniel Bump’s book Lie Groups and Frank Adams’ Lectures on Lie Groups.
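To get a feel for this density, it helps to expand the product in the smallest nontrivial case n = 2. Since the \lambda_i lie on the unit circle, \bar{\lambda} = \lambda^{-1}, so

\displaystyle |\lambda_1 - \lambda_2|^2 = (\lambda_1 - \lambda_2)(\lambda_1^{-1} - \lambda_2^{-1}) = 2 - \lambda_1 \lambda_2^{-1} - \lambda_1^{-1} \lambda_2.

The result is a finite sum of integer powers of the variables, with constant term 2 = 2!; both features generalize, as we will see below.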

Let’s call the joint density \frac{1}{n!} \Delta. As the n = 2 computation above suggests, \Delta is a Laurent polynomial in the n variables \lambda_1, \ldots, \lambda_n, meaning a finite sum of integer (positive or negative) powers of these variables; this is because \bar{\lambda_i} = \lambda_i^{-1} on the unit circle. Next we want to investigate the joint density of the eigenvalues of U^k. Consider the test function interpretation of the joint density: if we have a symmetric function p: \mathbb{T}^n \to \mathbb{R} (symmetric so that it is a test function of a random set of n points), then

\displaystyle \int_{\mathbb{T}^n} p(\lambda_1^k,\ldots,\lambda_n^k)\frac{1}{n!} \Delta d\lambda_1\ldots d\lambda_n

is precisely the expectation of p(\mu_1, \ldots, \mu_n), where \mu_i is the ith eigenvalue of U^k under some ordering; indeed \mu_i = \lambda_i^k is one such choice of ordering.
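For example, taking p(\mu_1, \ldots, \mu_n) = \sum_i \mathbf{1}_A(\mu_i), the number of points falling in a fixed arc A \subset \mathbb{T}^1 (clearly a symmetric function), the integral above computes the expected number of eigenvalues of U^k in A, exactly the kind of statistic mentioned in the introduction.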

Now observe that p(\lambda_1,\ldots,\lambda_n) \Delta is a sum of monomials of the form \lambda_1^{\alpha_1} \ldots \lambda_n^{\alpha_n}, with \alpha_i \in \mathbb{Z}. Furthermore, if one of the \alpha_i‘s is nonzero, then the integral of that monomial over \mathbb{T}^n is zero by elementary calculus: \int_0^{2\pi} e^{i\alpha \theta} d\theta = 0 for any nonzero integer \alpha. Thus the only nonzero contribution to \int_{\mathbb{T}^n} p \Delta comes from the constant term of the Laurent polynomial. Now since all the monomials in p(\lambda_1^k,\ldots,\lambda_n^k) have exponents that are multiples of k, they can be canceled down to the 0th power only by monomials of \Delta whose exponents are also multiples of k.

Finally, an elementary calculation shows that each \lambda_j appears in \Delta with exponents of absolute value at most n-1: each of the n-1 factors |\lambda_j - \lambda_i|^2, i \neq j, contributes an exponent of at most 1 in absolute value to \lambda_j. Thus when k \ge n, the only monomial of \Delta whose exponents are all multiples of k is the constant term, meaning

\displaystyle \int_{\mathbb{T}^n} p(\lambda_1^k,\ldots,\lambda_n^k) \Delta d\lambda = C\int_{[0,2\pi)^n} \tilde{p}(k\theta_1,\ldots,k\theta_n) d\theta_1\ldots d\theta_n

where I picked the global coordinates [0,2\pi)^n for \mathbb{T}^n, and \tilde{p}(\theta_1,\ldots,\theta_n) = p(e^{i\theta_1},\ldots, e^{i \theta_n}). But \tilde{p} is 2\pi-periodic in each variable, so \int_0^{2\pi} \tilde{p}(\ldots, k\theta_j, \ldots) d\theta_j = \int_0^{2\pi} \tilde{p}(\ldots, \theta_j, \ldots) d\theta_j coordinate by coordinate; in other words, after the change of variables (\theta_1, \ldots, \theta_n) \mapsto (k \theta_1,\ldots, k \theta_n), the right-hand side above is exactly the expectation of p evaluated at n i.i.d. uniform random variables valued in \mathbb{T}.
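Both combinatorial facts about \Delta used above are easy to verify symbolically for small n: the constant term of \Delta equals n! (consistent with \frac{1}{n!}\Delta being a probability density), and every exponent lies in [-(n-1), n-1]. A small sympy sketch of my own (not from the paper), for n = 3:

import sympy as sp
from math import factorial

n = 3
lam = sp.symbols(f'l1:{n + 1}')  # l1, l2, l3, viewed as unit-circle variables
# On the unit circle conj(l) = 1/l, so |li - lj|^2 = (li - lj)(1/li - 1/lj).
delta = sp.expand(sp.prod([(lam[i] - lam[j]) * (1/lam[i] - 1/lam[j])
                           for i in range(n) for j in range(i + 1, n)]))
# The constant term of Delta equals n!.
print(delta.as_coefficients_dict()[sp.S.One] == factorial(n))  # True
# Clearing denominators by (l1 l2 l3)^(n-1) leaves an honest polynomial of
# degree 2(n-1) in each variable, i.e. the exponents of Delta lie in [-(n-1), n-1].
shifted = sp.expand(delta * sp.prod(lam) ** (n - 1))
print(all(sp.degree(shifted, v) == 2 * (n - 1) for v in lam))  # True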

So what I learned from this computation is that the joint density formula \Delta is not as cumbersome and inaccessible as it first appears: it is nothing more than a Laurent polynomial, and integrating against it just picks out constant terms.
