Some facts about matrices

For a LaTeX-rendered version, please visit
https://aquazorcarson.wordpress.com/2010/09/17/some-facts-about-matrices-%E4%B8%80%E4%BA%9B%E5%85%B3%E4%BA%8E%E7%9F%A9%E9%98%B5%E7%9A%84%E7%9F%A5%E8%AF%86/

People who have gone through basic linear algebra might think they know everything there is to know about determinants, rank, and so on. But consider the following three propositions:

1. Any rank 1 matrix can be represented as $u v^\top$, where $u, v$ are column vectors.

2. $u v^\top$ has eigenvalue $v^\top u$, with right eigenvector $u$ and left eigenvector $v^\top$. All the other eigenvalues are $0$. In particular, when $v^\top u \neq 0$ it is diagonalizable, but the eigenvectors are not necessarily orthogonal.

3. If $M$ is a rank 1 square matrix, then $\det(I + M) = 1 + \operatorname{tr} M$. This combined with 1 gives $\det(I + u v^\top) = 1 + v^\top u$.

These were used without proof in proposition 1.8.1 of Peter Forrester’s book on random matrices and log gases. I found the proofs in Silvia Osnaga’s paper on rank 1 complex matrices; it looks like an expository paper to me.
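
Before the proofs, here is a quick numerical sanity check of the three propositions. This is a minimal numpy sketch of my own (the variable names are mine, not from the post or the cited references): it draws random $u, v$ and checks the rank, the nonzero eigenvalue, and the determinant formula.

import numpy as np

rng = np.random.default_rng(0)
n = 5
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))
M = u @ v.T                                    # a rank 1 matrix by construction
vu = (v.T @ u).item()                          # the scalar v^T u

print(np.linalg.matrix_rank(M))                # 1 (proposition 1)
print(np.sort(np.abs(np.linalg.eigvals(M))))   # all ~0 except one of size |v^T u| (proposition 2)
print(vu)
print(np.linalg.det(np.eye(n) + M), 1 + np.trace(M))   # equal up to rounding (proposition 3)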

Proof of 1. Rank 1 means there is a nonzero column such that every other column is a multiple of it. Take $u$ to be that reference column and let the $j$-th entry of $v$ be the multiplier for the $j$-th column; then the $j$-th column of $u v^\top$ is $v_j u$, which reproduces the matrix.
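
The proof is constructive, and the corresponding recovery of $u$ and $v$ from a rank 1 matrix is easy to sketch in code. This assumes exact arithmetic on an exactly rank 1 matrix; with floating-point noise one would use an SVD instead. The function name is mine.

import numpy as np

def rank_one_factors(M):
    # Recover u, v with M == u v^T from an exactly rank 1 matrix (a sketch; not robust to noise).
    M = np.asarray(M, dtype=float)
    j0 = next(j for j in range(M.shape[1]) if np.any(M[:, j]))   # a nonzero reference column
    u = M[:, j0].copy()
    i0 = int(np.argmax(np.abs(u)))                               # a row where u is nonzero
    v = M[i0, :] / u[i0]                                         # multipliers read off from that row
    return u, v

M = np.outer([1.0, 2.0, -1.0], [3.0, 0.0, 5.0, 1.0])             # a 3x4 rank 1 example
u, v = rank_one_factors(M)
print(np.allclose(np.outer(u, v), M))                            # True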

Proof of 2. $u v^\top$ has entries $u_i v_j$. Direct computation gives $(u v^\top u)_i = \sum_j u_i v_j u_j = (v^\top u) u_i$, so $u$ is a right eigenvector with eigenvalue $v^\top u$; similarly $v^\top (u v^\top) = (v^\top u) v^\top$, so $v^\top$ is a left eigenvector. If $w$ is orthogonal to $v$, then $u v^\top w = 0$. Now there are two cases. If $v^\top u = 0$, then $(u v^\top)^2 = u (v^\top u) v^\top = 0$, so the matrix is nilpotent, all its eigenvalues are $0$, and it is diagonalizable only in the trivial case where it is the zero matrix. If $v^\top u \neq 0$, then $u$ does not lie in the orthogonal complement of $v$, so $u$ together with a basis of that orthogonal complement spans the ambient vector space; these vectors form an eigenbasis, showing $u v^\top$ is diagonalizable: with $A$ the matrix whose columns are these vectors, $A^{-1} u v^\top A = \mathrm{diag}(v^\top u, 0, \ldots, 0)$.
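
To see the diagonalization concretely, one can build $A$ from $u$ together with a basis of the orthogonal complement of $v$. A minimal numpy/scipy sketch of my own, using scipy's null_space to get the complement:

import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
n = 4
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))
M = u @ v.T
vu = (v.T @ u).item()                # assumed nonzero (true for generic random u, v)

W = null_space(v.T)                  # orthonormal basis of the orthogonal complement of v, shape (n, n-1)
A = np.hstack([u, W])                # eigenbasis: u plus the complement of v
D = np.linalg.solve(A, M @ A)        # A^{-1} M A
print(np.round(D, 10))               # diag(v^T u, 0, ..., 0)
print(vu)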

Proof of 3. Write $M = u v^\top$ by 1. If $v^\top u \neq 0$, conjugate by the $A$ from the proof of 2: since the determinant is invariant under conjugation, $\det(I + M) = \det(I + \mathrm{diag}(v^\top u, 0, \ldots, 0)) = 1 + v^\top u = 1 + \operatorname{tr} M$, the last equality because the trace is also invariant under conjugation. If $v^\top u = 0$, then $M$ is nilpotent, so $I + M$ has all eigenvalues equal to $1$ and $\det(I + M) = 1 = 1 + \operatorname{tr} M$, since $\operatorname{tr} M = v^\top u = 0$.
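
A quick check of the determinant identity in the degenerate case $v^\top u = 0$, where the diagonalization argument does not apply (again a sketch with my own variable names):

import numpy as np

rng = np.random.default_rng(2)
n = 4
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))
v -= u * (u.T @ v).item() / (u.T @ u).item()   # make v orthogonal to u, so v^T u = 0 and u v^T is nilpotent
M = u @ v.T

print((v.T @ u).item())                        # ~0
print(np.linalg.det(np.eye(n) + M))            # ~1, matching 1 + tr M = 1
print(np.linalg.eigvals(np.eye(n) + M))        # all eigenvalues (close to) 1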

Questions for the reader:
1. Are there any generalizations of the above to rank 2?

2. Where can I find a good proof of the fact that $\det(\exp M) = \exp(\operatorname{tr} M)$, where $\exp$ is the matrix exponential?
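
Not an answer to question 2, but the identity itself is easy to check numerically; a sketch using scipy's matrix exponential expm:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5))
print(np.linalg.det(expm(M)))        # matches exp(tr M) up to rounding
print(np.exp(np.trace(M)))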
