People who went through basic linear algebra might think they know everything there is to know about determinants, rank, etc. But consider the following three propositions:
1. Any rank 1 matrix $A$ can be represented as $A = uv^T$, where $u, v$ are column vectors.
2. $uv^T$ has eigenvalue $v^T u$ with right eigenvector $u$ and left eigenvector $v$. All the other eigenvalues are $0$. In particular, when $v^T u \neq 0$ it is diagonalizable, but the eigenvectors are not necessarily orthogonal.
3. If $A$ is a rank 1 square matrix, then $\det(I + A) = 1 + \operatorname{tr}(A)$. This combined with 1 gives $\det(I + uv^T) = 1 + v^T u$.
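Before the proofs, here is a quick numerical sanity check of propositions 1 and 3 (my addition, not from either reference), using numpy with random $u$ and $v$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
u = rng.standard_normal(n)
v = rng.standard_normal(n)
A = np.outer(u, v)  # the rank-1 matrix u v^T

# an outer product of nonzero vectors has rank 1
assert np.linalg.matrix_rank(A) == 1
# proposition 3: det(I + u v^T) = 1 + v^T u
assert np.isclose(np.linalg.det(np.eye(n) + A), 1 + v @ u)
```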
These were used without proof in proposition 1.8.1 of Peter Forrester’s book on random matrices and log gases. I found the proofs in Silvia Osnaga’s paper on rank 1 complex matrices; it looks like an expository paper to me.
Proof of 1. Rank 1 means there is one nonzero column such that all the other columns are multiples of it. Take $u$ to be that reference column and let the entries of $v$ be the multipliers, so that the $j$-th column of $A$ is $v_j u$; this is exactly $A = uv^T$.
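The construction in this proof can be sketched as code. The helper `rank1_factor` below is my own naming, and it assumes its input really is rank 1:

```python
import numpy as np

def rank1_factor(A):
    """Split a rank-1 matrix A into column vectors u, v with A = u v^T."""
    # the reference column: pick the one with the largest norm (nonzero for rank 1)
    j = np.argmax(np.linalg.norm(A, axis=0))
    u = A[:, j]
    # the entries of v are the multipliers of each column relative to u
    v = A.T @ u / (u @ u)
    return u, v

A = np.outer(np.array([1.0, 2.0]), np.array([3.0, -1.0, 2.0]))  # a 2x3 rank-1 matrix
u, v = rank1_factor(A)
assert np.allclose(np.outer(u, v), A)
```

Note the factorization is not unique: $u$ and $v$ are only determined up to a scalar, since $(cu)(v/c)^T = uv^T$.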
Proof of 2. $uv^T$ has entries $(uv^T)_{ij} = u_i v_j$. So direct computation gives $(uv^T)u = (v^T u)u$ and $v^T(uv^T) = (v^T u)v^T$. If $w$ is orthogonal to $v$, then $(uv^T)w = 0$. Now we have two cases. Either $u$ is orthogonal to $v$, in which case the only eigenvalue is $0$ and $uv^T$ is nilpotent, since $(uv^T)^2 = (v^T u)\,uv^T = 0$; then it is not diagonalizable unless it is the $0$ matrix. Or they are not orthogonal, in which case $u$ and the orthogonal complement of $v$ together span the ambient vector space, hence they form an eigenbasis, showing $uv^T$ is diagonalizable, i.e., $uv^T = PDP^{-1}$ with $D = \operatorname{diag}(v^T u, 0, \dots, 0)$.
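A numpy check of the eigenvalue claims, including the nilpotent case when $v^T u = 0$ (the example matrices here are my own, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
u = rng.standard_normal(n)
v = rng.standard_normal(n)
A = np.outer(u, v)
lam = v @ u                            # the nonzero eigenvalue v^T u

assert np.allclose(A @ u, lam * u)     # right eigenvector: (u v^T) u = (v^T u) u
assert np.allclose(v @ A, lam * v)     # left eigenvector:  v^T (u v^T) = (v^T u) v^T
w = rng.standard_normal(n)
w -= (w @ v) / (v @ v) * v             # project w onto the orthogonal complement of v
assert np.allclose(A @ w, 0)           # everything orthogonal to v is in the kernel

# degenerate case: v^T u = 0 gives a nilpotent matrix, e.g. [[0, 1], [0, 0]]
N = np.outer(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
assert np.allclose(N @ N, 0)
```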
Proof of 3. Diagonalize $A = PDP^{-1}$ with $D = \operatorname{diag}(v^T u, 0, \dots, 0)$ from the proof of 2. Since the determinant is invariant under conjugation, we get $\det(I + A) = \det(P(I + D)P^{-1}) = \det(I + D) = 1 + v^T u = 1 + \operatorname{tr}(A)$, since the trace is also invariant under conjugation.
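The diagonalization in this proof can be carried out numerically as well; here I use `scipy.linalg.null_space` to get a basis of the orthogonal complement of $v$, assuming we are in the non-orthogonal case $v^T u \neq 0$:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
n = 4
u = rng.standard_normal(n)
v = rng.standard_normal(n)
lam = v @ u
assert abs(lam) > 1e-12            # we are in the non-orthogonal case

A = np.outer(u, v)
W = null_space(v.reshape(1, -1))   # n x (n-1) orthonormal basis of v-perp
P = np.column_stack([u, W])        # the eigenbasis from the proof of 2
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag([lam] + [0] * (n - 1)), atol=1e-8)
# det(I + A) = det(I + D) = 1 + v^T u = 1 + tr(A)
assert np.isclose(np.linalg.det(np.eye(n) + A), 1 + np.trace(A))
```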
Questions for the reader:
1. Are there any generalizations of the above to rank 2?
2. Where can I find a good proof of the fact $\det(e^A) = e^{\operatorname{tr}(A)}$, where $e^A$ is the matrix exponential?
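For what it's worth, that identity is easy to check numerically with `scipy.linalg.expm`; this is only a sanity check on a random matrix, of course, not a proof:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
assert np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A)))
```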