Part of the linear algebra notes

Matrix inverse

For some real numbers a, there is a unique real number b such that ab = 1. One example is a = 5, b = 5^{-1} = 1/5. The product 1 is interesting because it is the multiplicative identity for real numbers (1a = a). Not every real number has an inverse (namely, 0).

Matrices are similar: some matrices A have an inverse matrix A^{-1} such that AA^{-1} = A^{-1}A = the identity matrix – the multiplicative identity for matrices. The notation A^{-1} is used because it’s reminiscent of raising real numbers to the power -1.
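A quick numerical check of the defining property, as a sketch using NumPy (the 2x2 matrix here is made up for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
A_inv = np.linalg.inv(A)

# Both products should give the 2x2 identity matrix.
print(np.allclose(A @ A_inv, np.eye(2)))  # True
print(np.allclose(A_inv @ A, np.eye(2)))  # True
```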

Noninvertible matrices

Nonsquare matrices are clearly noninvertible: if A is m×n with m ≠ n, then any X for which AX and XA are both defined must be n×m, so AX is m×m while XA is n×n. They can’t both equal the same identity matrix.

Singular matrices (where a row or column can be reduced to all zeroes) are noninvertible.
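You can see this in NumPy, which refuses to invert a singular matrix (the example matrix is my own; its second row is twice the first, so it row-reduces to a zero row):

```python
import numpy as np

# Second row is 2x the first, so the matrix is singular.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])

try:
    np.linalg.inv(S)
except np.linalg.LinAlgError as e:
    print("not invertible:", e)
```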

Properties of the inverse

Solving linear systems with the inverse

Linear systems look like Ax = b where A is a matrix, x is an unknown vector, and b is a known vector.

Multiply both sides on the left by A^{-1}. Then you have A^{-1}Ax = A^{-1}b. The left side collapses to the identity matrix (by definition) times x, which equals x. Then you just need to find A^{-1}b, which is a straightforward matrix–vector product.
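A small sketch of this method in NumPy (the matrix and vector are made up). `np.linalg.solve`, which does Gaussian elimination internally, gives the same answer and is what you'd normally use in code:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
b = np.array([3.0, 8.0])

x = np.linalg.inv(A) @ b  # x = A^{-1} b
print(x)

# The answer agrees with solving the system directly.
print(np.allclose(x, np.linalg.solve(A, b)))  # True
```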

Inverting a matrix is a lot of work and not every matrix is invertible. So this method is best when you have many systems to solve that all share the same A: invert once, and each solution is just a matrix–vector product.

If you are working by hand and only have one matrix equation to solve, it’s usually easier to augment the matrix and do Gaussian elimination.

Finding the inverse

Write the unknown inverse as B, so that AB = I. Column by column, that equation says A times the j-th column of B equals e_j, the j-th column of the identity. One way to find the inverse is to solve each of those systems, revealing one column of B at a time.

Instead of setting up lots of little equations, you can solve them all at once. Form this augmented matrix

\begin{bmatrix} a&b&c&1&0&0\\ d&e&f&0&1&0\\ g&h&i&0&0&1 \end{bmatrix}

and row-reduce the whole thing. If the matrix is invertible, when row-reduced the left side looks like the identity matrix and the right side contains the inverse matrix:

\begin{bmatrix} 1&0&0&a'&b'&c'\\ 0&1&0&d'&e'&f'\\ 0&0&1&g'&h'&i' \end{bmatrix}

Basically you’re solving “A adjoined with e_1”, “A adjoined with e_2”, and “A adjoined with e_3” at the same time, because the solutions don’t interfere with each other.
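The row-reduction procedure above can be sketched as code. This is my own minimal Gauss–Jordan implementation (the function name `invert_gauss_jordan` is made up), with partial pivoting added so the arithmetic stays stable:

```python
import numpy as np

def invert_gauss_jordan(A):
    """Row-reduce [A | I]; if the left half becomes I, the right half is A^{-1}."""
    n = len(A)
    aug = np.hstack([np.array(A, dtype=float), np.eye(n)])
    for col in range(n):
        # Partial pivoting: swap in the row with the largest entry in this column.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]          # scale pivot row so the pivot is 1
        for row in range(n):               # clear the rest of the column
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]

A = [[2.0, 1.0], [5.0, 3.0]]
print(invert_gauss_jordan(A))  # matches np.linalg.inv(A)
```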

Inverse of a 2x2 matrix

To invert

\begin{bmatrix}a&b\\c&d\end{bmatrix}

simply calculate

\frac{1}{ad-bc}\begin{bmatrix}d&-b\\-c&a\end{bmatrix}

Note that ad - bc is the determinant of the matrix. That’s why the determinant being 0 implies a noninvertible matrix (you can’t divide by the determinant).
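The 2x2 formula translates directly into code. A sketch (the helper name `inv2x2` is my own):

```python
import numpy as np

def inv2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the determinant formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("determinant is 0, so the matrix is not invertible")
    # Swap a and d, negate b and c, divide everything by the determinant.
    return np.array([[d, -b], [-c, a]]) / det

print(inv2x2(2.0, 1.0, 5.0, 3.0))  # ad - bc = 1, so this is [[3, -1], [-5, 2]]
```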

“Ill conditioned”

A matrix A is “ill-conditioned” if small changes to b in Ax = b can result in large changes to x. This text doesn’t define “small” and “large”; ill-conditionedness is a domain-specific classification, something that’s useful to know if you’re solving linear systems for some real-world application. The term comes from numerical analysis.

You can spot ill-conditioned matrices because the inverse has big numbers when the regular matrix has small numbers. The book mentions the “Hilbert matrix”, which is composed entirely of small unit fractions (half, third, fourth, etc.) but whose inverse contains numbers as large as 4 million in the 6x6 case. (And, oddly enough, they are all integers.)
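A sketch demonstrating this with NumPy, building the Hilbert matrix by hand (entry (i, j) is 1/(i + j + 1) with 0-based indices). `np.linalg.cond` gives the condition number, the numerical-analysis measure of ill-conditioning:

```python
import numpy as np

def hilbert(n):
    """n x n Hilbert matrix of unit fractions: entry (i, j) is 1 / (i + j + 1)."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

H = hilbert(6)
print(H.max())                          # every entry is at most 1
print(np.abs(np.linalg.inv(H)).max())   # largest inverse entry is in the millions
print(np.linalg.cond(H))                # huge condition number: ill-conditioned
```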