Matrix inverse
For some real numbers $a$, there is a unique real number $a^{-1}$ such that $a \cdot a^{-1} = 1$. One example is $2 \cdot \frac{1}{2} = 1$. The product 1 is interesting because it is the multiplicative identity for real numbers ($1 \cdot x = x$ for every $x$). Not every real number has an inverse (namely, 0).
Matrices are similar: some matrices $A$ have an inverse matrix $A^{-1}$ such that $A A^{-1} = I$, where $I$ is the identity matrix, the multiplicative identity for matrices ($IM = MI = M$). The notation $A^{-1}$ is used because it’s reminiscent of raising real numbers to the power $-1$.
- Unlike real numbers, there are many matrices that lack a multiplicative inverse (not just the zero matrix).
- We call matrices that do have an inverse “invertible”.
- There are two ways to multiply two matrices ($AB$ is not necessarily the same as $BA$), but the multiplication must result in the identity matrix in either direction: $A A^{-1} = A^{-1} A = I$.
- This implies all invertible matrices are square.
- The inverse is unique.
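As a quick sanity check on the points above, here is a minimal sketch in plain Python (no libraries; $A$ and its inverse below are just a hand-picked example pair) verifying that the product is the identity in both directions:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# A hand-picked invertible matrix and its inverse.
A     = [[2, 1],
         [1, 1]]
A_inv = [[ 1, -1],
         [-1,  2]]

I = [[1, 0],
     [0, 1]]
assert matmul(A, A_inv) == I  # identity one way...
assert matmul(A_inv, A) == I  # ...and the other
```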
Noninvertible matrices
Nonsquare matrices are clearly noninvertible: if $A$ is $m \times n$, a matrix $B$ with both $AB$ and $BA$ defined must be $n \times m$, but then $AB$ is $m \times m$ while $BA$ is $n \times n$, so they can only both equal the same identity matrix when $m = n$.
Singular matrices (where one row or column can be reduced to all zeroes) are noninvertible.
- If you can spot that one row is clearly a multiple of another, then the matrix is noninvertible. Not all singular matrices are like that (especially large ones), but it’s a good first check.
- If the determinant is 0, then the matrix is noninvertible.
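Both checks line up on one toy example: a minimal sketch where the second row is a multiple of the first, so the $2 \times 2$ determinant $ad - bc$ comes out to 0:

```python
# Second row is 2x the first row, so the matrix [[a, b], [c, d]] is singular.
a, b = 1, 2
c, d = 2, 4  # (c, d) = 2 * (a, b)

det = a * d - b * c
assert det == 0  # determinant 0 => noninvertible
```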
Properties of the inverse
- If one exists, the inverse is unique (so it makes sense to talk about “the” inverse)
- If $A$ and $B$ are invertible then so is $AB$; its inverse is $(AB)^{-1} = B^{-1} A^{-1}$ (note the reversed order)
- The inverse of the inverse is the original: $(A^{-1})^{-1} = A$
- The transpose of the inverse is the inverse of the transpose: $(A^{-1})^T = (A^T)^{-1}$
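The reversed-order product rule can be spot-checked numerically. A minimal sketch with hand-picked $2 \times 2$ matrices and their precomputed inverses, verifying that $B^{-1}A^{-1}$ really undoes $AB$:

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Hand-picked matrices with known inverses.
A, A_inv = [[2, 1], [1, 1]], [[1, -1], [-1, 2]]
B, B_inv = [[1, 3], [0, 1]], [[1, -3], [0, 1]]

I = [[1, 0], [0, 1]]
# B^-1 A^-1 is the inverse of AB: their product is the identity.
assert matmul(matmul(B_inv, A_inv), matmul(A, B)) == I
```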
Solving linear systems with the inverse
Linear systems look like $A\mathbf{x} = \mathbf{b}$, where $A$ is a matrix, $\mathbf{x}$ is an unknown vector, and $\mathbf{b}$ is a known vector.
Multiply both sides on the left by $A^{-1}$. Then you have $A^{-1} A \mathbf{x} = A^{-1} \mathbf{b}$. The left side collapses to the identity matrix (by definition) times $\mathbf{x}$, which equals $\mathbf{x}$. Then you just need to find $A^{-1} \mathbf{b}$, which is a straightforward matrix-vector product.
Inverting a matrix is a lot of work and not every matrix is invertible. So this method is best when:
- you have a lot of equations $A\mathbf{x} = \mathbf{b}_1$, $A\mathbf{x} = \mathbf{b}_2$, $\ldots$ to solve. Finding $A^{-1}$ will help you stamp out lots of solutions.
- you are using a computer.
If you are working by hand and only have one matrix equation to solve, it’s usually easier to augment the matrix and do Gaussian elimination.
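A minimal sketch of the “compute $A^{-1}$ once, reuse it for many right-hand sides” workflow, in plain Python with a hand-computed inverse:

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector (list)."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

A     = [[2, 1],
         [1, 1]]
A_inv = [[ 1, -1],   # hand-computed inverse of A
         [-1,  2]]

# One inverse, many right-hand sides: x = A^-1 b for each b.
for b in ([3, 2], [0, 1], [5, 3]):
    x = matvec(A_inv, b)
    assert matvec(A, x) == b  # check that A x = b really holds
```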
Finding the inverse
- If $AB = I$ then $A$ times the first column of $B$ equals the first column of $I$, which is also written as $A\mathbf{b}_1 = \mathbf{e}_1$
- If $AB = I$ then $A$ times the second column of $B$ equals the second column of $I$, which is also written as $A\mathbf{b}_2 = \mathbf{e}_2$
- And so on
One way to find the inverse is to solve all of those equations to reveal each column of $B$. (Or, symmetrically, reveal the columns of $A$.)
Instead of setting up lots of little equations, you can solve them all at once. Make this augmented matrix:
$[A \mid I]$
and row-reduce the whole thing. If the matrix is invertible, when row-reduced the left side looks like the identity matrix and the right side contains the inverse matrix: $[I \mid A^{-1}]$
Basically you’re solving “$A$ adjoined with $\mathbf{e}_1$”, “$A$ adjoined with $\mathbf{e}_2$”, and “$A$ adjoined with $\mathbf{e}_3$” at the same time, because the solutions don’t interfere with each other.
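The whole $[A \mid I] \to [I \mid A^{-1}]$ procedure can be sketched in plain Python using the standard-library `fractions` module for exact arithmetic (a straightforward Gauss-Jordan elimination, not tuned for numerical work):

```python
from fractions import Fraction

def invert(A):
    """Invert a square matrix by row-reducing [A | I] to [I | A^-1]."""
    n = len(A)
    # Build the augmented matrix [A | I] with exact fractions.
    M = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(1 if i == j else 0) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Find a row with a nonzero entry in this column and swap it up.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Clear this column in every other row.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The right half of the augmented matrix is now A^-1.
    return [row[n:] for row in M]

assert invert([[2, 1], [1, 1]]) == [[1, -1], [-1, 2]]
```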
Inverse of a 2x2 matrix
To invert
$\begin{pmatrix} a & b \\ c & d \end{pmatrix}$
simply calculate
$\frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$
Note that $ad - bc$ is the determinant of the matrix. That’s why the determinant being 0 implies a noninvertible matrix (you can’t divide by the determinant)
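A direct translation of the formula into Python, using exact fractions so the division by the determinant stays exact; the singularity check mirrors the “can’t divide by the determinant” remark above:

```python
from fractions import Fraction

def inv2(M):
    """Inverse of a 2x2 matrix [[a, b], [c, d]] via the ad - bc formula."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    if det == 0:
        raise ValueError("determinant is 0: matrix is noninvertible")
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

assert inv2([[2, 1], [1, 1]]) == [[1, -1], [-1, 2]]
```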
“Ill conditioned”
A matrix $A$ is “ill-conditioned” if small changes to $A$ or $\mathbf{b}$ in $A\mathbf{x} = \mathbf{b}$ can result in large changes to $\mathbf{x}$. This text doesn’t define “small” and “large”; ill-conditionedness is a domain-specific classification, something that’s useful to know if you’re solving linear systems for some real-world application. The term comes from numerical analysis.
You can spot ill-conditioned matrices because the inverse has big numbers when the regular matrix has small numbers. The book mentions the “Hilbert matrix”, which is composed entirely of small unit fractions (a half, a third, a fourth, etc.), but its inverse contains numbers as large as 4 million in the 6×6 case. (And, oddly enough, they are all integers.)
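The small Hilbert matrix can be checked exactly with the standard-library `fractions` module. The specific integer inverse below is a standard worked example rather than something from the text; already at $3 \times 3$ the entries reach 192:

```python
from fractions import Fraction

# The 3x3 Hilbert matrix: entry (i, j) is 1 / (i + j + 1) with 0-based
# indices, i.e. small unit fractions throughout.
H = [[Fraction(1, i + j + 1) for j in range(3)] for i in range(3)]

# Its inverse is all integers, with surprisingly large entries.
H_inv = [[  9,  -36,   30],
         [-36,  192, -180],
         [ 30, -180,  180]]

def matmul(A, B):
    """Multiply two 3x3 matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(H, H_inv) == I  # H times H^-1 is exactly the identity
```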