Part of the linear algebra notes
For some real numbers $x$, there is a unique real number $x^{-1}$ such that $x \cdot x^{-1} = 1$. One example is $5 \cdot \frac{1}{5} = 1$. The product 1 is interesting because it is the multiplicative identity for real numbers ($1 \cdot x = x$). Not every real number has an inverse (namely, 0 doesn’t).
Matrices are similar: some matrices $A$ have an inverse matrix $A^{-1}$ such that $A A^{-1} = I$, the identity matrix – the multiplicative identity for matrices ($AI = IA = A$). The notation $A^{-1}$ is used because it’s reminiscent of raising real numbers to the power $-1$.
Nonsquare matrices are clearly noninvertible (if $A$ is $m \times n$ with $m \ne n$, there is no one matrix $B$ where $AB$ and $BA$ can both equal the same identity matrix: $AB$ would be $m \times m$ while $BA$ would be $n \times n$).
Singular matrices (where some row or column can be reduced to all zeroes) are noninvertible.
Linear systems look like $A\mathbf{x} = \mathbf{b}$, where $A$ is a matrix, $\mathbf{x}$ is an unknown vector, and $\mathbf{b}$ is a known vector.
Multiply both sides on the left by $A^{-1}$. Then you have $A^{-1}A\mathbf{x} = A^{-1}\mathbf{b}$. The left side collapses to the identity matrix (by definition) times $\mathbf{x}$, which equals $\mathbf{x}$. Then you just need to find $A^{-1}\mathbf{b}$, which is a straightforward matrix-vector product.
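A quick sketch of that idea in numpy (the matrix and vector here are made-up example values, and `np.linalg.inv` stands in for however you obtain the inverse):

```python
import numpy as np

# A small example system A x = b (values chosen only for illustration).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

A_inv = np.linalg.inv(A)      # raises LinAlgError if A is singular
x = A_inv @ b                 # x = A^{-1} b, a matrix-vector product

print(x)                      # [1. 3.]
print(np.allclose(A @ x, b))  # True: A x recovers b
```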
Inverting a matrix is a lot of work, and not every matrix is invertible. So this method is best when you have many systems to solve that share the same $A$ but have different $\mathbf{b}$ vectors: invert once, then each solution is just a product (see the sketch after the next paragraph).
If you are working by hand and only have one matrix equation to solve, it’s usually easier to augment the matrix and do Gaussian elimination.
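Here’s a sketch of the many-right-hand-sides case (again with made-up values): the expensive inversion happens once, and each subsequent solve is cheap.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Invert once, then reuse the result for every right-hand side.
A_inv = np.linalg.inv(A)

for b in (np.array([5.0, 10.0]),
          np.array([1.0, 0.0]),
          np.array([0.0, 1.0])):
    x = A_inv @ b  # each solve is now just a matrix-vector product
    print(x)
```

In practice numerical libraries prefer reusing a factorization (or calling `np.linalg.solve`) over explicitly forming the inverse, but the reuse pattern is the same.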
Write $B = A^{-1}$, so $AB = I$. Then $A$ times the $j$-th column of $B$ must equal $\mathbf{e}_j$, the $j$-th column of the identity. One way to find the inverse is to solve all of those equations $A\mathbf{b}_j = \mathbf{e}_j$ to reveal each column of $B$. (Or, symmetrically, use $BA = I$ to reveal the columns of $A$.)
Instead of setting up lots of little equations, you can solve them all at once. Make the augmented matrix

$$\left[\, A \mid I \,\right]$$

and row-reduce the whole thing. If the matrix is invertible, when row-reduced the left side looks like the identity matrix and the right side contains the inverse matrix:

$$\left[\, A \mid I \,\right] \;\to\; \left[\, I \mid A^{-1} \,\right].$$
Basically you’re solving “$A$ adjoined with $\mathbf{e}_1$”, “$A$ adjoined with $\mathbf{e}_2$”, and “$A$ adjoined with $\mathbf{e}_3$” at the same time, because the solutions don’t interfere with each other.
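A small sketch of that row-reduction procedure (Gauss–Jordan with partial pivoting; the function name and example matrix are mine, not from the text):

```python
import numpy as np

def invert_via_row_reduction(A):
    """Row-reduce [A | I] to [I | A^{-1}] (Gauss-Jordan with partial pivoting)."""
    n = A.shape[0]
    aug = np.hstack([A.astype(float), np.eye(n)])  # the augmented matrix [A | I]
    for col in range(n):
        # Pivot: swap up the row with the largest entry in this column.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]                  # scale so the pivot is 1
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]  # clear the rest of the column
    return aug[:, n:]                              # right half now holds the inverse

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(invert_via_row_reduction(A))
print(np.linalg.inv(A))  # should match
```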
To invert a $2 \times 2$ matrix

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$

simply calculate

$$A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$

Note that $ad - bc$ is the determinant of the matrix. That’s why the determinant being 0 implies a noninvertible matrix (you can’t divide by the determinant).
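That formula translates directly into code; a minimal sketch (the function name is made up):

```python
def invert_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the closed-form 2x2 formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("determinant is 0, so the matrix is not invertible")
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

print(invert_2x2(2, 1, 1, 3))  # [[0.6, -0.2], [-0.2, 0.4]]
```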
A matrix $A$ is “ill-conditioned” if small changes to $\mathbf{b}$ in $A\mathbf{x} = \mathbf{b}$ can result in large changes to $\mathbf{x}$. This text doesn’t define “small” and “large”; ill-conditionedness is a domain-specific classification, something that’s useful to know if you’re solving linear systems for some real-world application. The term comes from numerical analysis.
You can spot ill-conditioned matrices because the inverse has big numbers when the original matrix has small ones. The book mentions the “Hilbert matrix”, which is composed entirely of small unit fractions (a half, a third, a fourth, etc.), but its inverse contains numbers as large as 4 million in the $6 \times 6$ case. (And, oddly enough, the entries of the inverse are all integers.)
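A small numpy sketch of that observation, building the Hilbert matrix from its definition $H_{ij} = 1/(i+j-1)$ (with 1-based indices):

```python
import numpy as np

n = 6
# The n x n Hilbert matrix: entry (i, j) is 1 / (i + j + 1) with 0-based indices.
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

H_inv = np.linalg.inv(H)
print(H.max())              # 1.0 -- every entry is a small unit fraction
print(np.abs(H_inv).max())  # millions -- huge entries in the inverse
print(np.linalg.cond(H))    # enormous condition number: ill-conditioned
```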