Part of the linear algebra notes
A vector space is any collection of mathematical objects you can “do linear algebra” with. Specifically, think about the operations involved in Gaussian elimination: scaling things and adding things together. Any collection of objects that supports those two operations is a vector space.
Here are some examples of vector spaces:

- Column vectors in $\mathbb{R}^n$
- Matrices
- Polynomials
- Real-valued functions
When we talk about matrices as a vector space, we only worry about matrix-matrix addition and matrix-scalar scaling. We don’t concern ourselves with matrix multiplication. Matrix multiplication is a useful operation; it just doesn’t correspond to any vector space concept.
Similarly for polynomials/function spaces. When we talk about functions as a vector space, we are just concerning ourselves with the ability to add and scale functions. The act of applying the function to some particular input doesn’t correspond to any vector space concept. That’s why we can talk about functions like $x^2$ as points in the space of polynomials, and we can perform linear algebra with the polynomials, even though $x^2$ is not a “linear function”.
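To make that concrete, here’s a minimal sketch in Python with numpy (the coefficient encoding is just one illustrative choice): polynomials become vectors of coefficients, and the only operations we use are vector addition and scaling.

```python
import numpy as np

# Represent the polynomial a + b*x + c*x^2 by its coefficient vector [a, b, c].
p = np.array([1.0, 0.0, 2.0])   # 1 + 2x^2
q = np.array([0.0, 3.0, -1.0])  # 3x - x^2

# The only two operations the vector space structure cares about:
print(p + q)    # [1. 3. 1.]   i.e. 1 + 3x + x^2
print(2.5 * p)  # [2.5 0. 5.]  i.e. 2.5 + 5x^2

# Note what we never did: evaluate a polynomial at a point, or multiply two
# polynomials together. Those operations exist, but they aren't part of the
# vector space structure.
```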
A subspace is a subset of a vector space that is closed under the vector space operations (addition and scaling). To check that a subset is a subspace:

- It must contain the zero element.
- It must be closed under addition (adding any two elements of the subset lands you back in the subset).
- It must be closed under scaling (scaling any element of the subset lands you back in the subset).
Intuition: There’s no way out of Flatland. Given the elements of the subspace as a construction kit, there is no way to scale and add them in such a way that you leave the subspace.
All vector spaces are subspaces of themselves.
Some texts use “must be nonempty” instead of “must contain the zero element”. These are equivalent definitions; because a subspace must be closed under scaling, and you can always scale an element by 0, a subspace must contain the zero element.
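Here’s a rough numerical spot-check of those three conditions (a sketch assuming numpy; the two example lines are arbitrary choices, and checking a handful of samples is of course not a proof of closure): a line through the origin passes, a shifted line fails.

```python
import numpy as np

def on_line_through_origin(v):
    """Membership test for the line y = 2x, which passes through the origin."""
    return bool(np.isclose(v[1], 2 * v[0]))

def on_shifted_line(v):
    """Membership test for the line y = 2x + 1, which misses the origin."""
    return bool(np.isclose(v[1], 2 * v[0] + 1))

def spot_check(member, samples):
    """Spot-check the three subspace conditions on a few sample vectors."""
    has_zero = member(np.zeros(2))
    closed_add = all(member(u + v) for u in samples for v in samples)
    closed_scale = all(member(c * v) for c in (-1.0, 0.0, 3.0) for v in samples)
    return has_zero, closed_add, closed_scale

line = [np.array([1.0, 2.0]), np.array([-2.0, -4.0])]
shifted = [np.array([0.0, 1.0]), np.array([1.0, 3.0])]

print(spot_check(on_line_through_origin, line))  # (True, True, True)
print(spot_check(on_shifted_line, shifted))      # (False, False, False)
```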
Examples.
Counterexamples.
Consider the real plane and think about a nonzero point $v$. Remember that a subspace is closed under scaling. So if you include $v$ you must also include $2v$, and $3v$, and $-v$, and $0.5v$. In fact, just by knowing that the subspace contains $v$, you can conclude that it also needs to include the whole line $\{cv : c \in \mathbb{R}\}$.

Now consider adding a point $w$ that isn’t on that line. You now have two vectors which aren’t collinear, and you can start scaling and adding them to trace out the entire plane. So any subspace of the real plane containing $v$ and $w$ is the whole plane.
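A quick numerical illustration of that claim (a sketch assuming numpy; the vectors and the target point are arbitrary choices): `np.linalg.solve` finds the coefficients that combine two non-collinear vectors into any target point in the plane.

```python
import numpy as np

# Two non-collinear vectors in the plane (chosen arbitrarily for illustration).
u = np.array([1.0, 2.0])
w = np.array([0.0, 1.0])

# Any target point is some combination a*u + b*w. Solving the 2x2 system
# [u w] @ [a, b] = target finds the coefficients.
target = np.array([-3.0, 7.0])
a, b = np.linalg.solve(np.column_stack([u, w]), target)
print(a, b)                                # -3.0 13.0
print(np.allclose(a * u + b * w, target))  # True
```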
Consider any vector space $V$, and think about the subset $\{0\}$ containing only the zero object of $V$. It contains zero, and adding or scaling zero only ever produces zero again, so it is closed under both operations.

So this subset is indeed a subspace. This is called the zero subspace of $V$.
A minimal set of vectors which can be used to span an entire space.
For example, the two vectors $(1, 0)$ and $(0, 1)$ are enough to span the entire real plane.
Todo blah blah
The size of that set.
The dimension of the smallest space containing a single nonzero vector (aka a line) is 1. Planes are 2. Volumes are 3. Etc. It’s like the number of different “directions” or “degrees of freedom” in the space.
The dimension need not be finite. The space of all polynomials has basis $\{1, x, x^2, x^3, \ldots\}$, and that basis has infinite size.
The basis of the zero subspace is the empty set (instead of a set containing just the zero element), so the dimension of the zero subspace is 0.
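If you want to compute a dimension numerically, here’s one sketch (assuming numpy; `np.linalg.matrix_rank` counts independent columns, which is exactly the dimension of the span): stack the spanning vectors as columns, and notice that redundant vectors don’t raise the count.

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = v1 + v2  # redundant: already in the span of v1 and v2

# matrix_rank counts independent columns, i.e. the dimension of the span.
print(np.linalg.matrix_rank(np.column_stack([v1])))          # 1 -- a line
print(np.linalg.matrix_rank(np.column_stack([v1, v2])))      # 2 -- a plane
print(np.linalg.matrix_rank(np.column_stack([v1, v2, v3])))  # still 2
```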
A function from one vector space to another that preserves the structure of the vector space. Category theorists would call this a “morphism”, I think?
Basically, a function $f$ that “distributes” over both vector space operations: $f(u + v) = f(u) + f(v)$ and $f(cv) = c\,f(v)$.
For example, 2x2 matrices can be converted into 4-element vectors by plucking out the four components in some order. This is a linear map because adding and scaling matrices corresponds to adding and scaling the vectors, and the zero matrix is sent to the zero vector. (Many vector spaces can be converted to and from $n$-element vectors, for some $n$.)
You can also take 2x2 matrices to a real number by picking the upper-left corner. Or convert matrices to a real number by taking the sum of the elements. These maps “throw away information”, but that’s fine.
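Here’s a spot-check of that first example (a sketch assuming numpy; row-major is an arbitrary pick for the “some order” above):

```python
import numpy as np

def flatten(M):
    """Send a 2x2 matrix to a 4-element vector (row-major order)."""
    return M.reshape(4)

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, -1.0], [5.0, 2.0]])

# The map distributes over both vector space operations...
print(np.allclose(flatten(A + B), flatten(A) + flatten(B)))  # True
print(np.allclose(flatten(2.0 * A), 2.0 * flatten(A)))       # True
# ...and it sends the zero matrix to the zero vector.
print(flatten(np.zeros((2, 2))))                             # [0. 0. 0. 0.]
```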
All matrices $A$ can be used as linear maps $v \mapsto Av$. Here, $v$ stands for some vector.
Generally, when we use a “space” term on a matrix (like when we talk about the span of $A$, or the dimension of $A$), we are talking about the properties of the map $v \mapsto Av$.
The set of $v$ where $Av = 0$. It is remarkable that this is a null space and not just a null set.
Recall that $v \mapsto Av$ is a linear map and all linear maps leave 0 fixed, so the zero vector is always in the null space. Some matrices don’t “squish” space, so the only vector that ends up at zero is the vector that was already at 0. In these cases, the null space of that matrix is just the zero space.
Other matrices squish more points onto zero, and then the null space is larger.
The dimension of the null space is called the “nullity”. A matrix with high nullity squishes space a lot, and a matrix with low nullity squishes space a little or not at all.
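Here’s a sketch of both cases (assuming scipy, whose `scipy.linalg.null_space` returns a basis for the null space as the columns of a matrix; the example matrices are arbitrary):

```python
import numpy as np
from scipy.linalg import null_space

no_squish = np.array([[1.0, 2.0],
                      [3.0, 4.0]])  # invertible: doesn't squish the plane
squisher = np.array([[1.0, 2.0],
                     [2.0, 4.0]])   # squishes the plane onto a line

# null_space returns a matrix whose columns are a basis for the null space,
# so the number of columns is the nullity.
print(null_space(no_squish).shape[1])  # 0 -- only the zero vector maps to 0
print(null_space(squisher).shape[1])   # 1 -- a whole line maps to 0
```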
The set of possible values $Av$, where $v$ ranges over all vectors. Again, interesting that this always forms a space, not just a set.
Equivalently, the column space is the span of the matrix’s column vectors: $Av$ is just a linear combination of the columns of $A$, weighted by the entries of $v$.
Some matrices don’t squish space, so the range space is the same as the whole output space. Other matrices squish everything down into a subspace.
The dimension of the column space is called the “rank”. A matrix with high rank “covers” lots of the output space, and a matrix with low rank squishes space a lot.
If a matrix has $n$ columns, then the nullity of that matrix, plus the rank of that matrix, is $n$. (This is the rank-nullity theorem.)
In other words: A matrix can choose to send some values to 0, shrinking its column space – but then the null space grows by the same amount. In this sense the null space and the column space are sort of “opposites”.
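A numerical spot-check of that trade-off (a sketch assuming numpy and scipy; the example matrices are arbitrary):

```python
import numpy as np
from scipy.linalg import null_space

examples = [
    np.array([[1.0, 2.0], [3.0, 4.0]]),            # rank 2, nullity 0
    np.array([[1.0, 2.0], [2.0, 4.0]]),            # rank 1, nullity 1
    np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]),  # rank 2, nullity 1
]

for A in examples:
    rank = np.linalg.matrix_rank(A)
    nullity = null_space(A).shape[1]
    print(rank + nullity == A.shape[1])  # True every time: rank + nullity = n
```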
Just as the column space is the span of the column vectors, the row space is the span of the row vectors.
Important things to know:

- Row operations don’t change the row space.
- The row space of $A$ is the column space of $A^T$, and vice versa.
Therefore if you transpose a matrix, do row operations, and transpose it back, you preserve the column space. This is a great way to actually state the column space of a matrix in a simple form: transpose, reduce, transpose.
Maybe you wonder why we have “row operations” and not “column operations”. This is why: column operations are redundant. A column operation is just a row operation performed on the transpose.
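Here’s the transpose-reduce-transpose recipe as a sketch (assuming sympy, whose `Matrix.rref()` does the row reduction; the example matrix is arbitrary):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])

# Transpose, row-reduce, transpose back. rref() returns the reduced matrix
# plus the pivot column indices; we only need the matrix here.
reduced, _ = A.T.rref()
basis = reduced.T
print(basis)  # Matrix([[1, 0, 0], [2, 0, 0], [0, 1, 0]])
# The nonzero columns, (1, 2, 0) and (0, 0, 1), are a tidy basis for the
# column space of A.
```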