Part of the linear algebra notes

Eigenstuff

(just taking notes during class on my computer)

Examples:

The zero vector satisfies Av = λv for any λ, which is boring, so by convention eigenvectors have to be nonzero and we don’t count it.

The amount that a particular eigenvector gets scaled during the transformation is the eigenvalue. For example, if a transformation A takes vector v to 2v, then v is an eigenvector of A and its eigenvalue is 2.
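As a quick sanity check of the “A takes v to 2v” relation, here’s a toy example in NumPy (the matrix and vector are my own made-up illustration, not from class):

```python
import numpy as np

# Made-up example: stretches the x-axis by 2 and the y-axis by 3.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
v = np.array([1.0, 0.0])   # candidate eigenvector

# A v lands exactly on 2 v, so v is an eigenvector with eigenvalue 2.
print(A @ v)                       # [2. 0.]
print(np.allclose(A @ v, 2 * v))   # True
```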

trick for finding eigenvectors

It feels like it’d be easier to find the eigenvectors first and then look for their eigenvalues. But it’s actually easier to find the eigenvalues first and then look for the corresponding vectors.

The usual trick is to subtract some number λ from everything along the diagonal and tweak λ until the determinant equals 0.

A - λI = \begin{bmatrix}a - λ & b \\ c & d - λ\end{bmatrix}, \quad \det(A - λI) = (a - λ)(d - λ) - bc

That determinant is a quadratic in λ (there’s a good 3blue1brown video about what it means geometrically). Solving it gives you some solutions for λ, and you plug each of those lambdas into

(A - λ_{i}I)x = 0

for each eigenvalue λ_{0}, λ_{1}, etc., where x is the eigenvector you’re looking for, and λ_{i}I is like the identity matrix but with λ_{i} in place of the 1’s on the diagonal.
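The whole procedure (solve the quadratic for λ, then plug each λ back in and solve for x) can be sketched in NumPy. The 2×2 matrix here is a made-up example of mine, not one from class, and I pull the null-space basis out of the SVD rather than doing row reduction by hand:

```python
import numpy as np

# Made-up example matrix (symmetric, so the eigenvalues come out real).
a, b, c, d = 2.0, 1.0, 1.0, 2.0
A = np.array([[a, b], [c, d]])

# det(A - λI) = λ^2 - (a + d)λ + (ad - bc): solve with the quadratic formula.
tr, det = a + d, a * d - b * c
disc = np.sqrt(tr * tr - 4 * det)
eigenvalues = [(tr - disc) / 2, (tr + disc) / 2]
print(eigenvalues)                               # [1.0, 3.0] for this A

for lam in eigenvalues:
    # Eigenvectors for λ span the null space of (A - λI).  The right-singular
    # vector belonging to the zero singular value is a basis for that space.
    _, _, Vt = np.linalg.svd(A - lam * np.eye(2))
    x = Vt[-1]                                   # singular values sort descending
    assert np.allclose(A @ x, lam * x)           # A x = λ x, so x works
    print(lam, x)
```

For λ = 3 this recovers a vector proportional to (1, 1), and for λ = 1 one proportional to (1, −1), up to sign and scale.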

algebraic multiplicity

you might get a characteristic polynomial like (1 - λ)(1 - λ)(2 - λ)

it still has 3 eigenvalues counting multiplicity even though it factors as (1 - λ)^{2}(2 - λ): λ = 1 counts twice (its algebraic multiplicity is 2). That doesn’t always mean 3 independent eigenvectors, though; how many you actually get is the geometric multiplicity.
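To see the difference between counting roots and counting independent eigenvectors, here are two made-up 3×3 matrices (my own examples) that both have characteristic polynomial (1 - λ)^{2}(2 - λ) but different numbers of independent eigenvectors for λ = 1:

```python
import numpy as np

D = np.diag([1.0, 1.0, 2.0])            # plain diagonal version
J = np.array([[1.0, 1.0, 0.0],          # same polynomial, but with a shear
              [0.0, 1.0, 0.0],          # coupling the two λ = 1 directions
              [0.0, 0.0, 2.0]])

for M in (D, J):
    # eigenvalues counted with algebraic multiplicity: [1, 1, 2] both times
    print(np.sort(np.linalg.eigvals(M).real).round(6))
    # independent eigenvectors for λ = 1
    # = dimension of the null space of (M - 1*I)
    geo = 3 - np.linalg.matrix_rank(M - np.eye(3))
    print("independent eigenvectors for λ=1:", geo)
```

D gives 2 independent eigenvectors for λ = 1, while J gives only 1, even though λ = 1 has algebraic multiplicity 2 in both.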

eigenspace

uhhhhh