
Section 4.3 Diagonalizable Matrices, Eigenvalues, and Eigenvectors

Subsection 4.3.1 Diagonalizable Matrices

Definition 4.3.1 Diagonalizable

We say that an \(n\times n\) matrix \(A\) is diagonalizable if it is similar (Definition 4.2.2) to a diagonal matrix \(D\text{.}\) That is, if there exists an invertible matrix \(P\) such that

\begin{equation*} A=PDP^{-1}. \end{equation*}
Investigation 4.3.1 A First Basic Example

Consider the matrix

\begin{equation*} A=\left[ \begin{array}{rr} 4 \amp -2 \\ 1 \amp 1 \end{array} \right] \end{equation*}

and let's try to find \(D\) and \(P\) such that

\begin{equation*} A=PDP^{-1}. \end{equation*}

Note, since \(P\) is invertible, multiplying both sides on the right by \(P\) shows this is the same as

\begin{equation*} AP=PD, \end{equation*}

so letting

\begin{equation*} P=\left[ \begin{array}{rr} p_1 \amp p_2 \\ p_3 \amp p_4 \end{array} \right]\ \mbox{and}\ D=\left[ \begin{array}{rr} \lambda_1 \amp 0 \\ 0 \amp \lambda_2 \end{array} \right] \end{equation*}

then we get the system of equations

\begin{align*} 4p_1-2p_3\amp =\lambda_1 p_1\\ p_1+p_3\amp =\lambda_1 p_3\\ 4p_2-2p_4\amp =\lambda_2 p_2\\ p_2+p_4\amp =\lambda_2 p_4. \end{align*}
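To see where these equations come from, compute both products:

\begin{align*} AP\amp =\left[ \begin{array}{rr} 4 \amp -2\\ 1 \amp 1 \end{array} \right] \left[ \begin{array}{rr} p_1 \amp p_2\\ p_3 \amp p_4 \end{array} \right]= \left[ \begin{array}{rr} 4p_1-2p_3 \amp 4p_2-2p_4\\ p_1+p_3 \amp p_2+p_4 \end{array} \right]\\ PD\amp =\left[ \begin{array}{rr} p_1 \amp p_2\\ p_3 \amp p_4 \end{array} \right] \left[ \begin{array}{rr} \lambda_1 \amp 0\\ 0 \amp \lambda_2 \end{array} \right]= \left[ \begin{array}{rr} \lambda_1 p_1 \amp \lambda_2 p_2\\ \lambda_1 p_3 \amp \lambda_2 p_4 \end{array} \right], \end{align*}

and match the entries column by column.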

Since the second pair of equations has the same form as the first pair (with \(p_2\text{,}\) \(p_4\text{,}\) and \(\lambda_2\) in place of \(p_1\text{,}\) \(p_3\text{,}\) and \(\lambda_1\)), we really just need to solve one pair. If we set up an augmented matrix and solve we get:

\begin{align*} \left[ \begin{array}{cc|c} (4-\lambda) \amp -2 \amp 0\\ 1 \amp (1-\lambda) \amp 0 \end{array} \right]\amp \leadsto \left[ \begin{array}{cc|c} 1 \amp (1-\lambda) \amp 0\\ 0 \amp -2-(1-\lambda)(4-\lambda) \amp 0 \end{array} \right]\\ \amp \leadsto \left[ \begin{array}{cc|c} 1 \amp (1-\lambda) \amp 0\\ 0 \amp -1(\lambda^2-5\lambda+6) \amp 0 \end{array} \right]. \end{align*}
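Here we swapped the two rows and then subtracted \((4-\lambda)\) times the new first row from the second row; the last step uses \(-2-(1-\lambda)(4-\lambda)=-2-(\lambda^2-5\lambda+4)=-(\lambda^2-5\lambda+6)\text{.}\)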

So, for a non-zero solution we need \(p_3\neq0\text{,}\) and then the second row forces \(\lambda^2-5\lambda+6=(\lambda-2)(\lambda-3)=0\text{,}\) giving \(\lambda = 2\) or \(3\text{.}\) The first row says \(p_1=(\lambda-1)p_3\text{,}\) so \(p_1=p_3\) or \(p_1=2\, p_3\) respectively. Therefore, we get two values for \(\lambda\text{,}\) each with its own family of vectors: if \(\lambda=2\) then the vector we want looks like

\begin{equation*} \vec{x}=k\, \left( \begin{array}{r} 1\\ 1 \end{array} \right) \end{equation*}

and if \(\lambda=3\) then the vector we want looks like

\begin{equation*} \vec{x}=k\, \left( \begin{array}{r} 2\\ 1 \end{array} \right). \end{equation*}

Finally, this is what we need because

\begin{align*} A\left[ \begin{array}{rr} 1 \amp 2\\ 1 \amp 1 \end{array} \right]\amp= \left[ \begin{array}{rr} 2 \amp 6\\ 2 \amp 3 \end{array} \right]\\ \amp=\left[ \begin{array}{rr} 1 \amp 2\\ 1 \amp 1 \end{array} \right] \left[ \begin{array}{rr} 2 \amp 0\\ 0 \amp 3 \end{array} \right]. \end{align*}

Thus, with

\begin{equation*} P=\left[ \begin{array}{rr} 1 \amp 2 \\ 1 \amp 1 \end{array} \right]\ \mbox{and}\ D=\left[ \begin{array}{rr} 2 \amp 0 \\ 0 \amp 3 \end{array} \right], \end{equation*}

we finally have \(A=PDP^{-1}\text{.}\)
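As a quick check, \(\det P = 1\cdot 1-2\cdot 1=-1\text{,}\) so

\begin{equation*} P^{-1}=\left[ \begin{array}{rr} -1 \amp 2 \\ 1 \amp -1 \end{array} \right], \end{equation*}

and indeed

\begin{align*} PDP^{-1}\amp =\left[ \begin{array}{rr} 2 \amp 6\\ 2 \amp 3 \end{array} \right] \left[ \begin{array}{rr} -1 \amp 2\\ 1 \amp -1 \end{array} \right]\\ \amp =\left[ \begin{array}{rr} 4 \amp -2\\ 1 \amp 1 \end{array} \right]=A. \end{align*}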

The vectors we found along the way and the values for \(\lambda\) both play a special role in linear algebra; they are called eigenvectors and eigenvalues, and they are the topic of the next section (Subsection 4.3.2).

Investigation 4.3.2 A Non-Example

Not every matrix can be diagonalized; one standard example is sketched below.
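Consider the shear matrix

\begin{equation*} B=\left[ \begin{array}{rr} 1 \amp 1 \\ 0 \amp 1 \end{array} \right]. \end{equation*}

Here \(\det(B-\lambda I)=(1-\lambda)^2\text{,}\) so the only possible value of \(\lambda\) is \(1\text{,}\) and solving \((B-I)\vec{x}=\vec{0}\) forces the second entry of \(\vec{x}\) to be zero, so every such non-zero vector is a multiple of \(\left( \begin{array}{r} 1\\ 0 \end{array} \right)\text{.}\) As in Investigation 4.3.1, the columns of any \(P\) satisfying \(BP=PD\) would have to be vectors of this form, so \(P\) could never be invertible, and no factorization \(B=PDP^{-1}\) with \(D\) diagonal exists.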

Subsection 4.3.2 Eigenvalues and Eigenvectors

Definition 4.3.2 Eigenvalues and Eigenvectors

An eigenvector of a matrix \(A\) is a non-zero vector \(\vec{x}\) such that

\begin{equation*} A\vec{x}=\lambda \vec{x} \end{equation*}

for some constant \(\lambda\text{,}\) which is called the corresponding eigenvalue.
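For example, for the matrix \(A\) from Investigation 4.3.1 we have

\begin{equation*} A\left( \begin{array}{r} 1\\ 1 \end{array} \right)=\left( \begin{array}{r} 2\\ 2 \end{array} \right)=2\left( \begin{array}{r} 1\\ 1 \end{array} \right), \end{equation*}

so \(\left( \begin{array}{r} 1\\ 1 \end{array} \right)\) is an eigenvector of \(A\) with corresponding eigenvalue \(2\text{.}\)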

Investigation 4.3.3

Let's revisit Investigation 4.3.1; since we already know the answers we should get, we can focus on how we might get them. We are looking at

\begin{equation*} A\vec{x}=\lambda \vec{x} \end{equation*}

which is the same as

\begin{equation*} A\vec{x}-\lambda \vec{x}=\vec{0}\ \mbox{or}\ (A-\lambda I)\, \vec{x}=\vec{0}. \end{equation*}

If \(\vec{x}\neq \vec{0}\text{,}\) then \((A-\lambda I)\) must be singular (a.k.a. non-invertible, a zero divisor) or the zero matrix; either way, \(\det(A-\lambda I)=0\text{.}\) That is,

\begin{align*} \det(A-\lambda I)\amp = \det\left[ \begin{array}{cc} 4-\lambda \amp -2\\ 1 \amp 1-\lambda \end{array} \right]\\ \amp = (4-\lambda)(1-\lambda)+2\\ \amp = \lambda^2-5\lambda+6\\ \amp = 0. \end{align*}

Factoring, \(\lambda^2-5\lambda+6=(\lambda-2)(\lambda-3)\text{,}\) so solving for \(\lambda\) we get \(\lambda = 2\) or \(3\text{,}\) as before.

If we now take our values for \(\lambda\) and substitute them back into our original equation we can find our values for \(\vec{x}\text{.}\) Let \(\lambda=2\) so that we are solving

\begin{equation*} \left[ \begin{array}{rr} 2 \amp -2 \\ 1 \amp -1 \end{array} \right]\vec{x}=\vec{0}. \end{equation*}

Row reducing the matrix, we quickly get one free variable, and all the possible solutions look like

\begin{equation*} \vec{x}=k\, \left( \begin{array}{r} 1\\ 1 \end{array} \right). \end{equation*}
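Explicitly, the reduction is

\begin{equation*} \left[ \begin{array}{rr|r} 2 \amp -2 \amp 0\\ 1 \amp -1 \amp 0 \end{array} \right]\leadsto \left[ \begin{array}{rr|r} 1 \amp -1 \amp 0\\ 0 \amp 0 \amp 0 \end{array} \right], \end{equation*}

so the first entry of \(\vec{x}\) equals the second, which is free.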

Similarly, if we let \(\lambda=3\) we need to solve

\begin{equation*} \left[ \begin{array}{rr} 1 \amp -2 \\ 1 \amp -2 \end{array} \right]\vec{x}=\vec{0}, \end{equation*}

which yields solutions of the form

\begin{equation*} \vec{x}=k\, \left( \begin{array}{r} 2\\ 1 \end{array} \right). \end{equation*}
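Here both rows say that the first entry of \(\vec{x}\) is twice the second, which is free. These are exactly the eigenvalues and eigenvectors found in Investigation 4.3.1; for instance,

\begin{equation*} A\left( \begin{array}{r} 2\\ 1 \end{array} \right)=\left( \begin{array}{r} 6\\ 3 \end{array} \right)=3\left( \begin{array}{r} 2\\ 1 \end{array} \right). \end{equation*}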
Final Remarks: