## Section 2.2 Linear Independence and Bases

### Subsection 2.2.1 Independence Day

Consider the following set of vectors:

\begin{equation*} S=\left\{ \left[ \begin{array}{r} 1 \\ 0 \end{array} \right], \left[ \begin{array}{r} -4 \\ 1 \end{array} \right], \left[ \begin{array}{r} 1 \\ -2 \end{array} \right] \right\} \end{equation*}

From our work in Subsection 2.1.1 we know that each of these can be written as a combination of the elementary vectors $\vec{e}_1$ and $\vec{e}_2\text{.}$ They can also be written as combinations of each other like so:

\begin{equation*} \left[ \begin{array}{r} 1 \\ -2 \end{array} \right]= -2\, \left[ \begin{array}{r} -4 \\ 1 \end{array} \right]-7\, \left[ \begin{array}{r} 1 \\ 0 \end{array} \right], \end{equation*}

or

\begin{equation*} \left[ \begin{array}{r} 1 \\ 0 \end{array} \right]= -\frac{2}{7}\, \left[ \begin{array}{r} -4 \\ 1 \end{array} \right]-\frac{1}{7}\, \left[ \begin{array}{r} 1 \\ -2 \end{array} \right], \end{equation*}

or finally

\begin{equation*} \left[ \begin{array}{r} -4 \\ 1 \end{array} \right]= -\frac{1}{2}\, \left[ \begin{array}{r} 1 \\ -2 \end{array} \right]-\frac{7}{2}\, \left[ \begin{array}{r} 1 \\ 0 \end{array} \right]. \end{equation*}

Because we can write these vectors in terms of each other in this way, we say that they are linearly dependent.
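The three dependence relations above are easy to check numerically. Here is a quick sanity check, sketched in Python with NumPy (not part of the original text):

```python
import numpy as np

# The three vectors from the set S above.
u = np.array([1, 0])
v = np.array([-4, 1])
w = np.array([1, -2])

# Verify the three dependence relations stated above.
print(np.array_equal(w, -2 * v - 7 * u))           # True
print(np.allclose(u, -(2 / 7) * v - (1 / 7) * w))  # True
print(np.allclose(v, -(1 / 2) * w - (7 / 2) * u))  # True
```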

###### Definition 2.2.1. Linearly Independent.

A set of vectors, $S=\left\{\vec{v}_1,\vec{v}_2,\vec{v}_3,\ldots,\vec{v}_n\right\}\text{,}$ is linearly independent if no single vector in the set may be written as a linear combination of the other vectors in the set. Equivalently, if the homogeneous system of equations

\begin{equation*} a_1\, \vec{v}_1+a_2\, \vec{v}_2+a_3\, \vec{v}_3+\cdots + a_n\, \vec{v}_n=\vec{0} \end{equation*}

has only the trivial solution $a_i=0,\ \forall\, i\text{.}$ A set of vectors which is not linearly independent is called linearly dependent.

In contrast to the example above, here is a linearly independent set:

\begin{equation*} S=\left\{ \left[ \begin{array}{r} 1 \\ -2 \\ 0 \end{array} \right], \left[ \begin{array}{r} 1 \\ 1 \\ 0 \end{array} \right], \left[ \begin{array}{r} 1 \\ 0 \\1 \end{array} \right] \right\}. \end{equation*}

We could demonstrate this with a lengthy argument about what combinations are possible or not possible, but it is better to set up and solve the system of equations

\begin{equation*} a_1\, \left[ \begin{array}{r} 1 \\ -2 \\ 0 \end{array} \right]+a_2\, \left[ \begin{array}{r} 1 \\ 1 \\ 0 \end{array} \right]+a_3\, \left[ \begin{array}{r} 1 \\ 0 \\1 \end{array} \right]= \vec{0} \end{equation*}

using the techniques from Section 1.1 to show that $a_1=a_2=a_3=0\text{,}$ that is, that there is only one solution to this system. In fact, a linearly dependent set of vectors always corresponds to a system with a free variable. Underdetermined systems like the first example above, which is really the same as Example 1.1.4, will always be linearly dependent. A set of vectors which is linearly independent always corresponds to a system which is overdetermined, or which has the same number of variables as equations with no free variables, like the vectors in Investigation 1.1.2 or Example 1.1.5.
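Rather than reducing by hand, the homogeneous system above can be checked with a computer algebra system. A sketch using SymPy (an illustration, not the method of Section 1.1 itself):

```python
from sympy import Matrix

# Coefficient matrix whose columns are the three vectors above.
A = Matrix([[1, 1, 1],
            [-2, 1, 0],
            [0, 0, 1]])

# Row reduce the homogeneous system; if every column is a pivot column,
# the only solution is a1 = a2 = a3 = 0 and the set is independent.
rref, pivots = A.rref()
print(pivots)   # (0, 1, 2)
```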

Important Point: The following implications are only true in one direction!

• If a set of vectors is linearly independent, then it corresponds to a system which is overdetermined or which has the same number of variables as equations.
• If a system is underdetermined, then it corresponds to a set of vectors which are linearly dependent.

To see that the converses of these statements are not true (the converse of a statement "If P, then Q" is "If Q, then P"), complete the following exercises.

Solve this overdetermined system to see that there is a free variable:

\begin{equation*} \left[ \begin{array}{r} 1 \\ 2 \\ 3 \end{array} \right]x+ \left[ \begin{array}{r} 2 \\ 4 \\ 6 \end{array} \right]y= \left[ \begin{array}{r} 0 \\ 0 \\ 0 \end{array} \right] \end{equation*}

After reducing we see that $x=-2y$ and $y$ is free.

Solve this "determined" system to see that there are free variables:

\begin{equation*} \left[ \begin{array}{r} 1 \\ 2 \\ 3 \end{array} \right]x+ \left[ \begin{array}{r} 2 \\ 4 \\ 6 \end{array} \right]y+ \left[ \begin{array}{r} 3 \\ 6 \\ 9 \end{array} \right]z= \left[ \begin{array}{r} 0 \\ 0 \\ 0 \end{array} \right] \end{equation*}

After reducing we see that $x=-2y-3z$ and $y$ and $z$ are both free.
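Both exercises above come down to the rank of the coefficient matrix. A sketch using SymPy for the second, "determined" system:

```python
from sympy import Matrix

# Coefficient matrix of the "determined" system above; the second and third
# columns are 2 and 3 times the first, so only one pivot survives.
A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [3, 6, 9]])

print(A.rank())     # 1
print(A.rref()[0])  # single nonzero row [1, 2, 3]: y and z are free
```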

### Subsection 2.2.2 A Strong Foundation

As mentioned above, the vectors in Example 1.1.5 are linearly independent, but the system is inconsistent, i.e. there is no solution in that example. From the perspective of Section 2.1, and in particular the example in Figure 2.1.4, the point $(7,2,16)$ is not in the plane determined by the vectors $[3,1,0]$ and $[2,-1,4]\text{.}$ Convince yourself of this by altering the Sage code below to plot the appropriate plane and point:
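The interactive Sage cell is not reproduced here, but the claim can also be checked algebraically rather than graphically. A sketch using SymPy:

```python
from sympy import Matrix, symbols, linsolve, S

a, b = symbols('a b')
v1 = Matrix([3, 1, 0])
v2 = Matrix([2, -1, 4])
p = Matrix([7, 2, 16])

# Try to write p = a*v1 + b*v2; an empty solution set means the point
# (7, 2, 16) is not in the plane spanned by v1 and v2.
sols = linsolve((Matrix.hstack(v1, v2), p), [a, b])
print(sols == S.EmptySet)   # True
```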

This happens because the vectors $[3,1,0]$ and $[2,-1,4]$ do not form a basis for 3D space $\left(\mathbb{R}^3\right)\text{.}$

###### Definition 2.2.4. Basis.

A basis is a set of vectors $\mathcal{B}=\left\{\vec{v}_1,\vec{v}_2,\vec{v}_3,\ldots,\vec{v}_n\right\}$ such that there exists a unique solution to any equation of the form

\begin{equation*} a_1\, \vec{v}_1+a_2\, \vec{v}_2+a_3\, \vec{v}_3+\cdots + a_n\, \vec{v}_n=\vec{v} \end{equation*}

for all $\vec{v}$ in the given vector space. The number of vectors required for a basis in a vector space is called the dimension of the vector space.

Try solving the system of equations:

\begin{equation*} a_1\, \left[ \begin{array}{r} 1 \\ -2 \\ 1 \end{array} \right]+a_2\, \left[ \begin{array}{r} 1 \\ 1 \\ 0 \end{array} \right]= \left[ \begin{array}{r} 0 \\ 0 \\ 13 \end{array} \right] \end{equation*}

you should find that this system is inconsistent; this tells us that the two vectors on the left-hand side do not form a basis for the vector space.

A reduced system should give you something like:

\begin{equation*} \left[ \begin{array}{rr|r} 1 \amp 0 \amp 13\\ 0 \amp 1 \amp -13\\ 0 \amp 1 \amp 26\\ \end{array} \right] \end{equation*}

giving us the contradictory result that $a_2=26$ and $a_2=-13\text{.}$ In terms of what we have been discussing, this means that the point $(0,0,13)$ is not in the plane generated by the two vectors.
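One way to confirm the inconsistency is to ask a computer algebra system for the solution set. A sketch using SymPy:

```python
from sympy import Matrix, symbols, linsolve, S

a1, a2 = symbols('a1 a2')
A = Matrix([[1, 1],
            [-2, 1],
            [1, 0]])
b = Matrix([0, 0, 13])

# linsolve returns the empty set for an inconsistent system.
sols = linsolve((A, b), [a1, a2])
print(sols == S.EmptySet)   # True
```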

Try solving the system of equations:

\begin{equation*} a_1\, \left[ \begin{array}{r} 1 \\ -2 \\ 1 \end{array} \right]+a_2\, \left[ \begin{array}{r} 1 \\ 1 \\ 0 \end{array} \right]+a_3\, \left[ \begin{array}{r} 1 \\ 0 \\ 0 \end{array} \right]+a_4\, \left[ \begin{array}{r} 0 \\ 1 \\ 1 \end{array} \right]= \left[ \begin{array}{r} x \\ y \\ z \end{array} \right] \end{equation*}

you should find that even though we don't know the values of $x\text{,}$ $y\text{,}$ and $z\text{,}$ this system is consistent, and in fact there will always be a free variable and hence infinitely many solutions.

A reduced system should give you something like:

\begin{equation*} \left[ \begin{array}{rrrr|r} 1 \amp 0 \amp 0 \amp 1 \amp z\\ 0 \amp 1 \amp 0 \amp 3 \amp y+2z\\ 0 \amp 0 \amp 1 \amp -4 \amp x-3z-y\\ \end{array} \right] \end{equation*}

so that $a_1=z-a_4\text{,}$ $a_2=y+2z-3a_4\text{,}$ $a_3=x-3z-y+4a_4\text{,}$ and $a_4$ is a free variable, so there are infinitely many solutions. Written as a set of vectors we would say that the solutions are the set

\begin{equation*} \left\{ \left[ \begin{array}{r} z \\ y+2z \\ x-3z-y \\ 0 \end{array} \right]+ \left[ \begin{array}{r} -1 \\ -3 \\ 4 \\ 1 \end{array} \right]\, t : t\in\mathbb{R} \right\}. \end{equation*}
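We can confirm that this one-parameter family really does solve the system for every value of the parameter. A sketch using SymPy:

```python
from sympy import Matrix, symbols

x, y, z, t = symbols('x y z t')

v1 = Matrix([1, -2, 1]); v2 = Matrix([1, 1, 0])
v3 = Matrix([1, 0, 0]);  v4 = Matrix([0, 1, 1])

# The one-parameter family of solutions read off from the reduced system.
a1 = z - t
a2 = y + 2*z - 3*t
a3 = x - 3*z - y + 4*t
a4 = t

# For every value of t this combination should collapse to [x, y, z].
combo = (a1*v1 + a2*v2 + a3*v3 + a4*v4).expand()
print(combo == Matrix([x, y, z]))   # True
```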

Try solving the system of equations:

\begin{equation*} a_1\, \left[ \begin{array}{r} 1 \\ -2 \\ 1 \end{array} \right]+a_2\, \left[ \begin{array}{r} 1 \\ 1 \\ 0 \end{array} \right]+a_3\, \left[ \begin{array}{r} 1 \\ 0 \\ 0 \end{array} \right]= \left[ \begin{array}{r} x \\ y \\ z \end{array} \right] \end{equation*}

you should find that even though we don't know the values of $x\text{,}$ $y\text{,}$ and $z\text{,}$ this system is consistent and there will always be a unique solution.

A reduced system should give you something like:

\begin{equation*} \left[ \begin{array}{rrr|r} 1 \amp 0 \amp 0 \amp z\\ 0 \amp 1 \amp 0 \amp y+2z\\ 0 \amp 0 \amp 1 \amp x-3z-y\\ \end{array} \right] \end{equation*}

so that $a_1=z\text{,}$ $a_2=y+2z\text{,}$ and $a_3=x-3z-y\text{.}$ So, when the coefficient matrix can be placed in reduced row echelon form with no extra columns or rows, that is, when it reduces to the identity, we get a unique solution.
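Because the coefficient matrix reduces to the identity, the system can be solved symbolically for an arbitrary right-hand side. A sketch using SymPy:

```python
from sympy import Matrix, symbols

x, y, z = symbols('x y z')

# Coefficient matrix whose columns are the three vectors above.
A = Matrix([[1, 1, 1],
            [-2, 1, 0],
            [1, 0, 0]])

# A is invertible (it reduces to the identity), so every right-hand side
# [x, y, z] yields exactly one solution.
sol = A.solve(Matrix([x, y, z])).expand()
print(sol == Matrix([z, y + 2*z, x - y - 3*z]))   # True
```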

Try solving the system of equations:

\begin{equation*} a_1\, \left[ \begin{array}{r} 1 \\ -2 \\ 1 \end{array} \right]+a_2\, \left[ \begin{array}{r} 0 \\ 1 \\ 1 \end{array} \right]+a_3\, \left[ \begin{array}{r} 1 \\ -3 \\ 0 \end{array} \right]= \left[ \begin{array}{r} x \\ y \\ z \end{array} \right] \end{equation*}

you should find that even though this looks similar to the previous exercise, there are in fact values of $x\text{,}$ $y\text{,}$ and $z$ for which there is no solution. Find an example of this and try to explain why it happens.

A reduced system should give you something like:

\begin{equation*} \left[ \begin{array}{rrr|r} 1 \amp 0 \amp 1 \amp x\\ 0 \amp 1 \amp -1 \amp y+2x\\ 0 \amp 0 \amp 0 \amp z-3x-y\\ \end{array} \right] \end{equation*}

so that if $z-3x-y$ is not zero we get a contradiction in the last row. An example of values making the system inconsistent would be $[x,y,z]=[1,1,5]\text{.}$ What this tells us is that the three vectors on the left-hand side of the equation span at most a plane, and that some points, such as $(1,1,5)\text{,}$ are not on that plane.
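The dependence among the three vectors can be seen directly from the rank of the coefficient matrix. A sketch using SymPy:

```python
from sympy import Matrix

# Columns are the three vectors above; the third equals the first minus
# the second, so the columns span only a plane.
A = Matrix([[1, 0, 1],
            [-2, 1, -3],
            [1, 1, 0]])

print(A.rank())     # 2
print(A.rref()[0])  # last row is all zeros
```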

It should hopefully be clear that by taking linear combinations of $1\text{,}$ $x\text{,}$ and $x^2$ we can get any quadratic $a_0+a_1\, x+a_2\, x^2\text{;}$ thus the set $\mathcal{B}=\left\{1,x,x^2\right\}$ is a basis for the set of all quadratics. What is not so obvious is that so is the set $\mathcal{C}=\left\{1-2x+x^2, 1+x, 1\right\}\text{.}$ However, if we recognize that we can relate systems of polynomials to vector equations, then solving

\begin{equation*} a_1(1-2x+x^2)+a_2(1+x)+a_3(1)=b_0+b_1x+b_2x^2 \end{equation*}

is the same as solving

\begin{equation*} a_1\, \left[ \begin{array}{r} 1 \\ -2 \\ 1 \end{array} \right]+a_2\, \left[ \begin{array}{r} 1 \\ 1 \\ 0 \end{array} \right]+a_3\, \left[ \begin{array}{r} 1 \\ 0 \\ 0 \end{array} \right]= \left[ \begin{array}{r} b_0 \\ b_1 \\ b_2 \end{array} \right] \end{equation*}

which is the same as Checkpoint 2.2.7 and so we know there is a solution.
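The coefficient-matching step can be automated: expand the polynomial identity and collect the equations on each power of $x$. A sketch using SymPy:

```python
from sympy import symbols, Poly, linsolve, FiniteSet

x, b0, b1, b2 = symbols('x b0 b1 b2')
a1, a2, a3 = symbols('a1 a2 a3')

# Match coefficients of 1, x, x^2 on both sides of the polynomial identity.
p = a1*(1 - 2*x + x**2) + a2*(1 + x) + a3 - (b0 + b1*x + b2*x**2)
eqs = Poly(p, x).all_coeffs()          # one equation per power of x
sol = linsolve(eqs, [a1, a2, a3])

# There is exactly one solution for any target quadratic, so C is a basis.
print(sol == FiniteSet((b2, b1 + 2*b2, b0 - b1 - 3*b2)))   # True
```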

What this tells us is that we can apply the concept of a basis to any type of vector space, as long as it satisfies Definition 2.1.3. Also, on a final note, for each vector space there is typically a basis which is considered the standard basis. For $\mathbb{R}^3$ it is

\begin{equation*} \mathcal{E}=\left\{ \left[ \begin{array}{r} 1 \\ 0 \\ 0 \end{array} \right], \left[ \begin{array}{r} 0 \\ 1 \\ 0 \end{array} \right], \left[ \begin{array}{r} 0 \\ 0 \\1 \end{array} \right] \right\} \end{equation*}

and for quadratic polynomials it is

\begin{equation*} \mathcal{E}=\left\{1,x,x^2\right\}. \end{equation*}

We will see in Section 3.4 that we can directly relate the standard basis to any other basis.

###### Section Vocabulary.

Linearly Independent, Linearly Dependent, Basis, Standard Basis