
Section 2.1 What is a Vector Space

Subsection 2.1.1 Something Old, Something New

Long ago and far away you learned to plot points in two dimensions with \(x\) and \(y\) coordinates:

Figure 2.1.1. A Few Points

But how can we understand this in terms of vectors, and how can we most efficiently describe infinitely many points and vectors?

The simplest thing to do would be to have a single vector like \(\vec{v}=\left[ 2,3\right]\text{,}\) but with infinitely many points to choose from this isn't very efficient. A little better is to recognize that this vector can be written as a sum of two vectors, like so

\begin{equation*} \vec{v}=\left[ 2,3\right]=\left[ 2,0\right]+\left[ 0,3\right]. \end{equation*}

Best of all is to realize that this is a linear combination of our elementary vectors

\begin{equation*} \vec{v}=\left[ 2,3\right]=2\vec{e}_1+3\vec{e}_2. \end{equation*}
Figure 2.1.2. A Few Vectors

Use what you learned in Section 1.4 to recreate the image above in the Sage Cell below. Then adjust the Sage code in the cell to create similar pictures for the points \((1,3)\text{,}\) \((-2,1)\text{,}\) and \((-3,-2)\text{.}\) Are you convinced that you can always make this sort of picture? Can you see that this means you can always write any vector as a combination of copies of the elementary vectors?
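One possible starting point for the Sage cell (a minimal sketch, and certainly not the only way to draw the picture) plots the point \((2,3)\) together with the arrow for \(\vec{v}\) and the arrows \(2\vec{e}_1\) and \(3\vec{e}_2\) placed tip to tail:

# Plot the point (2,3) and the vectors that reach it.
p  = point((2, 3), color='red', size=40)
v  = arrow((0, 0), (2, 3), color='blue')     # the vector [2,3]
s1 = arrow((0, 0), (2, 0), color='green')    # 2 copies of e_1
s2 = arrow((2, 0), (2, 3), color='purple')   # 3 copies of e_2, tip to tail
show(p + v + s1 + s2, aspect_ratio=1)

Changing the coordinates in the code above produces the corresponding picture for any other point.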

Because we can always write any vector (or describe steps to any point) using copies of the elementary vectors \(\vec{e}_1=[1,0]\) and \(\vec{e}_2=[0,1]\) we say that they form a basis, which we will explore in detail in the next section. For now, what this tells us is that we can describe all the points in \(\mathbb{R}^2\) as the set

\begin{equation*} \mathbb{R}^2=\left\{a\, \vec{e}_1+b\, \vec{e}_2:a,b\in\mathbb{R}\right\} \end{equation*}
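As a quick numerical check (a small Sage sketch; the pair \((-3,-2)\) is just one of the points from the exercise above), the coordinates of a point serve directly as the coefficients \(a\) and \(b\):

e1 = vector([1, 0])
e2 = vector([0, 1])
a, b = -3, -2                 # coordinates of one of the points above
print(a*e1 + b*e2)            # (-3, -2), the point itself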

Subsection 2.1.2 It's Not All About the Arrows

Consider the set of all quadratic polynomials

\begin{equation*} \mathcal{P}_2=\left\{ax^2+bx+c:a,b,c\in\mathbb{R}\right\} \end{equation*}

which you should hopefully recognize as all the possible parabolas. If we take two of these, \(v(x)=2x^2+3x\) and \(u(x)=-x^2+5\text{,}\) we can add and subtract them

\begin{equation*} v(x)+u(x)=x^2+3x+5\ \mbox{ and }\ v(x)-u(x)=3x^2+3x-5, \end{equation*}

we can multiply them by constants

\begin{equation*} 5u(x)=-5x^2+25, \end{equation*}

and in fact we can take any linear combinations we would like

\begin{equation*} -2v(x)+5u(x)=-9x^2-6x+25. \end{equation*}
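All of these computations are easy to confirm in a Sage cell (a small sketch using Sage's symbolic variable \(x\)):

x = var('x')
v = 2*x^2 + 3*x
u = -x^2 + 5
print(expand(v + u))          # x^2 + 3*x + 5
print(expand(v - u))          # 3*x^2 + 3*x - 5
print(expand(5*u))            # -5*x^2 + 25
print(expand(-2*v + 5*u))     # -9*x^2 - 6*x + 25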

All of this means that we can manipulate these polynomials just like we do vectors. In fact, if you look back at Subsection 1.2.2 you will see that the coefficients we used for \(u(x)\) and \(v(x)\) are the same as the entries we had in \(\vec{u}\) and \(\vec{v}\text{.}\)

We can take the analogy even further by setting up systems of equations involving polynomials. For example can we find scalars \(a\) and \(b\) such that

\begin{equation*} a\, v(x)+b\, u(x)=19x^2+21x-25? \end{equation*}

Expanding the left hand side we get

\begin{equation*} 2ax^2+3ax-bx^2+5b=19x^2+21x-25 \end{equation*}

which gives us three equations when we compare the coefficients of \(x^2\text{,}\) the coefficients of \(x\text{,}\) and the constant terms:

\begin{align*} 2a-b\amp = 19\\ 3a\amp = 21\\ 5b\amp= -25. \end{align*}

By inspection (i.e. just looking at it for a minute) we can see that \(a=7\) and \(b=-5\text{.}\)
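We can also ask Sage to confirm this (a minimal sketch; we hand it the three coefficient equations directly):

a, b = var('a b')
solve([2*a - b == 19, 3*a == 21, 5*b == -25], a, b)
# [[a == 7, b == -5]]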

Finally, one more observation. If we let \(e_2(x)=x^2\text{,}\) \(e_1(x)=x\text{,}\) and \(e_0(x)=1\text{,}\) then we can write every possible quadratic polynomial (every parabola) as a combination of \(e_2\text{,}\) \(e_1\text{,}\) and \(e_0\text{.}\) This means that we can define a set of elementary polynomials which we can combine to give us all the rest. We can form a basis of polynomials.
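For instance (a one-line Sage check, with e2, e1, and e0 standing for the elementary polynomials just defined), the right hand side of the system above decomposes as \(19e_2+21e_1-25e_0\):

x = var('x')
e2, e1, e0 = x^2, x, 1
print(bool(19*e2 + 21*e1 - 25*e0 == 19*x^2 + 21*x - 25))   # True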

So, these quadratic polynomials can be added like vectors, they can be subtracted like vectors, they can be multiplied by scalars like vectors, they can be put together in linear combinations like vectors, and we can solve systems of equations with them just like vectors. Therefore, they are in a very real sense vectors.

Subsection 2.1.3 It's Kinda All the Same

Definition 2.1.3.

A vector space is a pair of sets, vectors and scalars, together with a pair of binary operations, addition and scalar multiplication, that satisfy the following conditions: if \(v\text{,}\) \(u\text{,}\) and \(w\) are vectors and \(a\) and \(b\) are scalars, then

  • \(a\,v +b\, u\) is another vector (closure)
  • there exists a vector 0 such that \(v+0=0+v=v\) (additive identity)
  • for each \(v\) there exists \(-v\) such that \(v+(-v)=(-v)+v=0\) (additive inverses)
  • there exists a scalar 1 such that \(1\, v=v\) (multiplicative identity)
  • \(v+u=u+v\) (commutative law)
  • \((v+u)+w=v+(u+w)\) and \((ab)v=a(bv)\) (associative laws)
  • \(a(u+v)=au+av\) and \((a+b)v=av+bv\) (distributive laws)
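None of these conditions should feel mysterious for our polynomials. Here is a quick spot check of the commutative and distributive laws for \(u(x)\) and \(v(x)\) from above (a sketch only; checking two instances is of course not a proof):

x = var('x')
v = 2*x^2 + 3*x
u = -x^2 + 5
print(bool(v + u == u + v))                              # commutative law: True
print(bool(expand(3*(u + v)) == expand(3*u + 3*v)))      # distributive law: True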

As observed above, the coefficients in \(v(x)=2x^2+3x\) and \(u(x)=-x^2+5\) are the same as the entries in \(\vec{v}=[2,3,0]\) and \(\vec{u}=[-1,0,5]\) from Subsection 1.2.2, and so we can rewrite the system we solved above as a vector equation

\begin{equation*} a\, \vec{v}+b\, \vec{u} = a\left[ \begin{array}{r} 2\\ 3\\ 0 \end{array} \right] + b\left[ \begin{array}{r} -1\\ 0\\ 5 \end{array} \right]= \left[ \begin{array}{r} 19\\ 21\\ -25 \end{array} \right] \end{equation*}

which we can solve using the techniques from the previous chapter, as follows:

\begin{align*} \left[ \begin{array}{rr|r} 2 \amp -1 \amp 19\\ 3 \amp 0 \amp 21\\ 0 \amp 5 \amp -25 \end{array} \right]\amp\amp\amp \stackrel{\large \frac{1}{3}R_2,\, \frac{1}{5}R_3}{\Huge \leadsto}\amp\amp \left[ \begin{array}{rr|r} 2 \amp -1 \amp 19\\ 1 \amp 0 \amp 7\\ 0 \amp 1 \amp -5 \end{array} \right]\\ \amp\amp\amp \stackrel{\large Swap\ Rows}{\Huge \leadsto}\amp\amp \left[ \begin{array}{rr|r} 1 \amp 0 \amp 7\\ 0 \amp 1 \amp -5\\ 2 \amp -1 \amp 19 \end{array} \right]\\ \amp\amp\amp \stackrel{\large R_3-2R_1+R_2}{\Huge \leadsto}\amp\amp \left[ \begin{array}{rr|r} 1 \amp 0 \amp 7\\ 0 \amp 1 \amp -5\\ 0 \amp 0 \amp 0 \end{array} \right] \end{align*}
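The same computation can be carried out in Sage (a minimal sketch; rref performs the row reduction for us):

A = matrix(QQ, [[2, -1], [3, 0], [0, 5]])
b = vector(QQ, [19, 21, -25])
print(A.augment(b).rref())    # last column reads 7, -5, 0
print(A.solve_right(b))       # (7, -5)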

What this hopefully highlights is that if two sets satisfy Definition 2.1.3 and are, in a sense we will make precise later, of the same dimension, then whatever we say about one we can say about the other. And any visualization of one is, in a way, also a visualization of the other.

Figure 2.1.4. Subspaces and Planes and Polynomials

Every point on that plane can be written as a combination of \(\vec{v}=[2,3,0]\) and \(\vec{u}=[-1,0,5]\text{,}\) and so its coordinates could equally well be the coefficients of a polynomial of the form \(f(x)=a\, v(x)+b\, u(x)\text{.}\) In particular we see that the point \((19,21,-25)\) is on the plane. But not all points are on the plane; for example, the point \((15,9,-5)\) is not. This means there do not exist \(a\) and \(b\) such that \(a\, v(x)+b\, u(x)=15x^2+9x-5\) or \(a\vec{v}+b\vec{u}=[15,9,-5]\text{.}\) In fact, if we try to solve for \(a\) and \(b\) in the vector equation it doesn't work:

\begin{align*} \left[ \begin{array}{rr|r} 2 \amp -1 \amp 15\\ 3 \amp 0 \amp 9\\ 0 \amp 5 \amp -5 \end{array} \right]\amp\amp\amp \stackrel{\large \frac{1}{3}R_2,\, \frac{1}{5}R_3}{\Huge \leadsto}\amp\amp \left[ \begin{array}{rr|r} 2 \amp -1 \amp 15\\ 1 \amp 0 \amp 3\\ 0 \amp 1 \amp -1 \end{array} \right]\\ \amp\amp\amp \stackrel{\large Swap\ Rows}{\Huge \leadsto}\amp\amp \left[ \begin{array}{rr|r} 1 \amp 0 \amp 3\\ 0 \amp 1 \amp -1\\ 2 \amp -1 \amp 15 \end{array} \right]\\ \amp\amp\amp \stackrel{\large R_3-2R_1+R_2}{\Huge \leadsto}\amp\amp \left[ \begin{array}{rr|r} 1 \amp 0 \amp 3\\ 0 \amp 1 \amp -1\\ 0 \amp 0 \amp 8 \end{array} \right] \end{align*}

which implies \(0=8\text{,}\) which is nonsense. We say then that all the linear combinations of \(\vec{v}\) and \(\vec{u}\) form a subspace; that is, they form a vector space of their own, but they do not give you every possible point.
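Sage reaches the same conclusion (again just a sketch): after row reduction the augmented column contains a pivot, and asking for a solution fails.

A = matrix(QQ, [[2, -1], [3, 0], [0, 5]])
c = vector(QQ, [15, 9, -5])
print(A.augment(c).rref())    # bottom row is [0 0 | 1], the same kind of contradiction
try:
    A.solve_right(c)
except ValueError:
    print("no solution: the system is inconsistent")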

Section Vocabulary.

Vector Space, Subspace