
Appendix C A Not So Formal Glossary of Essential Terms

augmented matrix

An augmented matrix is a matrix to which a vector or another matrix has been appended. E.g.

\begin{equation*} \left[ \begin{array}{rr|r} 4 \amp 3 \amp 5\\ 2 \amp 5 \amp 6 \end{array} \right] \end{equation*}
basis

A set of vectors \(\mathcal{B}=\left\{\vec{v}_1,\, \vec{v}_2,\, \ldots\, \vec{v}_n\right\}\) is a basis for a vector space \(V\) if every vector in \(V\) can be written uniquely as a linear combination of the vectors in \(\mathcal{B}\text{.}\) The basis is called orthonormal if each vector has unit length and each pair of distinct vectors is orthogonal.
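
For example, \(\left\{\left\lt 1,0\right\gt,\, \left\lt 1,1\right\gt\right\}\) is a basis for \(\mathbb{R}^2\text{,}\) since every vector can be written uniquely as

\begin{equation*} \left\lt a,b\right\gt=(a-b)\left\lt 1,0\right\gt+b\left\lt 1,1\right\gt. \end{equation*}

This basis is not orthonormal; the standard basis \(\left\{\left\lt 1,0\right\gt,\, \left\lt 0,1\right\gt\right\}\) is.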

characteristic polynomial

The characteristic polynomial of a matrix \(A\) is the determinant of the matrix \(A-xI\text{,}\) where \(x\) is a variable and \(I\) is the identity matrix. The roots of the characteristic polynomial are the eigenvalues of the matrix.
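
For example, if

\begin{equation*} A=\left[ \begin{array}{rr} 2 \amp 1\\ 1 \amp 2 \end{array} \right],\ \mbox{then}\ det(A-xI)=(2-x)^2-1=(x-1)(x-3), \end{equation*}

so the eigenvalues are \(1\) and \(3\text{.}\)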

change of basis matrix

A change of basis matrix (also sometimes called a change of coordinate matrix) converts coordinates for a vector written in terms of one basis into coordinates in a different basis. In particular, if \(\mathcal{E}\) is the standard basis, and \(\mathcal{B}=\left\{\vec{v}_1,\, \vec{v}_2,\, \ldots\, \vec{v}_n\right\}\) is any other basis, then the matrix

\begin{equation*} \mathcal{P}_\mathcal{B}= \left[ \vec{v}_1\, \vec{v}_2\, \cdots\ \vec{v}_n \right] \end{equation*}

converts vectors from \(\mathcal{B}\)-coordinates to \(\mathcal{E}\)-coordinates.
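
For example, if \(\mathcal{B}=\left\{\left\lt 1,1\right\gt,\, \left\lt 1,-1\right\gt\right\}\text{,}\) then the vector with \(\mathcal{B}\)-coordinates \(\left\lt 2,3\right\gt\) has standard coordinates

\begin{equation*} \mathcal{P}_\mathcal{B} \left[ \begin{array}{r} 2\\ 3 \end{array} \right] = \left[ \begin{array}{rr} 1 \amp 1\\ 1 \amp -1 \end{array} \right] \left[ \begin{array}{r} 2\\ 3 \end{array} \right] = \left[ \begin{array}{r} 5\\ -1 \end{array} \right]. \end{equation*}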

classical adjoint

Given a matrix \(A\text{,}\) the classical adjoint of \(A\text{,}\) \(adj(A)\text{,}\) is the matrix whose \(i,j^{th}\) entry is \((-1)^{i+j}\) times the determinant of the \(j,i^{th}\) cofactor matrix of \(A\text{,}\) that is \((-1)^{i+j}\, det(A_{j,i})\text{.}\)
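
For a \(2\times 2\) matrix this gives

\begin{equation*} adj\left[ \begin{array}{rr} a \amp b\\ c \amp d \end{array} \right] = \left[ \begin{array}{rr} d \amp -b\\ -c \amp a \end{array} \right], \end{equation*}

and in general \(A\, adj(A)=det(A)\, I\text{.}\)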

cofactor matrix

Given a matrix \(A\text{,}\) the \(i,j^{th}\) cofactor matrix, \(A_{i,j}\text{,}\) is the matrix obtained by deleting row \(i\) and column \(j\text{.}\) For example:

\begin{equation*} A=\left[ \begin{array}{rrr} 2 \amp 4 \amp 0\\ 0 \amp -1 \amp 7\\ -1\amp 0 \amp 5 \end{array} \right],\ A_{1,2}=\left[ \begin{array}{rr} 0 \amp 7\\ -1\amp 5 \end{array} \right]. \end{equation*}
column space

The column space of a matrix is the span of the columns of the matrix when viewed as vectors. If the matrix represents a linear transformation, then the column space is the image of that transformation.

consistent system

A consistent system is a system of equations, a vector equation, or matrix equation for which there is at least one solution. If no solution exists we say the system is inconsistent.

determinant

The determinant of a matrix \(A\text{,}\) \(det(A)\text{,}\) is a value that represents how \(A\) scales volumes in a vector space. A determinant greater than 1 indicates that the transformation stretches, a positive determinant less than 1 that it compresses, and a negative determinant that it includes a reflection. A determinant of 0 tells us that the transformation is non-invertible (singular).
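
For a \(2\times 2\) matrix the determinant is

\begin{equation*} det\left[ \begin{array}{rr} a \amp b\\ c \amp d \end{array} \right] = ad-bc,\ \mbox{e.g.}\ det\left[ \begin{array}{rr} 3 \amp 1\\ 1 \amp 2 \end{array} \right] = 3\cdot 2-1\cdot 1=5. \end{equation*}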

diagonal matrix

A matrix \(D\) is a diagonal matrix if all the entries off the main diagonal are zero, e.g.

\begin{equation*} D= \left[ \begin{array}{rrr} 7 \amp 0 \amp 0\\ 0 \amp -1 \amp 0\\ 0 \amp 0 \amp 5 \end{array} \right]. \end{equation*}
diagonalizable

A matrix \(A\) is called diagonalizable if there exists an invertible matrix \(P\) and a diagonal matrix \(D\) such that

\begin{equation*} A=PDP^{-1}. \end{equation*}
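
For example, with \(A\) as in the characteristic polynomial entry above,

\begin{equation*} \left[ \begin{array}{rr} 2 \amp 1\\ 1 \amp 2 \end{array} \right] = \left[ \begin{array}{rr} 1 \amp 1\\ 1 \amp -1 \end{array} \right] \left[ \begin{array}{rr} 3 \amp 0\\ 0 \amp 1 \end{array} \right] \left[ \begin{array}{rr} 1 \amp 1\\ 1 \amp -1 \end{array} \right]^{-1}, \end{equation*}

where the columns of \(P\) are eigenvectors of \(A\) and the diagonal entries of \(D\) are the corresponding eigenvalues.
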
dot product (inner product)

The dot product of two vectors is geometrically the product of their magnitudes and the cosine of the angle between them. Algebraically it is the sum of the products of their corresponding components.

\begin{equation*} \vec{v}\cdot\vec{w}=\sum_i v_i\, w_i=v_1\, w_1+v_2\, w_2+\cdots+v_k\, w_k \end{equation*}
Figure C.0.1. Geometric Interpretation of the Dot Product
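
For example,

\begin{equation*} \left\lt 1,2,3\right\gt\cdot\left\lt 4,-1,2\right\gt=1\cdot 4+2\cdot (-1)+3\cdot 2=8. \end{equation*}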
eigenvalue and eigenvector

Given a matrix \(A\text{,}\) an eigenvector, \(\vec{x}\text{,}\) is a nonzero vector such that

\begin{equation*} A\vec{x}=\lambda\, \vec{x}, \end{equation*}

for some constant \(\lambda\) called the eigenvalue.
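
For example,

\begin{equation*} \left[ \begin{array}{rr} 2 \amp 1\\ 1 \amp 2 \end{array} \right] \left[ \begin{array}{r} 1\\ 1 \end{array} \right] = \left[ \begin{array}{r} 3\\ 3 \end{array} \right] = 3\left[ \begin{array}{r} 1\\ 1 \end{array} \right], \end{equation*}

so \(\left\lt 1,1\right\gt\) is an eigenvector of this matrix with eigenvalue \(\lambda=3\text{.}\)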

field

A field is a ring, \(F\text{,}\) in which multiplication is commutative, with the added property that

\begin{equation*} \forall a\in F,\ a\neq 0,\ \exists a^{-1}\in F: a*a^{-1}=a^{-1}*a=1, \end{equation*}

i.e. all nonzero elements have multiplicative inverses.

identity matrix

A matrix \(I\) is the identity matrix if \(IA=AI=A\) for all matrices \(A\) of the appropriate size. Typically we use:

\begin{equation*} I_1=1,\ I_2 = \left[ \begin{array}{rr} 1 \amp 0 \\ 0 \amp 1 \end{array} \right],\ I_3 = \left[ \begin{array}{rrr} 1 \amp 0 \amp 0\\ 0 \amp 1 \amp 0\\ 0 \amp 0 \amp 1 \end{array} \right], \ldots \end{equation*}
inconsistent system

See definition of consistent system.

inverse matrix

The inverse of a matrix \(A\text{,}\) denoted \(A^{-1}\text{,}\) is a matrix such that

\begin{equation*} AA^{-1}=I \end{equation*}

which is called a right inverse,

\begin{equation*} A^{-1}A=I \end{equation*}

which is called a left inverse, or

\begin{equation*} AA^{-1}=A^{-1}A=I \end{equation*}

which is simply called the inverse.
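
A square matrix has an inverse exactly when \(det(A)\neq 0\text{;}\) for a \(2\times 2\) matrix

\begin{equation*} \left[ \begin{array}{rr} a \amp b\\ c \amp d \end{array} \right]^{-1} = \frac{1}{ad-bc} \left[ \begin{array}{rr} d \amp -b\\ -c \amp a \end{array} \right]. \end{equation*}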

kernel

The kernel of a linear transformation \(T:V\rightarrow W\) is the set of all \(\vec{v}\in V\) such that \(T(\vec{v})=\vec{0}\) in \(W\text{.}\) We abbreviate the kernel of \(T\) by \(ker(T)\text{.}\) The kernel of a transformation is also known as the null space.

linearly independent vectors

A set of vectors \(\left\{\vec{v}_1,\, \vec{v}_2,\, \ldots\, \vec{v}_n\right\}\) is linearly independent if the only solution to \(a_1\vec{v}_1+a_2\vec{v}_2+\cdots+a_n\vec{v}_n=\vec{0}\) is \(a_1=a_2=\cdots=a_n=0\text{.}\) Vectors that are not linearly independent are called linearly dependent.
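
For example, \(\left\lt 1,0\right\gt\) and \(\left\lt 1,1\right\gt\) are linearly independent, while \(\left\lt 1,2\right\gt\) and \(\left\lt 2,4\right\gt\) are linearly dependent, since \(2\left\lt 1,2\right\gt-\left\lt 2,4\right\gt=\vec{0}\text{.}\)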

linear transformation

A linear transformation is a map, \(T\text{,}\) from a vector space \(V\text{,}\) called the domain, to a vector space \(W\text{,}\) called the codomain, with the following properties:

  1. \(T(\vec{u}+\vec{v})=T(\vec{u})+T(\vec{v})\) for all \(\vec{u},\vec{v}\in V\text{,}\) and
  2. \(T(c\vec{v})=cT(\vec{v})\) for all scalars \(c\) and vectors \(\vec{v}\in V\text{.}\)
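
For example, \(T(\left\lt x,y\right\gt)=\left\lt 2x,\, x+y\right\gt\) is a linear transformation from \(\mathbb{R}^2\) to \(\mathbb{R}^2\text{,}\) while \(S(\left\lt x,y\right\gt)=\left\lt x+1,\, y\right\gt\) is not, since \(S(\vec{0})\neq\vec{0}\text{.}\)
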
matrix

A matrix is a two-dimensional array of numbers. If the dimensions are given as \(m\times n\text{,}\) then there are \(m\) rows and \(n\) columns in the array. E.g.

\begin{equation*} \left[ \begin{array}{rrr} 4 \amp 3 \amp 0\\ 2 \amp 5 \amp 2 \end{array} \right] \end{equation*}

is a \(2\times 3\) matrix.

matrix equation

Given a matrix \(A\) and a vector \(\vec{b}\text{,}\) a matrix equation is an equation of the form

\begin{equation*} A\vec{x}=\vec{b} \end{equation*}

where \(\vec{x}\) is unknown.

matrix of a transformation

If \(T:\mathbb{R}^n\rightarrow\mathbb{R}^m\) is a linear transformation, then the matrix of the transformation is the unique matrix \(A\) of dimension \(m\times n\) such that

\begin{equation*} T(\vec{x})=A\vec{x}\ \mbox{for all}\ \vec{x}\in\mathbb{R}^n. \end{equation*}

If \(\vec{e}_i\) is the \(i^{th}\) column of the identity matrix over \(\mathbb{R}^n\text{,}\) then the \(i^{th}\) column of \(A\) equals \(T(\vec{e}_i)\text{.}\)
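
For example, if \(T(\left\lt x,y\right\gt)=\left\lt y,x\right\gt\text{,}\) then \(T(\vec{e}_1)=\left\lt 0,1\right\gt\) and \(T(\vec{e}_2)=\left\lt 1,0\right\gt\text{,}\) so

\begin{equation*} A= \left[ \begin{array}{rr} 0 \amp 1\\ 1 \amp 0 \end{array} \right]. \end{equation*}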

null space

The null space of a matrix \(A\) is the set of all vectors \(\vec{x}\) such that

\begin{equation*} A\vec{x}=\vec{0}. \end{equation*}

If the matrix is viewed as a transformation then this is the same as the kernel of the transformation.
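
For example, since

\begin{equation*} \left[ \begin{array}{rr} 1 \amp 2\\ 2 \amp 4 \end{array} \right] \left[ \begin{array}{r} -2\\ 1 \end{array} \right] = \left[ \begin{array}{r} 0\\ 0 \end{array} \right], \end{equation*}

the null space of this matrix consists of all multiples of \(\left\lt -2,1\right\gt\text{.}\)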

one-to-one (1-1)

A function or transformation \(T:V\rightarrow W\) is one-to-one (also written 1-1) if \(T(x)=T(y)\) implies \(x=y\) for all \(x,y\in V\text{.}\)

onto

A function or transformation \(T:V\rightarrow W\) is onto if for every \(y\in W\) there exists \(x\in V\) such that \(y=T(x)\text{.}\)

orthogonal complement

The orthogonal complement of a set of vectors \(V\) is the set of all vectors orthogonal to every vector in \(V\text{;}\) it is denoted by \(V^\perp\text{.}\)

orthogonal projection

An orthogonal projection of a vector onto a subspace is the closest vector to it in the subspace, i.e. the vector that minimizes the distance from the original vector to the subspace.

Figure C.0.2. Vector Projection onto a Plane in \(\mathbb{R}^3\)
orthogonal vectors

Two vectors are orthogonal if the angle between them is a right angle.
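
Equivalently, two vectors are orthogonal when their dot product is zero, e.g. \(\left\lt 1,2\right\gt\cdot\left\lt -2,1\right\gt=1\cdot(-2)+2\cdot 1=0\text{.}\)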

pivot position

A pivot position in a matrix \(A\) is a position in \(A\) which corresponds to a leading 1 in the reduced echelon form of \(A\text{.}\) A column in which a pivot position occurs is called a pivot column.
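
For example,

\begin{equation*} \left[ \begin{array}{rr} 1 \amp 2\\ 2 \amp 4 \end{array} \right] \ \mbox{has reduced echelon form}\ \left[ \begin{array}{rr} 1 \amp 2\\ 0 \amp 0 \end{array} \right], \end{equation*}

so the only pivot position is the \(1\) in the first row, and the first column is the only pivot column.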

ring

A ring is a collection of objects, \(R\text{,}\) together with binary maps \(+:R\times R\rightarrow R\) and \(*:R\times R\rightarrow R\text{,}\) with the following properties:

  1. \(\forall a,b\in R: a+b\in R\ \mbox{and}\ a*b\in R\)
  2. \(\forall a,b,c\in R: (a+b)+c=a+(b+c)\ \mbox{and}\ (a*b)*c=a*(b*c)\)
  3. \(\forall a,b\in R: a+b=b+a\)
  4. \(\exists 0,\forall a\in R: a+0=0+a=a\)
  5. \(\forall a\in R, \exists -a\in R: a+(-a)=(-a)+a=0\)
  6. \(\forall a,b,c\in R: a*(b+c)=a*b+a*c\ \mbox{and}\ (b+c)*a=b*a+c*a\)

If in addition

\begin{equation*} \exists 1,\forall a\in R: 1*a=a*1=a\text{,} \end{equation*}

then it is a ring with unity (or unital ring).

row space

The row space of a matrix is the span of the rows of the matrix when viewed as vectors.

similar matrices

Two matrices \(A\) and \(B\) are called similar matrices if there exists an invertible matrix \(P\) such that

\begin{equation*} B=PAP^{-1}. \end{equation*}
span of a set of vectors

The span of a set of vectors \(\left\{\vec{v}_1,\, \vec{v}_2,\, \ldots\, \vec{v}_n\right\}\) over a field \(\mathbb{F}\) is the set of all linear combinations of the form

\begin{equation*} a_1\vec{v}_1+a_2\vec{v}_2+a_3\vec{v}_3+\cdots+a_n\vec{v}_n \end{equation*}

where \(a_i\in\mathbb{F}\) for all \(i\text{.}\)
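
For example, the span of \(\left\{\left\lt 1,0,0\right\gt,\, \left\lt 0,1,0\right\gt\right\}\) over \(\mathbb{R}\) is the set of all vectors of the form \(\left\lt a,b,0\right\gt\text{,}\) i.e. the \(xy\)-plane in \(\mathbb{R}^3\text{.}\)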

standard basis

The standard basis for \(\mathbb{R}^n\) consists of the \(n\) columns of the \(n\times n\) identity matrix.

system of equations

A system of equations is a collection of equations such as:

\begin{equation*} \begin{array}{ccccccc} a_1 x_1 \amp + \amp a_2 x_2 \amp + \amp a_3 x_3 \amp = \amp d_1 \\ b_1 x_1 \amp + \amp b_2 x_2 \amp + \amp b_3 x_3 \amp = \amp d_2 \\ c_1 x_1 \amp + \amp c_2 x_2 \amp + \amp c_3 x_3 \amp = \amp d_3 \\ \end{array} \end{equation*}

where the values \(x_1,\, x_2,\) and \(x_3\) are unknown.

transpose of a matrix

The transpose of a matrix \(A\) is the matrix \(A^T\) in which each entry \(a_{ij}\) has been moved to position \(j,i\text{;}\) in effect, the entries have been reflected across the main diagonal, e.g.

\begin{equation*} \left[ \begin{array}{cc} 1 \amp 3 \\ 0 \amp 2 \end{array} \right]^T = \left[ \begin{array}{cc} 1 \amp 0 \\ 3 \amp 2 \end{array} \right]. \end{equation*}
vector

A vector is a one-dimensional array of numbers. E.g.

\begin{equation*} \left[ \begin{array}{r} 5\\ 6 \end{array} \right], \end{equation*}

or written horizontally as \(\left\lt 5,6\right\gt \text{.}\)

vector equation

Given a set of vectors \(\{\vec{v}_1,\, \vec{v}_2,\,\vec{v}_3,\,\ldots\, \vec{v}_n,\, \vec{x}\}\) and a corresponding set of unknown scalar coefficients \(\{a_1,\, a_2,\, a_3,\, \ldots\, a_n\}\text{,}\) a vector equation is an equation of the form

\begin{equation*} a_1\vec{v}_1+a_2\vec{v}_2+a_3\vec{v}_3+\cdots+a_n\vec{v}_n=\vec{x}. \end{equation*}
zero divisor

In a ring \(R\text{,}\) a zero divisor is a non-zero element \(a\) for which there exists another non-zero element \(b\) such that \(ab=0\text{.}\) For example:

\begin{equation*} \left[ \begin{array}{rr} 4 \amp 3\\ 8 \amp 6 \end{array} \right] \left[ \begin{array}{rr} 3 \amp -9\\ -4 \amp 12 \end{array} \right] = \left[ \begin{array}{rr} 0 \amp 0\\ 0 \amp 0 \end{array} \right] \end{equation*}