Overview
This chapter introduces two of the main objects in linear algebra: vectors and matrices. You will learn what they look like, how we write them, and how to perform the basic operations on them. These operations are the building blocks for more advanced ideas such as systems of equations, transformations, determinants, and eigenvalues (all covered in later chapters).
We work over the real numbers in this chapter; complex numbers are introduced elsewhere.
Vectors
A vector is an ordered list of numbers. Each number in the list is called a component (or coordinate) of the vector.
For example:
- In two dimensions: $v = \begin{bmatrix} 3 \\ -1 \end{bmatrix}$
- In three dimensions: $w = \begin{bmatrix} 2 \\ 0 \\ 5 \end{bmatrix}$
- In $n$ dimensions: $x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$
We usually think of such vectors as column vectors, meaning they are written vertically. A row vector would be written horizontally, such as
$$
[3 \;\; -1] \quad\text{or}\quad \begin{bmatrix} 3 & -1 \end{bmatrix}.
$$
Unless stated otherwise, we will use column vectors.
Vector notation and dimension
A typical notation for vectors uses boldface or arrows, for example $\mathbf{v}$ or $\vec{v}$. In this course we mostly use plain letters like $v$ or $x$ and display the vectors in column brackets.
The dimension of a vector is the number of its components:
- $ \begin{bmatrix} 1 \\ 4 \end{bmatrix}$ is a vector in $\mathbb{R}^2$ (2-dimensional).
- $ \begin{bmatrix} -1 \\ 0 \\ 2 \\ 7 \end{bmatrix}$ is in $\mathbb{R}^4$ (4-dimensional).
Here $\mathbb{R}^n$ means “the set of all $n$‑component real column vectors”.
Equality of vectors
Two vectors are equal if
- they have the same dimension, and
- all corresponding components are equal.
Example:
$$
\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}
=
\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix},
\qquad
\begin{bmatrix} 1 \\ 2 \end{bmatrix} \neq
\begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}.
$$
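If you want to check this rule numerically, the short sketch below uses the NumPy library (one convenient choice, not something the chapter requires) to compare vectors component by component:
```python
import numpy as np

u = np.array([1, 2, 3])
v = np.array([1, 2, 3])
w = np.array([1, 2])

# Equal: same dimension and every corresponding component matches.
print(np.array_equal(u, v))  # True

# Different dimensions, so the vectors cannot be equal.
print(np.array_equal(u, w))  # False
```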
Basic Operations with Vectors
Vector addition
You can add two vectors of the same dimension by adding their components one by one.
If
$$
u = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix},
\quad
v = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix},
$$
then
$$
u + v = \begin{bmatrix}
u_1 + v_1 \\
u_2 + v_2 \\
\vdots \\
u_n + v_n
\end{bmatrix}.
$$
Example:
$$
\begin{bmatrix} 1 \\ 3 \end{bmatrix}
+
\begin{bmatrix} 4 \\ -2 \end{bmatrix}
=
\begin{bmatrix} 1+4 \\ 3+(-2) \end{bmatrix}
=
\begin{bmatrix} 5 \\ 1 \end{bmatrix}.
$$
Vector addition is defined only when the vectors have the same dimension.
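As an optional computational check, here is the example above in NumPy; the `+` operator on arrays performs exactly this componentwise addition:
```python
import numpy as np

u = np.array([1, 3])
v = np.array([4, -2])

# Componentwise addition, matching the definition above.
print(u + v)  # [5 1]
```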
Scalar multiplication
A scalar is just an ordinary real number (like $2$, $-3.5$, $\pi$).
To multiply a vector by a scalar $c$, multiply each component of the vector by $c$:
$$
c \cdot \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}
=
\begin{bmatrix} c v_1 \\ c v_2 \\ \vdots \\ c v_n \end{bmatrix}.
$$
Example:
$$
3 \cdot \begin{bmatrix} -2 \\ 5 \end{bmatrix}
=
\begin{bmatrix} -6 \\ 15 \end{bmatrix}.
$$
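The same example can be reproduced in a NumPy sketch, where multiplying an array by a number scales every component:
```python
import numpy as np

v = np.array([-2, 5])

# Multiply every component by the scalar 3.
print(3 * v)  # [-6 15]
```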
Linear combinations (preview)
A linear combination of vectors $v_1, v_2, \dots, v_k$ is any expression of the form
$$
c_1 v_1 + c_2 v_2 + \dots + c_k v_k,
$$
where $c_1, \dots, c_k$ are scalars. Linear combinations are built out of just vector addition and scalar multiplication. They will appear repeatedly throughout linear algebra.
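For instance, with the vectors $u = \begin{bmatrix} 1 \\ 3 \end{bmatrix}$ and $v = \begin{bmatrix} 4 \\ -2 \end{bmatrix}$ from the addition example, the linear combination $2u + (-1)v$ can be evaluated componentwise. A minimal NumPy sketch of the same computation:
```python
import numpy as np

u = np.array([1, 3])
v = np.array([4, -2])

# The linear combination 2u + (-1)v, built only from
# scalar multiplication and vector addition.
print(2 * u + (-1) * v)  # [-2  8]
```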
Matrices
A matrix is a rectangular array of numbers arranged in rows and columns.
A matrix with $m$ rows and $n$ columns is called an $m \times n$ matrix (“$m$ by $n$”).
Example:
$$
A =
\begin{bmatrix}
1 & 0 & -2 \\
3 & 4 & 5
\end{bmatrix}
$$
has $2$ rows and $3$ columns, so $A$ is a $2 \times 3$ matrix.
Entries and indexing
The numbers in a matrix are called its entries or elements.
We usually write $a_{ij}$ for the entry of a matrix $A$ in row $i$ and column $j$:
- $i$ is the row index,
- $j$ is the column index.
If
$$
A =
\begin{bmatrix}
1 & 0 & -2 \\
3 & 4 & 5
\end{bmatrix},
$$
then:
- $a_{11} = 1$ (row 1, column 1),
- $a_{12} = 0$ (row 1, column 2),
- $a_{23} = 5$ (row 2, column 3).
More generally, an $m \times n$ matrix can be written symbolically as
$$
A =
\begin{bmatrix}
a_{11} & a_{12} & \dots & a_{1n} \\
a_{21} & a_{22} & \dots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \dots & a_{mn}
\end{bmatrix}.
$$
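One practical note for the computationally inclined: textbooks index entries starting at $1$, while most programming languages start at $0$. A small NumPy sketch of the matrix above makes the shift explicit:
```python
import numpy as np

A = np.array([[1, 0, -2],
              [3, 4,  5]])

# a_{11}, a_{12}, a_{23} in textbook (1-based) notation correspond
# to indices [0, 0], [0, 1], [1, 2] in 0-based code.
print(A[0, 0])  # 1
print(A[0, 1])  # 0
print(A[1, 2])  # 5
print(A.shape)  # (2, 3), i.e. a 2 x 3 matrix
```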
Special types of matrices
Some shapes of matrices appear so often that they have special names.
- Row matrix (row vector): A matrix with one row and $n$ columns, e.g.
$$
\begin{bmatrix} 2 & -1 & 0 \end{bmatrix}
$$
is a $1 \times 3$ matrix.
- Column matrix (column vector): A matrix with $m$ rows and one column, e.g.
$$
\begin{bmatrix} 2 \\ -1 \\ 0 \end{bmatrix}
$$
is a $3 \times 1$ matrix. Column vectors are just special cases of matrices.
- Square matrix: An $n \times n$ matrix (same number of rows and columns), such as
$$
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}.
$$
Many advanced ideas (determinants, eigenvalues) apply specifically to square matrices.
- Zero matrix: A matrix where every entry is zero. For example,
$$
0_{2\times 3} =
\begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}.
$$
- Identity matrix: A special square matrix, denoted $I_n$, with ones on the main diagonal and zeros elsewhere. For $n = 3$,
$$
I_3 = \begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}.
$$
The identity matrix plays a role similar to the number $1$ for ordinary multiplication. Its detailed properties will be explored when we discuss matrix operations and inverses.
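Zero and identity matrices are common enough that numerical libraries construct them directly; in NumPy (again, just one convenient tool) the builders look like this:
```python
import numpy as np

# 2 x 3 zero matrix and 3 x 3 identity matrix.
Z = np.zeros((2, 3))
I3 = np.eye(3)

print(Z)   # all entries 0
print(I3)  # ones on the diagonal, zeros elsewhere
```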
The transpose of a matrix
The transpose of a matrix $A$, written $A^T$, is formed by turning rows into columns (or columns into rows).
If
$$
A =
\begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6
\end{bmatrix},
$$
then its transpose is
$$
A^T =
\begin{bmatrix}
1 & 4 \\
2 & 5 \\
3 & 6
\end{bmatrix}.
$$
In terms of entries, if $B = A^T$, then $b_{ij} = a_{ji}$. So the $(i,j)$–entry of $B$ is the $(j,i)$–entry of $A$.
For a column vector $v$, its transpose $v^T$ is a row vector:
$$
v =
\begin{bmatrix}
v_1 \\ v_2 \\ \vdots \\ v_n
\end{bmatrix}
\quad\Rightarrow\quad
v^T = \begin{bmatrix} v_1 & v_2 & \dots & v_n \end{bmatrix}.
$$
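Computationally, transposition is a single operation; in NumPy the attribute `.T` produces $A^T$, swapping the roles of rows and columns:
```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])

# Rows of A become the columns of A^T.
print(A.T)
# [[1 4]
#  [2 5]
#  [3 6]]
print(A.shape, A.T.shape)  # (2, 3) (3, 2)
```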
Basic Operations with Matrices
We start with operations that do not yet involve matrix multiplication (which is treated more fully in the “Matrix operations” chapter).
Matrix equality
Two matrices $A$ and $B$ are equal if:
- they have the same size (same number of rows and columns), and
- all corresponding entries match: $a_{ij} = b_{ij}$ for every $i$ and $j$.
Example:
$$
\begin{bmatrix}
1 & 2 \\
0 & -1
\end{bmatrix}
=
\begin{bmatrix}
1 & 2 \\
0 & -1
\end{bmatrix},
\quad
\begin{bmatrix}
1 & 2
\end{bmatrix}
\neq
\begin{bmatrix}
1 \\ 2
\end{bmatrix}.
$$
Matrix addition
You can add two matrices only if they have the same size.
If $A$ and $B$ are both $m \times n$ matrices, then their sum $C = A + B$ is also $m \times n$, with entries
$$
c_{ij} = a_{ij} + b_{ij}.
$$
Example:
$$
A =
\begin{bmatrix}
1 & 3 \\
2 & -1
\end{bmatrix},
\quad
B =
\begin{bmatrix}
4 & 0 \\
-2 & 5
\end{bmatrix}.
$$
Then
$$
A + B =
\begin{bmatrix}
1+4 & 3+0 \\
2+(-2) & -1+5
\end{bmatrix}
=
\begin{bmatrix}
5 & 3 \\
0 & 4
\end{bmatrix}.
$$
Matrix addition is done entry by entry, just like vector addition.
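The same entrywise rule is what the `+` operator does for NumPy arrays, shown here only as an optional check of the example above:
```python
import numpy as np

A = np.array([[1, 3],
              [2, -1]])
B = np.array([[4, 0],
              [-2, 5]])

# Entrywise sum, matching c_ij = a_ij + b_ij.
print(A + B)
# [[5 3]
#  [0 4]]
```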
Scalar multiplication of matrices
To multiply a matrix by a scalar $c$, multiply every entry of the matrix by $c$.
If $A$ is $m \times n$ with entries $a_{ij}$, then $cA$ is $m \times n$ with entries
$$
(cA)_{ij} = c \cdot a_{ij}.
$$
Example:
$$
2 \cdot
\begin{bmatrix}
1 & -3 \\
0 & 4
\end{bmatrix}
=
\begin{bmatrix}
2 & -6 \\
0 & 8
\end{bmatrix}.
$$
Combining addition and scalar multiplication
These two operations work together naturally. A common expression is a linear combination of matrices such as
$$
2A - 3B = 2A + (-3)B.
$$
Example:
$$
A =
\begin{bmatrix}
1 & 0 \\
2 & 1
\end{bmatrix},
\quad
B =
\begin{bmatrix}
3 & -1 \\
0 & 4
\end{bmatrix}.
$$
Compute $2A - 3B$:
$$
2A =
\begin{bmatrix}
2 & 0 \\
4 & 2
\end{bmatrix},
\quad
3B =
\begin{bmatrix}
9 & -3 \\
0 & 12
\end{bmatrix},
$$
so
$$
2A - 3B =
\begin{bmatrix}
2-9 & 0-(-3) \\
4-0 & 2-12
\end{bmatrix}
=
\begin{bmatrix}
-7 & 3 \\
4 & -10
\end{bmatrix}.
$$
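Once addition and scalar multiplication are available, the whole computation collapses into one expression. A NumPy sketch of the example above:
```python
import numpy as np

A = np.array([[1, 0],
              [2, 1]])
B = np.array([[3, -1],
              [0, 4]])

# Linear combination of matrices: 2A - 3B = 2A + (-3)B.
print(2 * A - 3 * B)
# [[ -7   3]
#  [  4 -10]]
```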
Matrices and Vectors Together
Vectors can be viewed as special matrices:
- An $n$–component column vector is an $n \times 1$ matrix.
- An $n$–component row vector is a $1 \times n$ matrix.
Treating vectors as matrices allows us to describe many operations more uniformly.
Stacking vectors as columns
Often, we build a matrix by placing several column vectors side by side.
If
$$
v_1 =
\begin{bmatrix}
1 \\ 0 \\ 2
\end{bmatrix},
\quad
v_2 =
\begin{bmatrix}
-1 \\ 4 \\ 3
\end{bmatrix},
\quad
v_3 =
\begin{bmatrix}
0 \\ 5 \\ -2
\end{bmatrix},
$$
we can form a $3 \times 3$ matrix
$$
A = [\, v_1 \; v_2 \; v_3 \,] =
\begin{bmatrix}
1 & -1 & 0 \\
0 & 4 & 5 \\
2 & 3 & -2
\end{bmatrix}.
$$
Here $v_1$, $v_2$, $v_3$ are called the columns of $A$.
This viewpoint is very important:
- columns of a matrix often represent vectors we are interested in,
- combinations of these columns represent linear combinations of vectors.
A similar construction can be done with row vectors, placing them on top of each other as rows of a matrix.
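Assembling a matrix from vectors is also a one-step operation in practice. One way to do it in NumPy is `column_stack` for columns and `vstack` for rows, sketched below:
```python
import numpy as np

v1 = np.array([1, 0, 2])
v2 = np.array([-1, 4, 3])
v3 = np.array([0, 5, -2])

# Place the vectors side by side as the columns of A.
A = np.column_stack([v1, v2, v3])
print(A)
# [[ 1 -1  0]
#  [ 0  4  5]
#  [ 2  3 -2]]

# The analogous construction with rows:
R = np.vstack([v1, v2, v3])
print(R.shape)  # (3, 3), with v1, v2, v3 as the rows
```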
Dimension and Size Constraints
When working with vectors and matrices, it is crucial to respect their sizes.
- Vectors must have the same dimension to be added.
- Matrices must have the same shape to be added or subtracted.
- A scalar can multiply any vector or matrix; the size does not change.
- A column vector in $\mathbb{R}^n$ is an $n \times 1$ matrix; a row vector is $1 \times n$.
Later, when matrix multiplication and related operations are introduced, additional size conditions will appear (for example, matching “inner dimensions” for multiplication). Being careful about dimensions is one of the key habits in linear algebra.
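Size mismatches are also the most common source of errors in computations. NumPy, for example, refuses to add arrays whose shapes cannot be matched (its broadcasting rules do allow some shape combinations that plain linear algebra does not, so this is only a rough analogue of the rules above):
```python
import numpy as np

u = np.array([1, 2])
w = np.array([1, 2, 3])

try:
    u + w  # different dimensions: addition is not defined
except ValueError as err:
    print("shape mismatch:", err)

# A scalar, by contrast, multiplies a vector or matrix of any size.
print(5 * w)  # [ 5 10 15]
```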
Algebraic Properties (Without Proof)
Vector addition and scalar multiplication, and matrix addition and scalar multiplication, share several familiar algebraic properties. For matrices $A, B, C$ of the same size and scalars $c, d$:
- Commutativity of addition:
$$
A + B = B + A.
$$
- Associativity of addition:
$$
(A + B) + C = A + (B + C).
$$
- Distributive properties:
$$
c(A + B) = cA + cB,
$$
$$
(c + d)A = cA + dA.
$$
There is also a zero matrix $0$ of the same size as $A$ such that $A + 0 = A$, and for every $A$ there is an additive inverse $-A$ (all entries negated) such that $A + (-A) = 0$.
These properties justify treating matrices and vectors in many ways like ordinary numbers, but with extra attention to dimensions and the fact that some operations (like matrix multiplication) behave differently.
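None of these identities needs to be taken on faith for a particular example; they can be spot-checked numerically, as in this small NumPy sketch:
```python
import numpy as np

A = np.array([[1, 0], [2, 1]])
B = np.array([[3, -1], [0, 4]])
c, d = 2, -3

# Spot-check the properties on one concrete example.
print(np.array_equal(A + B, B + A))                # commutativity of addition
print(np.array_equal(c * (A + B), c * A + c * B))  # distributivity over a sum of matrices
print(np.array_equal((c + d) * A, c * A + d * A))  # distributivity over a sum of scalars
print(np.array_equal(A + (-A), np.zeros((2, 2))))  # additive inverse gives the zero matrix
```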
Summary
In this chapter, you learned:
- Vectors as ordered lists of numbers (typically written as column vectors).
- Basic operations with vectors: addition and scalar multiplication.
- Matrices as rectangular arrays of numbers, with rows, columns, and entries $a_{ij}$.
- The size of a matrix: $m \times n$ (rows by columns).
- Special matrices: row/column matrices, square matrices, the zero matrix, and the identity matrix.
- The transpose operation, which swaps rows and columns.
- Basic matrix operations: equality, addition, subtraction, scalar multiplication.
- How vectors can be regarded as special matrices, and how multiple vectors can be assembled into a single matrix.
These ideas form the foundation for the next chapters, where you will use vectors and matrices to solve systems of linear equations, perform matrix multiplication, and study more advanced concepts such as determinants, eigenvalues, and eigenvectors.