What a Determinant Is (Conceptual View)
In this chapter we focus on determinants as a numerical quantity associated with a square matrix. For an $n \times n$ matrix $A$, its determinant is a single number, written in one of several equivalent ways:
- $\det(A)$
- $|A|$
- For a $2 \times 2$ or $3 \times 3$ matrix, sometimes by putting the entries between vertical bars, for example
$$
\begin{vmatrix}
a & b \\
c & d
\end{vmatrix}.
$$
Key roles of the determinant:
- It tells you whether a matrix is invertible: $\det(A) \neq 0$ means $A$ is invertible; $\det(A) = 0$ means $A$ is not invertible.
- It measures how the linear transformation defined by $A$ changes areas (in 2D), volumes (in 3D), and higher-dimensional analogues.
- Its sign (positive or negative) tells you whether the transformation preserves or reverses orientation (for instance, whether a 2D transformation keeps counterclockwise order counterclockwise, or flips it).
Here we develop ways to compute determinants and highlight their most important structural properties. Applications and deeper uses (for example, in eigenvalues or volume computations) are treated in other chapters.
Determinants of $2 \times 2$ and $3 \times 3$ Matrices
Determinant of a $2 \times 2$ matrix
For a $2 \times 2$ matrix
$$
A =
\begin{bmatrix}
a & b \\
c & d
\end{bmatrix},
$$
the determinant is defined by
$$
\det(A) = ad - bc.
$$
This formula is basic and will be reused often. In particular:
- If $ad - bc \neq 0$, then $A$ is invertible.
- If $ad - bc = 0$, then $A$ is singular (non-invertible).
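The $2 \times 2$ formula translates directly into code. Here is a minimal sketch in Python; the function name `det2` and the list-of-lists matrix representation are illustrative choices, not notation from the text:

```python
def det2(A):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    a, b = A[0]
    c, d = A[1]
    return a * d - b * c

# det = 1*4 - 2*3 = -2, which is nonzero, so this matrix is invertible.
print(det2([[1, 2], [3, 4]]))  # -2

# Here the second row is twice the first: det = 1*4 - 2*2 = 0, so singular.
print(det2([[1, 2], [2, 4]]))  # 0
```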
Determinant of a $3 \times 3$ matrix
For a $3 \times 3$ matrix
$$
A =
\begin{bmatrix}
a & b & c \\
d & e & f \\
g & h & i
\end{bmatrix},
$$
one direct formula is
$$
\det(A)
= aei + bfg + cdh - ceg - bdi - afh.
$$
You do not need to memorize this exact ordering if you learn systematic methods (cofactor expansion, row operations) that generalize to larger matrices, but this explicit formula is useful for quick $3 \times 3$ calculations.
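For quick checks, the explicit $3 \times 3$ formula can be coded term by term. This is a minimal sketch; the name `det3` is an illustrative choice:

```python
def det3(A):
    """Determinant of a 3x3 matrix via the explicit formula
    aei + bfg + cdh - ceg - bdi - afh."""
    (a, b, c), (d, e, f), (g, h, i) = A
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

# The rows of this matrix are linearly dependent, so the determinant is 0.
print(det3([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # 0

# The identity matrix has determinant 1.
print(det3([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # 1
```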
An alternative mnemonic for $3 \times 3$ is the “diagonal method” (sometimes called Sarrus’ rule), which works only for $3 \times 3$ matrices, not higher dimensions:
- Write the first two columns again to the right of the matrix.
- Add the products of the three downward diagonals.
- Subtract the products of the three upward diagonals.
Formally, however, we will rely on the more general definition using minors and cofactors.
Minors, Cofactors, and Cofactor Expansion
To define determinants of larger matrices, we use the concepts of minors and cofactors.
Minors
For an $n \times n$ matrix $A = [a_{ij}]$, the minor $M_{ij}$ is the determinant of the matrix obtained by:
- deleting row $i$,
- deleting column $j$.
The resulting $(n-1) \times (n-1)$ determinant is $M_{ij}$.
Example structure (not computing the value yet):
If
$$
A =
\begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 9
\end{bmatrix},
$$
then $M_{12}$ is the determinant of the matrix obtained by deleting row 1 and column 2:
$$
M_{12} =
\begin{vmatrix}
4 & 6 \\
7 & 9
\end{vmatrix}.
$$
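Extracting the submatrix for a minor is a simple delete-row, delete-column operation. The following sketch (function name `minor_matrix` is illustrative) uses 0-indexed positions, so the text's $M_{12}$ corresponds to indices $(0, 1)$:

```python
def minor_matrix(A, i, j):
    """Submatrix of A obtained by deleting row i and column j (0-indexed)."""
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
# M_{12} in the text (delete row 1 and column 2, 1-indexed):
print(minor_matrix(A, 0, 1))  # [[4, 6], [7, 9]]
```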
Cofactors
The cofactor $C_{ij}$ of the entry $a_{ij}$ is defined by
$$
C_{ij} = (-1)^{i+j} M_{ij}.
$$
The factor $(-1)^{i+j}$ alternates sign in a checkerboard pattern:
- First row: $+ \ - \ + \ - \ \dots$
- Second row: $- \ + \ - \ + \ \dots$
- and so on.
For a $3 \times 3$ matrix, the signs of cofactors look like
$$
\begin{bmatrix}
+ & - & + \\
- & + & - \\
+ & - & +
\end{bmatrix}.
$$
Cofactor expansion (Laplace expansion)
The determinant of an $n \times n$ matrix can be computed by expanding along any row or any column.
If we expand along row $i$:
$$
\det(A) = a_{i1} C_{i1} + a_{i2} C_{i2} + \dots + a_{in} C_{in}.
$$
If we expand along column $j$:
$$
\det(A) = a_{1j} C_{1j} + a_{2j} C_{2j} + \dots + a_{nj} C_{nj}.
$$
The crucial facts:
- You may choose any row or column; the result is the same.
- Strategically, you choose a row or column with many zeros to simplify the work, because terms where $a_{ij} = 0$ contribute nothing.
Cofactor expansion defines $\det(A)$ recursively for all $n \ge 1$:
- For $1 \times 1$, $\det([a_{11}]) = a_{11}$.
- For $n \times n$, define $\det(A)$ in terms of determinants of $(n-1) \times (n-1)$ minors via cofactor expansion.
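The recursive definition above can be sketched directly, expanding along the first row. This is illustrative rather than efficient (the work grows factorially), and the function name `det` is a local choice:

```python
def det(A):
    """Determinant via cofactor expansion along the first row (recursive)."""
    n = len(A)
    if n == 1:
        return A[0][0]  # base case: det([a]) = a
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        M = [row[:j] + row[j+1:] for row in A[1:]]
        # Cofactor sign (-1)^(0+j) gives the +, -, +, ... checkerboard pattern.
        total += (-1) ** j * A[0][j] * det(M)
    return total

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))   # 24
```

Note how a zero entry $a_{0j}$ contributes nothing, which is why expanding along a row or column with many zeros saves work.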
Determinant and Row Operations
Determinants interact in specific ways with elementary row operations. These properties make it possible to compute determinants efficiently using row reduction (Gaussian elimination) while keeping track of how the operations change the determinant.
Consider a square matrix $A$ and an elementary row operation that produces a new matrix $B$.
- Row swap: Swapping two rows multiplies the determinant by $-1$.
- If $B$ is obtained from $A$ by swapping two rows, then
$$
\det(B) = -\det(A).
$$
- Row scaling: Multiplying a row by a nonzero scalar $k$ multiplies the determinant by $k$.
- If $B$ is obtained from $A$ by multiplying row $i$ by $k$, then
$$
\det(B) = k \cdot \det(A).
$$
- Row replacement (row addition): Adding a multiple of one row to another row does not change the determinant.
- If $B$ is obtained from $A$ by replacing row $i$ with $\mathbf{r}_i + k\,\mathbf{r}_j$ (with $i \ne j$), then
$$
\det(B) = \det(A).
$$
These facts also hold if you perform the operations on columns instead of rows.
Using these rules:
- You can row-reduce a matrix to an upper triangular form and then compute the determinant easily.
- While you are row-reducing, you must keep track of any row swaps and row scalings, because they change the determinant in known ways.
Determinants and Triangular Matrices
A particularly simple and useful case concerns triangular matrices.
A matrix $A$ is upper triangular if all entries below the main diagonal are zero. It is lower triangular if all entries above the diagonal are zero.
If $A$ is an $n \times n$ upper or lower triangular matrix, then
$$
\det(A) = a_{11} a_{22} \cdots a_{nn},
$$
the product of the diagonal entries.
This gives a fast way to compute determinants:
- Use row operations to convert $A$ into an upper triangular matrix $U$.
- Track the effect of the row operations on the determinant.
- Multiply the diagonal entries of $U$ and adjust for row swaps and scalings.
Example of the logic:
- Suppose $U$ is obtained from $A$ only by:
- adding multiples of one row to another (no change to determinant),
- and swapping rows $s$ times (each swap multiplies the determinant by $-1$).
- Then
$$
\det(A) = (-1)^s \, \det(U) = (-1)^s (u_{11} u_{22} \cdots u_{nn}).
$$
If you also scale rows, you must divide by the product of scaling factors at the end to recover $\det(A)$ from $\det(U)$.
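The elimination strategy above can be sketched in Python. This version uses only row swaps (counted, each flipping the sign) and row replacement (which leaves the determinant unchanged), so no scaling factors need to be divided out; the function name `det_by_elimination` is an illustrative choice:

```python
def det_by_elimination(A):
    """Determinant via row reduction to upper triangular form U.
    Only swaps and row replacements are used, so
    det(A) = (-1)^swaps * (product of the diagonal entries of U)."""
    U = [row[:] for row in A]  # work on a copy
    n = len(U)
    swaps = 0
    for col in range(n):
        # Find a nonzero pivot at or below the diagonal.
        pivot = next((r for r in range(col, n) if U[r][col] != 0), None)
        if pivot is None:
            return 0.0  # no pivot in this column, so det(A) = 0
        if pivot != col:
            U[col], U[pivot] = U[pivot], U[col]
            swaps += 1  # each swap multiplies the determinant by -1
        # Row replacement: eliminate entries below the pivot (det unchanged).
        for r in range(col + 1, n):
            factor = U[r][col] / U[col][col]
            U[r] = [x - factor * y for x, y in zip(U[r], U[col])]
    prod = 1.0
    for i in range(n):
        prod *= U[i][i]
    return (-1) ** swaps * prod

print(det_by_elimination([[0, 1], [1, 0]]))  # one swap: -1.0
```

Unlike naive cofactor expansion, this runs in roughly $n^3$ arithmetic operations.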
Key Algebraic Properties of Determinants
Beyond computation, determinants satisfy important algebraic properties that connect them to matrix multiplication, inverses, and transposes.
Multiplicative property
For any two $n \times n$ matrices $A$ and $B$,
$$
\det(AB) = \det(A)\,\det(B).
$$
Consequences:
- If $\det(A) \neq 0$ and $\det(B) \neq 0$, then $\det(AB) \neq 0$.
- If $\det(A) = 0$ or $\det(B) = 0$, then $\det(AB) = 0$.
- For an invertible matrix $A$,
$$
\det(A^{-1}) = \frac{1}{\det(A)}.
$$
Indeed, since $AA^{-1} = I$ and $\det(I) = 1$, we have
$$
\det(A)\,\det(A^{-1}) = \det(I) = 1.
$$
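The multiplicative property can be checked numerically on a small example. This is a minimal sketch for $2 \times 2$ matrices; the helper names `det2` and `matmul2` and the sample matrices are illustrative:

```python
def det2(A):
    a, b = A[0]
    c, d = A[1]
    return a * d - b * c

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 6]]
# det(A) = -2, det(B) = -5, and det(AB) matches their product.
print(det2(matmul2(A, B)), det2(A) * det2(B))  # 10 10
```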
Determinant of the transpose
For any square matrix $A$,
$$
\det(A^T) = \det(A).
$$
This means:
- Properties derived from row operations on $A$ have column analogues, because row operations on $A$ correspond to column operations on $A^T$.
- The determinant is symmetric with respect to rows and columns.
Determinant of the identity and scalar multiples
Let $I_n$ be the $n \times n$ identity matrix. Then
$$
\det(I_n) = 1.
$$
For a scalar $k$ and an $n \times n$ matrix $A$,
$$
\det(kA) = k^n \det(A).
$$
Reason: Multiplying every entry of $A$ by $k$ effectively scales every row (or column) by $k$, and scaling one row multiplies the determinant by $k$. Doing this to all $n$ rows multiplies by $k^n$.
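A quick numeric check of $\det(kA) = k^n \det(A)$ for $n = 2$ (the helper `det2` and the sample values are illustrative):

```python
def det2(A):
    a, b = A[0]
    c, d = A[1]
    return a * d - b * c

A = [[1, 2], [3, 4]]
k = 3
kA = [[k * x for x in row] for row in A]
# For n = 2, det(kA) = k^2 * det(A): here 9 * (-2) = -18.
print(det2(kA))  # -18
```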
Linearity in a single row (or column)
The determinant is a linear function of each row separately, if the other rows are held fixed. More precisely, if you treat one row as a vector variable:
- $\det(\dots, \alpha \mathbf{r} + \beta \mathbf{s}, \dots) = \alpha \det(\dots, \mathbf{r}, \dots) + \beta \det(\dots, \mathbf{s}, \dots)$,
- and similarly for columns.
This is compatible with:
- Row addition (which leaves the determinant unchanged).
- Row scaling (which multiplies the determinant by the same scalar).
Determinants and Invertibility
Determinants give a compact criterion for invertibility. For an $n \times n$ matrix $A$:
- $A$ is invertible $\iff \det(A) \neq 0$.
- $A$ is singular (non-invertible) $\iff \det(A) = 0$.
Geometric intuition:
- If $\det(A) = 0$, the linear transformation associated with $A$ “flattens” space into a lower-dimensional subspace. The image has dimension strictly less than $n$, so you cannot “reverse” the transformation, and $A$ has no inverse.
- If $\det(A) \neq 0$, the transformation is one-to-one and onto (a bijection) on $\mathbb{R}^n$, so an inverse linear transformation exists.
This criterion is equivalent to various other conditions related to rank, pivots, and solutions of linear systems, but those are discussed in other chapters.
Determinants and Volume/Orientation (Geometric View)
The determinant has a geometric meaning in terms of volume (or area in 2D, or hyper-volume in higher dimensions).
Consider a linear transformation $T:\mathbb{R}^n \to \mathbb{R}^n$ represented by an $n \times n$ matrix $A$.
- Volume scaling: For any region $S$ in $\mathbb{R}^n$ with finite volume,
$$
\text{Vol}(T(S)) = |\det(A)| \cdot \text{Vol}(S).
$$
So:
- If $|\det(A)| = 2$, areas/volumes are doubled.
- If $|\det(A)| = 0.5$, they are halved.
- If $\det(A) = 0$, all volumes are collapsed to zero (space is flattened into a lower dimension).
- Orientation:
- If $\det(A) > 0$, the transformation preserves orientation (e.g., in 2D, an ordered triple of points that is counterclockwise stays counterclockwise).
- If $\det(A) < 0$, the transformation reverses orientation (it includes a reflection-like effect).
In $\mathbb{R}^2$, the absolute value of the determinant of the $2 \times 2$ matrix formed by two vectors gives the area of the parallelogram they span. In $\mathbb{R}^3$, the absolute value of the determinant of the $3 \times 3$ matrix with columns as three vectors gives the volume of the parallelepiped they span.
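The parallelogram area in $\mathbb{R}^2$ is a one-line computation. A minimal sketch (the name `parallelogram_area` is illustrative):

```python
def parallelogram_area(u, v):
    """Area of the parallelogram spanned by 2D vectors u and v:
    the absolute value of the 2x2 determinant with u and v as columns."""
    return abs(u[0] * v[1] - u[1] * v[0])

# The unit square (spanned by e1 and e2) has area 1; stretching one side doubles it.
print(parallelogram_area([1, 0], [0, 1]))  # 1
print(parallelogram_area([2, 0], [0, 1]))  # 2
```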
The geometric perspective connects determinants to topics like change of variables in integrals and Jacobians, which are explored in later courses.
Determinant via Permutations (Advanced Definition)
The most general and theoretical definition of the determinant uses permutations. This definition underlies all the properties we have used but is not usually the most efficient for computations.
Let $A = [a_{ij}]$ be an $n \times n$ matrix. A permutation $\sigma$ of $\{1, 2, \dots, n\}$ is a bijection from the set to itself. Each permutation $\sigma$ has a sign $\text{sgn}(\sigma)$:
- $\text{sgn}(\sigma) = +1$ if $\sigma$ is an even permutation,
- $\text{sgn}(\sigma) = -1$ if $\sigma$ is an odd permutation.
Then the determinant of $A$ is defined by
$$
\det(A) = \sum_{\sigma} \text{sgn}(\sigma)\, a_{1,\sigma(1)} a_{2,\sigma(2)} \cdots a_{n,\sigma(n)},
$$
where the sum is over all $n!$ permutations $\sigma$ of $\{1,2,\dots,n\}$.
This formula:
- Matches the earlier definitions (via recursive expansion and row operations).
- Explains why determinants are multilinear and alternating in the rows (or columns).
- Makes the multiplicative property $\det(AB) = \det(A)\det(B)$ plausible, though its proof is nontrivial.
In practice, you rarely compute determinants directly from the permutation formula for $n > 3$, but it is useful conceptually and theoretically, especially in more advanced work.
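For illustration only, the permutation formula can be implemented directly for small $n$ using Python's standard library; the names `perm_sign` and `det_by_permutations` are illustrative choices, and indices are 0-based:

```python
from itertools import permutations
from math import prod

def perm_sign(p):
    """Sign of a permutation: +1 if even, -1 if odd, via inversion count."""
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
                     if p[i] > p[j])
    return -1 if inversions % 2 else 1

def det_by_permutations(A):
    """det(A) = sum over all n! permutations sigma of
    sgn(sigma) * a_{0,sigma(0)} * ... * a_{n-1,sigma(n-1)} (0-indexed)."""
    n = len(A)
    return sum(perm_sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(det_by_permutations([[1, 2], [3, 4]]))  # 1*4 - 2*3 = -2
```

For $n = 2$ this reproduces $ad - bc$: the identity permutation contributes $+ad$ and the single swap contributes $-bc$. The $n!$ growth of the sum is exactly why this formula is conceptual rather than computational.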
Practical Strategies for Computing Determinants
For actual calculations, you typically choose an efficient method depending on the matrix.
Common strategies:
- $2 \times 2$: Use the direct formula $ad - bc$.
- $3 \times 3$:
- Use the explicit $3 \times 3$ formula, or
- Use cofactor expansion along a row or column with zeros, or
- Use row operations to make the matrix triangular and multiply diagonal entries, adjusting for row swaps and scalings.
- Larger matrices:
- Prefer row-reduction to an upper triangular form:
- Use row replacement operations freely (no change in determinant).
- Keep a count of row swaps (each multiplies the determinant by $-1$).
- Keep track of any row scalings (divide out their product at the end).
- Avoid naive cofactor expansion on a dense matrix: it grows exponentially in work.
- Sparse matrices or ones with many zeros:
- Use cofactor expansion along the row or column with the most zeros.
These techniques combine the structural properties of determinants with practical computational considerations.
Determinants as a Tool
Determinants are not just a way to attach a number to a matrix. They are a tool that:
- Detects invertibility and singularity.
- Describes volume scaling and orientation in linear transformations.
- Connects to eigenvalues via the characteristic polynomial $\det(A - \lambda I)$.
- Appears in formulas for matrix inverses (adjugate matrices).
- Plays a role in solving systems of linear equations through Cramer’s rule (though not usually efficient for large systems).
Later chapters build on these ideas, but the core of working with determinants is:
- understanding how to compute them reliably,
- knowing how they change under basic operations,
- and recognizing what a nonzero or zero determinant tells you about a matrix.