
Determinants

What a Determinant Is (Conceptual View)

In this chapter we focus on determinants as a numerical quantity associated with a square matrix. For an $n \times n$ matrix $A$, its determinant is a single number, written in one of several equivalent ways: $\det(A)$, $\det A$, or $|A|$ (with the entries of $A$ between vertical bars).

Key roles of the determinant:

  • It tells you whether $A$ is invertible ($\det(A) \ne 0$) or singular ($\det(A) = 0$).
  • It measures how the linear transformation associated with $A$ scales volume and whether it preserves or reverses orientation.
  • It appears as an ingredient in further topics such as eigenvalue computations and change of variables in integrals.

Here we develop ways to compute determinants and highlight their most important structural properties. Applications and deeper uses (for example, in eigenvalues or volume computations) are treated in other chapters.

Determinants of $2 \times 2$ and $3 \times 3$ Matrices

Determinant of a $2 \times 2$ matrix

For a $2 \times 2$ matrix
$$
A =
\begin{bmatrix}
a & b \\
c & d
\end{bmatrix},
$$
the determinant is defined by
$$
\det(A) = ad - bc.
$$

This formula is basic and will be reused often. In particular, it is the base case to which cofactor expansions of larger determinants eventually reduce, and it already exhibits the invertibility criterion: the $2 \times 2$ matrix $A$ is invertible exactly when $ad - bc \ne 0$.
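For instance, with concrete entries chosen purely for illustration,
$$
\det \begin{bmatrix} 3 & 1 \\ 2 & 4 \end{bmatrix} = 3 \cdot 4 - 1 \cdot 2 = 10.
$$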

Determinant of a $3 \times 3$ matrix

For a $3 \times 3$ matrix
$$
A =
\begin{bmatrix}
a & b & c \\
d & e & f \\
g & h & i
\end{bmatrix},
$$
one direct formula is
$$
\det(A)
= aei + bfg + cdh - ceg - bdi - afh.
$$

You do not need to memorize this exact ordering if you learn systematic methods (cofactor expansion, row operations) that generalize to larger matrices, but this explicit formula is useful for quick $3 \times 3$ calculations.

An alternative mnemonic for $3 \times 3$ is the “diagonal method” (sometimes called Sarrus’ rule), which works only for $3 \times 3$ matrices, not higher dimensions:

  1. Write the first two columns again to the right of the matrix.
  2. Add the products of the three downward diagonals.
  3. Subtract the products of the three upward diagonals.

Formally, however, we will rely on the more general definition using minors and cofactors.
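If you want to sanity-check the explicit $3 \times 3$ formula numerically, the following minimal Python sketch (the function name det3 and the sample matrix are our own choices for illustration) evaluates it directly and compares the result with numpy.linalg.det:

```python
import numpy as np

def det3(A):
    """Explicit 3x3 determinant: aei + bfg + cdh - ceg - bdi - afh."""
    (a, b, c), (d, e, f), (g, h, i) = A
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]   # sample matrix chosen so the determinant is nonzero

print(det3(A))                     # -3
print(np.linalg.det(np.array(A)))  # approximately -3.0
```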

Minors, Cofactors, and Cofactor Expansion

To define determinants of larger matrices, we use the concepts of minors and cofactors.

Minors

For an $n \times n$ matrix $A = [a_{ij}]$, the minor $M_{ij}$ is the determinant of the matrix obtained by:

  • deleting row $i$ of $A$, and
  • deleting column $j$ of $A$.

The resulting $(n-1) \times (n-1)$ determinant is $M_{ij}$.

Example structure (not computing the value yet):

If
$$
A =
\begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 9
\end{bmatrix},
$$
then $M_{12}$ is the determinant of the matrix obtained by deleting row 1 and column 2:
$$
M_{12} =
\begin{vmatrix}
4 & 6 \\
7 & 9
\end{vmatrix}.
$$
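As a small programmatic sketch of this construction (the helper name minor is our own, and rows/columns are indexed from 1 to match the notation above), one can delete the row and column with numpy and take the determinant of what remains:

```python
import numpy as np

def minor(A, i, j):
    """Minor M_ij: determinant of A with row i and column j deleted (1-based indices)."""
    sub = np.delete(np.delete(A, i - 1, axis=0), j - 1, axis=1)
    return np.linalg.det(sub)

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

print(minor(A, 1, 2))   # det of [[4, 6], [7, 9]] = 4*9 - 6*7 = -6
```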

Cofactors

The cofactor $C_{ij}$ of the entry $a_{ij}$ is defined by
$$
C_{ij} = (-1)^{i+j} M_{ij}.
$$

The factor $(-1)^{i+j}$ alternates sign in a checkerboard pattern:

For a $3 \times 3$ matrix, the signs of cofactors look like
$$
\begin{bmatrix}
+ & - & + \\
- & + & - \\
+ & - & +
\end{bmatrix}.
$$

Cofactor expansion (Laplace expansion)

The determinant of an $n \times n$ matrix can be computed by expanding along any row or any column.

If we expand along row $i$:
$$
\det(A) = a_{i1} C_{i1} + a_{i2} C_{i2} + \dots + a_{in} C_{in}.
$$

If we expand along column $j$:
$$
\det(A) = a_{1j} C_{1j} + a_{2j} C_{2j} + \dots + a_{nj} C_{nj}.
$$

The crucial facts:

  • Every choice of row or column gives the same value, so the determinant is well defined by this expansion.
  • You are therefore free to expand along whichever row or column is most convenient, typically the one containing the most zeros.

This definition allows you to define $\det(A)$ for all $n \ge 1$ recursively:

  • For $n = 1$, set $\det([a_{11}]) = a_{11}$.
  • For $n \ge 2$, expand along any row or column; each cofactor involves an $(n-1) \times (n-1)$ determinant, which is computed in the same way.
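As an illustration of this recursion, here is a minimal pure-Python sketch (deliberately naive and exponential-time, so only suitable for small matrices) that expands along the first row:

```python
def det(A):
    """Determinant by cofactor expansion along the first row (recursive)."""
    n = len(A)
    if n == 1:                                 # base case: det([a]) = a
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 1 and column j+1 (0-based: row 0 and column j).
        sub = [row[:j] + row[j+1:] for row in A[1:]]
        cofactor = (-1) ** j * det(sub)        # (-1)^(1+(j+1)) = (-1)^j
        total += A[0][j] * cofactor
    return total

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 0, 1], [1, 3, -1], [0, 5, 4]]))  # 39
```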

Determinant and Row Operations

Determinants interact in specific ways with elementary row operations. These properties make it possible to compute determinants efficiently using row reduction (Gaussian elimination) while keeping track of how the operations change the determinant.

Consider a square matrix $A$ and an elementary row operation that produces a new matrix $B$.

  1. Row swap: Swapping two rows multiplies the determinant by $-1$.
    • If $B$ is obtained from $A$ by swapping two rows, then
      $$
      \det(B) = -\det(A).
      $$
  2. Row scaling: Multiplying a row by a nonzero scalar $k$ multiplies the determinant by $k$.
    • If $B$ is obtained from $A$ by multiplying row $i$ by $k$, then
      $$
      \det(B) = k \cdot \det(A).
      $$
  3. Row replacement (row addition): Adding a multiple of one row to another row does not change the determinant.
    • If $B$ is obtained from $A$ by replacing row $i$ with $(\text{row } i) + k \cdot (\text{row } j)$ (with $i \ne j$), then
      $$
      \det(B) = \det(A).
      $$

These facts also hold if you perform the operations on columns instead of rows.
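A quick numerical illustration of the three rules (the matrix and the scalar $k = 5$ are arbitrary choices for this sketch):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 2.0],
              [0.0, 1.0, 4.0]])
dA = np.linalg.det(A)

# 1. Row swap: exchanging rows 1 and 2 flips the sign of the determinant.
B = A[[1, 0, 2], :]
print(np.isclose(np.linalg.det(B), -dA))        # True

# 2. Row scaling: multiplying row 1 by k = 5 multiplies the determinant by 5.
C = A.copy()
C[0, :] *= 5
print(np.isclose(np.linalg.det(C), 5 * dA))     # True

# 3. Row replacement: adding 3 * (row 2) to row 3 leaves the determinant unchanged.
D = A.copy()
D[2, :] += 3 * D[1, :]
print(np.isclose(np.linalg.det(D), dA))         # True
```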

Using these rules, you can track exactly how any sequence of row operations changes the determinant. This is the basis of the efficient computation method described in the next section: reduce the matrix to triangular form and adjust for the operations used.

Determinants and Triangular Matrices

A particularly simple and useful case concerns triangular matrices.

A matrix $A$ is upper triangular if all entries below the main diagonal are zero. It is lower triangular if all entries above the diagonal are zero.

If $A$ is an $n \times n$ upper or lower triangular matrix, then
$$
\det(A) = a_{11} a_{22} \cdots a_{nn},
$$
the product of the diagonal entries.

This gives a fast way to compute determinants:

  • Use row operations to reduce $A$ to an upper triangular matrix $U$.
  • Multiply the diagonal entries of $U$.
  • Adjust for the row operations used along the way.

Example of the logic: if $A$ is reduced to $U$ using only row replacements and $s$ row swaps, then
$$
\det(A) = (-1)^s \det(U) = (-1)^s \, u_{11} u_{22} \cdots u_{nn}.
$$

If you also scale rows, you must divide by the product of the scaling factors at the end to recover $\det(A)$ from $\det(U)$.
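A minimal Python sketch of this strategy (the function name det_by_elimination is our own; it uses partial pivoting, so only swaps and replacements are needed and no scaling factors have to be divided out):

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via row reduction to upper triangular form.

    Only row swaps (tracked by a sign) and row replacements (which do not
    change the determinant) are used; the result is the sign times the
    product of the diagonal entries of the triangular matrix."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        # Partial pivoting: bring the largest remaining entry of column k to the pivot position.
        p = k + np.argmax(np.abs(U[k:, k]))
        if np.isclose(U[p, k], 0.0):
            return 0.0                  # the pivot column is all zeros from row k down
        if p != k:
            U[[k, p]] = U[[p, k]]       # row swap
            sign = -sign                # each swap flips the sign
        for r in range(k + 1, n):       # row replacement: eliminate below the pivot
            U[r, k:] -= (U[r, k] / U[k, k]) * U[k, k:]
    return sign * np.prod(np.diag(U))

A = [[2, 1, 0], [1, 3, 2], [0, 1, 4]]
print(det_by_elimination(A))       # 16.0
print(np.linalg.det(np.array(A)))  # approximately 16.0
```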

Key Algebraic Properties of Determinants

Beyond computation, determinants satisfy important algebraic properties that connect them to matrix multiplication, inverses, and transposes.

Multiplicative property

For any two $n \times n$ matrices $A$ and $B$,
$$
\det(AB) = \det(A)\,\det(B).
$$

Consequences:

  • If $A$ is invertible, then $\det(A^{-1}) = \dfrac{1}{\det(A)}$.
  • A product $AB$ is singular ($\det(AB) = 0$) exactly when at least one of the factors has determinant zero.

Indeed, since $AA^{-1} = I$ and $\det(I) = 1$, we have
$$
\det(A)\,\det(A^{-1}) = \det(I) = 1.
$$
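Both facts are easy to check numerically; the sketch below uses random matrices, which are invertible with overwhelming probability:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) = det(A) det(B)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True

# det(A^{-1}) = 1 / det(A)
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A)))    # True
```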

Determinant of the transpose

For any square matrix $A$,
$$
\det(A^T) = \det(A).
$$

This means:

  • Any statement about determinants and rows has a corresponding statement about columns.
  • In particular, cofactor expansion works along columns as well as rows, and column operations affect the determinant in the same way as the corresponding row operations.

Determinant of the identity and scalar multiples

Let $I_n$ be the $n \times n$ identity matrix. Then
$$
\det(I_n) = 1.
$$

For a scalar $k$ and an $n \times n$ matrix $A$,
$$
\det(kA) = k^n \det(A).
$$

Reason: Multiplying every entry of $A$ by $k$ effectively scales every row (or column) by $k$, and scaling one row multiplies the determinant by $k$. Doing this to all $n$ rows multiplies by $k^n$.
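In the $2 \times 2$ case this can be seen directly:
$$
\det(kA) = \begin{vmatrix} ka & kb \\ kc & kd \end{vmatrix} = (ka)(kd) - (kb)(kc) = k^2 (ad - bc) = k^2 \det(A).
$$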

Linearity in a single row (or column)

The determinant is a linear function of each row separately, when the other rows are held fixed. More precisely, if you treat one row as a vector variable:

  • Writing that row as a sum $\mathbf{u} + \mathbf{v}$ splits the determinant into the sum of the two determinants obtained by placing $\mathbf{u}$ and $\mathbf{v}$ in that row.
  • Multiplying that row by a scalar $k$ multiplies the determinant by $k$.

This is compatible with the row-scaling rule from the section on row operations and with cofactor expansion along that row, which is visibly linear in the entries of a single row.
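For example, in the $2 \times 2$ case, splitting the first row as $(a_1 + a_2,\; b_1 + b_2)$ gives
$$
\begin{vmatrix} a_1 + a_2 & b_1 + b_2 \\ c & d \end{vmatrix}
= (a_1 + a_2)\,d - (b_1 + b_2)\,c
= \begin{vmatrix} a_1 & b_1 \\ c & d \end{vmatrix}
+ \begin{vmatrix} a_2 & b_2 \\ c & d \end{vmatrix}.
$$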

Determinants and Invertibility

Determinants give a compact criterion for invertibility. For an $n \times n$ matrix $A$:

  • $A$ is invertible if and only if $\det(A) \ne 0$.
  • Equivalently, $A$ is singular (not invertible) exactly when $\det(A) = 0$.

Geometric intuition: if $\det(A) = 0$, the associated linear transformation collapses $\mathbb{R}^n$ onto a lower-dimensional set, so the transformation cannot be undone; if $\det(A) \ne 0$, no such collapse occurs and the transformation is reversible.

This criterion is equivalent to various other conditions related to rank, pivots, and solutions of linear systems, but those are discussed in other chapters.

Determinants and Volume/Orientation (Geometric View)

The determinant has a geometric meaning in terms of volume (or area in 2D, or hyper-volume in higher dimensions).

Consider a linear transformation $T:\mathbb{R}^n \to \mathbb{R}^n$ represented by an $n \times n$ matrix $A$.

  1. Volume scaling: For any region $S$ in $\mathbb{R}^n$ with finite volume,
    $$
    \text{Vol}(T(S)) = |\det(A)| \cdot \text{Vol}(S).
    $$
    So:
    • If $|\det(A)| = 2$, areas/volumes are doubled.
    • If $|\det(A)| = 0.5$, they are halved.
    • If $\det(A) = 0$, all volumes are collapsed to zero (space is flattened into a lower dimension).
  2. Orientation:
    • If $\det(A) > 0$, the transformation preserves orientation (e.g., in 2D, an ordered triple of points that is counterclockwise stays counterclockwise).
    • If $\det(A) < 0$, the transformation reverses orientation (it includes a reflection-like effect).

In $\mathbb{R}^2$, the absolute value of the determinant of the $2 \times 2$ matrix formed by two vectors gives the area of the parallelogram they span. In $\mathbb{R}^3$, the absolute value of the determinant of the $3 \times 3$ matrix with columns as three vectors gives the volume of the parallelepiped they span.
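For example, the parallelogram spanned by the vectors $(2, 0)$ and $(1, 3)$ (numbers chosen only for illustration) has area
$$
\left| \det \begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix} \right| = |2 \cdot 3 - 1 \cdot 0| = 6.
$$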

The geometric perspective connects determinants to topics like change of variables in integrals and Jacobians, which are explored in later courses.

Determinant via Permutations (Advanced Definition)

The most general and theoretical definition of the determinant uses permutations. This definition underlies all the properties we have used but is not usually the most efficient for computations.

Let $A = [a_{ij}]$ be an $n \times n$ matrix. A permutation $\sigma$ of $\{1, 2, \dots, n\}$ is a bijection from the set to itself. Each permutation $\sigma$ has a sign $\text{sgn}(\sigma)$:

  • $\text{sgn}(\sigma) = +1$ if $\sigma$ can be written as a product of an even number of transpositions (an even permutation),
  • $\text{sgn}(\sigma) = -1$ if it requires an odd number of transpositions (an odd permutation).

Then the determinant of $A$ is defined by
$$
\det(A) = \sum_{\sigma} \text{sgn}(\sigma)\, a_{1,\sigma(1)} a_{2,\sigma(2)} \cdots a_{n,\sigma(n)},
$$
where the sum is over all $n!$ permutations $\sigma$ of $\{1,2,\dots,n\}$.

This formula:

  • contains one term for each of the $n!$ permutations, each term being a signed product of $n$ entries taken one from each row and each column;
  • reproduces the explicit $2 \times 2$ and $3 \times 3$ formulas given earlier;
  • is the natural starting point for proving the general properties used in this chapter.

In practice, you rarely compute determinants directly from the permutation formula for $n > 3$, but it is useful conceptually and theoretically, especially in more advanced work.
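For completeness, here is a direct (and deliberately naive, factorial-time) Python sketch of the permutation formula, with the sign computed by counting inversions; the helper names are our own:

```python
from itertools import permutations

def sgn(sigma):
    """Sign of a permutation: (-1) raised to the number of inversions."""
    inversions = sum(1 for i in range(len(sigma))
                       for j in range(i + 1, len(sigma))
                       if sigma[i] > sigma[j])
    return -1 if inversions % 2 else 1

def det_by_permutations(A):
    """Sum over all permutations sigma of sgn(sigma) * a[1,sigma(1)] * ... * a[n,sigma(n)]."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sgn(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

print(det_by_permutations([[2, 1, 0], [1, 3, 2], [0, 1, 4]]))   # 16
```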

Practical Strategies for Computing Determinants

For actual calculations, you typically choose an efficient method depending on the matrix.

Common strategies:

  1. $2 \times 2$: Use the direct formula $ad - bc$.
  2. $3 \times 3$:
    • Use the explicit $3 \times 3$ formula, or
    • Use cofactor expansion along a row or column with zeros, or
    • Use row operations to make the matrix triangular and multiply diagonal entries, adjusting for row swaps and scalings.
  3. Larger matrices:
    • Prefer row-reduction to an upper triangular form:
      • Use row replacement operations freely (no change in determinant).
      • Keep a count of row swaps (each multiplies the determinant by $-1$).
      • Keep track of any row scalings (divide out their product at the end).
    • Avoid naive cofactor expansion on a dense matrix: it grows exponentially in work.
  4. Sparse matrices or ones with many zeros:
    • Use cofactor expansion along the row or column with the most zeros.

These techniques combine the structural properties of determinants with practical computational considerations.
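In numerical practice you would rarely hand-roll any of this: library routines based on LU factorization do the work, for example numpy.linalg.det, or numpy.linalg.slogdet when the determinant itself might overflow or underflow:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))

print(np.linalg.det(A))      # fine here, but can overflow/underflow for larger or badly scaled matrices
sign, logabsdet = np.linalg.slogdet(A)
print(sign, logabsdet)       # sign of det(A) and log(|det(A)|), numerically safer
```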

Determinants as a Tool

Determinants are not just a way to attach a number to a matrix. They are a tool that:

  • detects whether a matrix is invertible,
  • measures how the associated linear transformation scales volume and whether it preserves orientation,
  • behaves predictably under row operations, products, and transposes, which makes it useful in both computation and theory.

Later chapters build on these ideas, but the core of working with determinants is:

  • knowing the explicit formulas for small matrices,
  • computing efficiently with cofactor expansion and row reduction,
  • applying the key algebraic properties (multiplicativity, transpose invariance, the invertibility criterion) correctly.
