Basic properties of determinants
In this chapter we collect the main algebraic rules that determinants satisfy. We will focus on square matrices over the real numbers, but the properties hold more generally.
We will write $\det(A)$ or $|A|$ for the determinant of a square matrix $A$.
Linearity in a single row (or column)
A determinant is linear in each row separately when all other rows are fixed. Concretely, if $A$ is an $n\times n$ matrix and you look at, say, the $i$‑th row:
- If you multiply the $i$‑th row by a scalar $c$, then
$$
\det(\text{matrix with row } i \text{ replaced by } c\,\text{row } i)
= c \,\det(A).
$$
- If you replace the $i$‑th row by a sum of two vectors, $r_i = u + v$, then
$$
\det(\text{matrix with row } i = u+v)
=
\det(\text{matrix with row } i = u)
+
\det(\text{matrix with row } i = v),
$$
with all other rows kept the same.
Exactly the same statements hold if you talk about columns instead of rows: determinants are linear in each column when the others are fixed.
A key consequence: if some row of a matrix is all zeros, then the determinant is $0$, since multiplying that row by $0$ leaves the matrix unchanged while multiplying the determinant by $0$.
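As a quick numeric check, here is a minimal sketch using NumPy (the matrix, the row index, and the scalar are arbitrary illustrative choices); the sum rule is revisited with a concrete example later in this chapter:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 2.0],
              [0.0, 1.0, 4.0]])
i = 1  # the row we vary; all other rows stay fixed

# Scaling rule: multiplying row i by c multiplies the determinant by c.
c = 5.0
B = A.copy()
B[i] *= c
print(np.isclose(np.linalg.det(B), c * np.linalg.det(A)))  # True

# Consequence: a zero row forces the determinant to be 0.
Z = A.copy()
Z[i] = 0.0
print(np.isclose(np.linalg.det(Z), 0.0))                   # True
```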
Effect of elementary row operations
Determinants interact in a simple way with the three elementary row operations used in Gaussian elimination.
Consider an $n\times n$ matrix $A$ and perform a single elementary row operation to obtain $B$.
- Swapping two rows $R_i \leftrightarrow R_j$:
$$
\det(B) = -\det(A).
$$
One row swap multiplies the determinant by $-1$. Two row swaps multiply it by $(-1)^2 = 1$, so the overall sign change depends only on the parity (odd/even) of the number of swaps.
- Multiplying a row by a nonzero scalar $c$:
$$
\det(B) = c \,\det(A).
$$
- Adding a multiple of one row to another:
Replace $R_i$ by $R_i + c R_j$ (with $i\neq j$). Then
$$
\det(B) = \det(A).
$$
This “row replacement” operation does not change the determinant.
Again, all of these have column versions if instead you perform the corresponding operation on columns.
These relationships allow you to track how the determinant changes while doing row reduction, which is important for computation but also for theoretical arguments about when a determinant is zero or nonzero.
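Each effect can be checked directly; the following sketch uses an arbitrary $3\times 3$ example with NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 4.0],
              [0.0, 2.0, 5.0]])
d = np.linalg.det(A)

# Swap rows 0 and 1: the determinant changes sign.
S = A[[1, 0, 2]]
print(np.isclose(np.linalg.det(S), -d))        # True

# Multiply row 2 by c = 3: the determinant is multiplied by 3.
M = A.copy()
M[2] *= 3.0
print(np.isclose(np.linalg.det(M), 3.0 * d))   # True

# Row replacement R0 -> R0 + 2*R1: the determinant is unchanged.
R = A.copy()
R[0] += 2.0 * R[1]
print(np.isclose(np.linalg.det(R), d))         # True
```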
Determinant of triangular matrices
A matrix is upper triangular if all entries below the main diagonal are zero, and lower triangular if all entries above the main diagonal are zero.
If $A$ is an $n\times n$ upper or lower triangular matrix, then the determinant is just the product of the diagonal entries:
$$
\det(A) = a_{11} a_{22} \cdots a_{nn}.
$$
As a special case, a diagonal matrix (all off‑diagonal entries zero) also has determinant equal to the product of its diagonal entries.
This is especially useful because, through row operations, you can often reduce a matrix to a triangular form and then read off the determinant (adjusting for the known effect of the row operations you used).
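As an illustration of that strategy, the sketch below reduces an example matrix to upper triangular form using only row replacements (which leave the determinant unchanged) and then reads off the product of the diagonal. The example is chosen so that every pivot is nonzero and no row swaps are needed:

```python
import numpy as np

A = np.array([[ 2.0, 1.0, 1.0],
              [ 4.0, 1.0, 0.0],
              [-2.0, 2.0, 1.0]])

# Eliminate below the diagonal using row replacements only,
# assuming each pivot encountered is nonzero (true for this example).
U = A.copy()
U[1] -= (U[1, 0] / U[0, 0]) * U[0]   # R1 -> R1 - 2*R0
U[2] -= (U[2, 0] / U[0, 0]) * U[0]   # R2 -> R2 + R0
U[2] -= (U[2, 1] / U[1, 1]) * U[1]   # R2 -> R2 + 3*R1

# The determinant is now the product of the diagonal entries.
print(np.isclose(np.prod(np.diag(U)), np.linalg.det(A)))  # True (both are 8)
```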
Determinant and invertibility
For a square matrix $A$, the determinant gives a precise condition for invertibility:
- $A$ is invertible (has an inverse matrix $A^{-1}$) if and only if $\det(A) \neq 0$.
- $A$ is singular (non‑invertible) if and only if $\det(A) = 0$.
Related facts:
- If $A$ is invertible, then the determinant of its inverse is
$$
\det(A^{-1}) = \frac{1}{\det(A)}.
$$
- If $A$ has a row (or column) that is a linear combination of the others, then $\det(A) = 0$. Equivalently, if two rows (or columns) are equal or proportional, the determinant is zero. This connects the determinant to linear independence of rows/columns.
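A small numeric illustration of both facts, with arbitrary example matrices:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
print(np.isclose(np.linalg.det(A), 10.0))  # True: invertible since det != 0

# det(A^{-1}) = 1 / det(A)
print(np.isclose(np.linalg.det(np.linalg.inv(A)),
                 1.0 / np.linalg.det(A)))  # True

# A singular example: the second row is twice the first.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.isclose(np.linalg.det(S), 0.0))   # True: rows are proportional
```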
Multiplicative property
The determinant of a product of square matrices is the product of their determinants. If $A$ and $B$ are both $n\times n$ matrices, then
$$
\det(AB) = \det(A)\,\det(B).
$$
This has several immediate consequences:
- For any positive integer $k$,
$$
\det(A^k) = \bigl(\det(A)\bigr)^k.
$$
- If $A$ is invertible,
$$
1 = \det(I) = \det(AA^{-1}) = \det(A)\,\det(A^{-1}),
$$
which is another way to see $\det(A^{-1}) = 1/\det(A)$.
The multiplicative property is one of the most important algebraic features of determinants, and it underlies many results in linear algebra.
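Both identities are easy to spot-check on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) = det(A) det(B)
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))        # True

# det(A^k) = det(A)^k, e.g. k = 3
print(np.isclose(np.linalg.det(np.linalg.matrix_power(A, 3)),
                 np.linalg.det(A) ** 3))                      # True
```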
Determinant of the transpose
The determinant is unchanged when you transpose a matrix. For any $n\times n$ matrix $A$,
$$
\det(A^T) = \det(A).
$$
Because of this, any property stated in terms of rows has a corresponding version in terms of columns, and vice versa. For example, having linearly independent rows is equivalent to having linearly independent columns, since both are equivalent to $\det(A)\neq 0$.
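A one-line check on an arbitrary example:

```python
import numpy as np

A = np.array([[1.0, 4.0, 2.0],
              [0.0, 3.0, 5.0],
              [7.0, 1.0, 6.0]])

# The determinant is invariant under transposition.
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))  # True
```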
Behavior under scalar multiplication
If $c$ is a scalar and $A$ is an $n\times n$ matrix, then multiplying every entry of $A$ by $c$ multiplies the determinant by $c^n$:
$$
\det(cA) = c^n \det(A).
$$
Reason: multiplying the entire matrix by $c$ is the same as multiplying each of the $n$ rows by $c$, and each such operation multiplies the determinant by $c$.
This is different from multiplying a single row by $c$, which multiplies the determinant only by $c$.
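A quick check that the factor really is $c^n$ and not $c$:

```python
import numpy as np

n, c = 3, 2.0
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])

# det(cA) = c^n det(A): here the factor is 2^3 = 8, not 2.
print(np.isclose(np.linalg.det(c * A), c**n * np.linalg.det(A)))  # True
```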
Determinant and row/column structure
Using linearity and the effect of operations, we get several useful structural properties.
Zero determinant from repeated or dependent rows/columns
- If two rows (or two columns) of $A$ are equal, then
$$
\det(A) = 0.
$$
Swapping the two equal rows changes the sign of the determinant but leaves the matrix unchanged, so $\det(A) = -\det(A)$, forcing $\det(A) = 0$.
- More generally, if one row is a linear combination of other rows, then the determinant is $0$. The same holds for columns.
This is another expression of the link between determinants and linear dependence.
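For instance, in the sketch below the third row is deliberately built as a linear combination of the first two:

```python
import numpy as np

# Row 2 equals 2*row 0 + row 1, so the rows are linearly dependent.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [2.0, 5.0, 7.0]])
print(np.isclose(np.linalg.det(A), 0.0))  # True
```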
Additivity with respect to row (or column) sums
If a matrix has a row that is a sum of two vectors, you can split the determinant into a sum of determinants:
Suppose the $i$‑th row of $A$ is $r_i = u + v$. Then
$$
\det(A) =
\det(\text{matrix with row } i = u)
+
\det(\text{matrix with row } i = v),
$$
with all other rows left as in $A$.
This is just the linearity property specialized to a single row, but it is often used to decompose determinants into simpler pieces.
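Here is the sum rule on a concrete example, splitting one row as $u + v$:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 2.0],   # this row equals u + v below
              [0.0, 1.0, 4.0]])
i = 1
u = np.array([1.0, 1.0, 0.0])
v = np.array([0.0, 2.0, 2.0])   # u + v == A[i]

# Splitting row i gives two determinants that add up to det(A).
Au, Av = A.copy(), A.copy()
Au[i], Av[i] = u, v
print(np.isclose(np.linalg.det(A),
                 np.linalg.det(Au) + np.linalg.det(Av)))  # True
```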
Cofactor expansion invariance
Cofactor expansion (also called Laplace expansion) expresses $\det(A)$ using minors and cofactors along a chosen row or column. A key property is:
- You may expand along any row or any column, and you obtain the same determinant.
The mechanics of cofactor expansion are covered in another chapter; here we only use the principle that the value of $\det(A)$ does not depend on which row or column you expand along.
This invariance is often used to choose a row or column with many zeros to simplify calculations.
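To illustrate the invariance, here is a minimal recursive implementation of cofactor expansion (the helper name det_by_cofactors is ours, purely for illustration); expanding along any of the three rows gives the same value:

```python
import numpy as np

def det_by_cofactors(A, row=0):
    """Cofactor (Laplace) expansion of det(A) along the given row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor: delete the chosen row and column j, then expand recursively.
        minor = np.delete(np.delete(A, row, axis=0), j, axis=1)
        total += (-1) ** (row + j) * A[row, j] * det_by_cofactors(minor)
    return total

A = np.array([[1.0, 0.0, 2.0],
              [3.0, 4.0, 0.0],
              [0.0, 5.0, 6.0]])

# Expansion along row 0, 1, or 2 yields the same determinant.
vals = [det_by_cofactors(A, row=r) for r in range(3)]
print(np.allclose(vals, np.linalg.det(A)))  # True
```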
Determinant and orthogonal (or orthonormal) matrices
If $Q$ is an $n\times n$ matrix whose columns form an orthonormal set (such a matrix is called an orthogonal matrix), then
$$
\det(Q) = \pm 1.
$$
Indeed, $Q^T Q = I$ gives $\det(Q)^2 = \det(Q^T)\,\det(Q) = \det(I) = 1$, using the transpose and multiplicative properties above. This means orthogonal transformations preserve lengths and areas/volumes up to a possible reflection (sign change). The sign of $\det(Q)$ distinguishes “orientation‑preserving” ($+1$) from “orientation‑reversing” ($-1$) transformations, though the geometric interpretation is treated elsewhere.
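For a concrete check: a plane rotation has determinant $+1$, while a reflection has determinant $-1$:

```python
import numpy as np

theta = 0.7  # an arbitrary rotation angle

# Rotation: orientation-preserving, det = +1.
Q_rot = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
# Reflection across the x-axis: orientation-reversing, det = -1.
Q_ref = np.array([[1.0,  0.0],
                  [0.0, -1.0]])

print(np.isclose(np.linalg.det(Q_rot),  1.0))  # True
print(np.isclose(np.linalg.det(Q_ref), -1.0))  # True
```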
Summary of main determinant properties
For an $n\times n$ matrix $A$:
- Linearity in each row/column separately.
- Effect of row operations:
- Swap two rows: determinant changes sign.
- Multiply a row by $c$: determinant multiplied by $c$.
- Add multiple of one row to another: determinant unchanged.
- Triangular/diagonal matrices: determinant is the product of diagonal entries.
- Invertibility: $\det(A)\neq 0$ iff $A$ is invertible; $\det(A^{-1}) = 1/\det(A)$.
- Multiplicativity: $\det(AB) = \det(A)\det(B)$.
- Transpose: $\det(A^T) = \det(A)$.
- Scalar multiple: $\det(cA) = c^n \det(A)$.
- Zero determinant if rows/columns are linearly dependent, equal, or contain a zero row/column.
- Cofactor expansion gives the same value along any row or column.
- Orthogonal matrices have determinant $+1$ or $-1$.
These properties will be repeatedly used in later chapters involving eigenvalues, eigenvectors, and diagonalization.