
Properties of determinants

Basic properties of determinants

In this chapter we collect the main algebraic rules that determinants satisfy. We will focus on square matrices over real numbers, but the properties hold more generally.

We will write $\det(A)$ or $|A|$ for the determinant of a square matrix $A$.

Linearity in a single row (or column)

A determinant is linear in each row separately when all other rows are fixed. Concretely, if $A$ is an $n\times n$ matrix and you look at, say, the $i$‑th row:

  1. If the $i$‑th row is multiplied by a scalar $c$ (all other rows unchanged), the determinant is multiplied by $c$.
  2. If the $i$‑th row is a sum of two vectors, $r_i = u + v$, the determinant is the sum of the determinants of the two matrices obtained by replacing the $i$‑th row with $u$ and with $v$, respectively.

Exactly the same statements hold if you talk about columns instead of rows: determinants are linear in each column when the others are fixed.

A key consequence: if a row is the zero row, then the determinant is $0$, since multiplying that row by $0$ multiplies the determinant by $0$.
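These two facts can be checked numerically. A quick NumPy sketch (the particular matrix is illustrative, not from the text):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Scaling a single row by c multiplies the determinant by c.
c = 5.0
B = A.copy()
B[1] = c * A[1]
assert np.isclose(np.linalg.det(B), c * np.linalg.det(A))

# A zero row forces the determinant to be 0 (take c = 0 above).
Z = A.copy()
Z[1] = 0.0
assert np.isclose(np.linalg.det(Z), 0.0)
```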

Effect of elementary row operations

Determinants interact in a simple way with the three elementary row operations used in Gaussian elimination.

Consider an $n\times n$ matrix $A$ and perform a single elementary row operation to obtain $B$.

  1. Swapping two rows $R_i \leftrightarrow R_j$:
    $$
    \det(B) = -\det(A).
    $$
    One row swap multiplies the determinant by $-1$. Two row swaps multiply by $(-1)^2 = 1$, so the overall sign change depends only on the parity (odd/even) of the number of swaps.
  2. Multiplying a row by a nonzero scalar $c$:
    $$
    \det(B) = c \,\det(A).
    $$
  3. Adding a multiple of one row to another:
    Replace $R_i$ by $R_i + c R_j$ (with $i\neq j$). Then
    $$
    \det(B) = \det(A).
    $$
    This “row replacement” operation does not change the determinant.

Again, each of these rules has a column version: perform the corresponding operation on columns instead of rows.

These relationships allow you to track how the determinant changes while doing row reduction, which is important for computation but also for theoretical arguments about when a determinant is zero or nonzero.
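All three effects can be verified directly; a small NumPy check (illustrative matrix, not from the text):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])
d = np.linalg.det(A)

# 1. Swapping two rows flips the sign.
B = A[[1, 0, 2]]
assert np.isclose(np.linalg.det(B), -d)

# 2. Multiplying a row by c multiplies the determinant by c.
C = A.copy()
C[2] *= 3.0
assert np.isclose(np.linalg.det(C), 3.0 * d)

# 3. Adding a multiple of one row to another leaves it unchanged.
D = A.copy()
D[1] += 2.0 * A[0]
assert np.isclose(np.linalg.det(D), d)
```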

Determinant of triangular matrices

A matrix is upper triangular if all entries below the main diagonal are zero, and lower triangular if all entries above the main diagonal are zero.

If $A$ is an $n\times n$ upper or lower triangular matrix, then the determinant is just the product of the diagonal entries:
$$
\det(A) = a_{11} a_{22} \cdots a_{nn}.
$$

As a special case, a diagonal matrix (all off‑diagonal entries zero) also has determinant equal to the product of its diagonal entries.

This is especially useful because, through row operations, you can often reduce a matrix to a triangular form and then read off the determinant (adjusting for the known effect of the row operations you used).
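The idea of reducing to triangular form while tracking the row operations can be sketched as code. This is a sketch only (no numerical pivoting safeguards beyond swapping in a nonzero pivot), and the helper name `det_by_elimination` is ours, not from the text:

```python
import numpy as np

def det_by_elimination(A):
    """Compute det(A) by reducing to upper triangular form,
    using only row swaps (each flips the sign) and row
    replacements (which leave the determinant unchanged)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    sign = 1.0
    for k in range(n):
        if A[k, k] == 0.0:
            nonzero = np.nonzero(A[k:, k])[0]
            if nonzero.size == 0:
                return 0.0  # no pivot available: determinant is 0
            j = k + nonzero[0]
            A[[k, j]] = A[[j, k]]   # row swap flips the sign
            sign = -sign
        for i in range(k + 1, n):
            # Row replacement R_i <- R_i - m R_k: determinant unchanged.
            A[i] -= (A[i, k] / A[k, k]) * A[k]
    # Now triangular: determinant is the product of the diagonal.
    return sign * np.prod(np.diag(A))

M = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [3.0, 0.0, 2.0]])
assert np.isclose(det_by_elimination(M), np.linalg.det(M))
```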

Determinant and invertibility

For a square matrix $A$, the determinant gives a precise condition for invertibility: $A$ is invertible if and only if $\det(A) \neq 0$. Equivalently, $A$ is singular exactly when $\det(A) = 0$.

Related facts:

  1. If $A$ is invertible, then $\det(A^{-1}) = \dfrac{1}{\det(A)}$.
  2. $\det(A) \neq 0$ is equivalent to the rows (and the columns) of $A$ being linearly independent.
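A quick numerical check of the invertibility criterion and the reciprocal rule for inverses (the matrices are illustrative):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
d = np.linalg.det(A)        # 4*6 - 7*2 = 10: nonzero, so A is invertible
assert not np.isclose(d, 0.0)
Ainv = np.linalg.inv(A)
assert np.isclose(np.linalg.det(Ainv), 1.0 / d)

# A singular example: the second row is twice the first.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert np.isclose(np.linalg.det(S), 0.0)
```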

Multiplicative property

The determinant of a product of square matrices is the product of their determinants. If $A$ and $B$ are both $n\times n$ matrices, then
$$
\det(AB) = \det(A)\,\det(B).
$$

This has several immediate consequences:

  1. $\det(AB) = \det(BA)$, even though $AB \neq BA$ in general.
  2. $\det(A^k) = \det(A)^k$ for any positive integer $k$.
  3. If $A$ is invertible, then $\det(A)\,\det(A^{-1}) = \det(I) = 1$, so $\det(A^{-1}) = 1/\det(A)$.

The multiplicative property is one of the most important algebraic features of determinants, and it underlies many results in linear algebra.
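The multiplicative property and its consequences are easy to confirm on random matrices; a NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# det(A^k) = det(A)^k follows by repeated application.
assert np.isclose(np.linalg.det(A @ A @ A), np.linalg.det(A) ** 3)
```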

Determinant of the transpose

The determinant is unchanged when you transpose a matrix. For any $n\times n$ matrix $A$,
$$
\det(A^T) = \det(A).
$$

Because of this, any property stated in terms of rows has a corresponding version in terms of columns, and vice versa. For example, having linearly independent rows is equivalent to having linearly independent columns, since both are equivalent to $\det(A)\neq 0$.
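A one-line numerical check of the transpose rule (illustrative matrix):

```python
import numpy as np

A = np.array([[1.0, 4.0, 2.0],
              [3.0, 0.0, 5.0],
              [2.0, 1.0, 1.0]])
# Transposing leaves the determinant unchanged.
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))
```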

Behavior under scalar multiplication

If $c$ is a scalar and $A$ is an $n\times n$ matrix, then multiplying every entry of $A$ by $c$ multiplies the determinant by $c^n$:
$$
\det(cA) = c^n \det(A).
$$

Reason: multiplying the entire matrix by $c$ is the same as multiplying each of the $n$ rows by $c$, and each such operation multiplies the determinant by $c$.

This is different from multiplying a single row by $c$, which multiplies the determinant only by $c$.
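The contrast between scaling the whole matrix and scaling one row shows up clearly in a numerical check (illustrative $2\times 2$ matrix, so $\det(cA) = c^2\det(A)$):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 5.0]])
c = 3.0

# Scaling the whole n x n matrix multiplies the determinant by c^n.
assert np.isclose(np.linalg.det(c * A), c**2 * np.linalg.det(A))

# Scaling a single row multiplies the determinant by c only.
B = A.copy()
B[0] *= c
assert np.isclose(np.linalg.det(B), c * np.linalg.det(A))
```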

Determinant and row/column structure

Using linearity and the effect of operations, we get several useful structural properties.

Zero determinant from repeated or dependent rows/columns

  1. If two rows (or two columns) of $A$ are equal, then $\det(A) = 0$: swapping the two equal rows flips the sign of the determinant but leaves the matrix unchanged, so $\det(A) = -\det(A)$.
  2. More generally, if one row is a scalar multiple of another, or a linear combination of the other rows, then $\det(A) = 0$. The same holds for columns.

This is another expression of the link between determinants and linear dependence.
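Both cases can be checked directly; a NumPy sketch with illustrative matrices:

```python
import numpy as np

# Two equal rows: determinant is 0.
A = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0]])
assert np.isclose(np.linalg.det(A), 0.0)

# A row that is a linear combination of the others: row 3 = 2*row1 + 3*row2.
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [2.0, 3.0, 7.0]])
assert np.isclose(np.linalg.det(B), 0.0)
```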

Additivity with respect to row (or column) sums

If a matrix has a row that is a sum of two vectors, you can split the determinant into a sum of determinants:

Suppose the $i$‑th row of $A$ is $r_i = u + v$. Then
$$
\det(A) =
\det(\text{matrix with row } i = u)
+
\det(\text{matrix with row } i = v),
$$
with all other rows left as in $A$.

This is just the linearity property specialized to a single row, but it is often used to decompose determinants into simpler pieces.
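The splitting rule can be verified numerically. In this sketch, `u`, `v`, and the surrounding rows are arbitrary illustrative vectors:

```python
import numpy as np

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 3.0, 1.0])
top = np.array([2.0, 1.0, 0.0])
bottom = np.array([1.0, 1.0, 4.0])

A  = np.array([top, u + v, bottom])   # middle row is u + v
Au = np.array([top, u,     bottom])   # middle row replaced by u
Av = np.array([top, v,     bottom])   # middle row replaced by v

assert np.isclose(np.linalg.det(A), np.linalg.det(Au) + np.linalg.det(Av))
```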

Cofactor expansion invariance

Cofactor expansion (also called Laplace expansion) expresses $\det(A)$ using minors and cofactors along a chosen row or column. Although the method itself belongs to another chapter, we use here just its key property: the value of $\det(A)$ is independent of which row or column you use for the expansion.

This invariance is often used to choose a row or column with many zeros to simplify calculations.
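The invariance can be demonstrated with a small recursive implementation. This is a sketch for illustration (the helper name `cofactor_expand` is ours), not an efficient algorithm:

```python
import numpy as np

def cofactor_expand(A, row):
    """Laplace expansion of det(A) along the given row (recursive sketch)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor: delete the chosen row and column j.
        minor = np.delete(np.delete(A, row, axis=0), j, axis=1)
        total += (-1) ** (row + j) * A[row, j] * cofactor_expand(minor, 0)
    return total

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 4.0, 5.0]])

# Expanding along any row gives the same value.
vals = [cofactor_expand(A, r) for r in range(3)]
assert np.allclose(vals, vals[0])
assert np.isclose(vals[0], np.linalg.det(A))
```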

Determinant and orthogonal (or orthonormal) matrices

If $Q$ is an $n\times n$ matrix whose columns form an orthonormal set (orthogonal matrix), then
$$
\det(Q) = \pm 1.
$$

This means orthogonal transformations preserve lengths and areas/volumes up to possible reflection (sign change). The sign of $\det(Q)$ distinguishes between “orientation‑preserving” ($+1$) and “orientation‑reversing” ($-1$) transformations, though the geometric interpretation is treated elsewhere.
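Both signs occur for familiar orthogonal matrices; a quick NumPy check with a rotation and a reflection:

```python
import numpy as np

theta = 0.7  # an arbitrary angle

# Rotation: orientation-preserving, det = +1.
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.isclose(np.linalg.det(R), 1.0)

# Reflection across the x-axis: orientation-reversing, det = -1.
F = np.array([[1.0,  0.0],
              [0.0, -1.0]])
assert np.isclose(np.linalg.det(F), -1.0)
```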

Summary of main determinant properties

For an $n\times n$ matrix $A$:

  1. Swapping two rows (or columns) multiplies $\det(A)$ by $-1$.
  2. Multiplying a single row (or column) by $c$ multiplies $\det(A)$ by $c$; scaling the whole matrix gives $\det(cA) = c^n \det(A)$.
  3. Adding a multiple of one row to another leaves $\det(A)$ unchanged.
  4. If $A$ is triangular (or diagonal), $\det(A)$ is the product of the diagonal entries.
  5. $A$ is invertible if and only if $\det(A) \neq 0$.
  6. $\det(AB) = \det(A)\,\det(B)$ and $\det(A^T) = \det(A)$.
  7. Repeated or linearly dependent rows/columns force $\det(A) = 0$.
  8. If $Q$ is orthogonal, then $\det(Q) = \pm 1$.

These properties will be repeatedly used in later chapters involving eigenvalues, eigenvectors, and diagonalization.
