1.6 — Determinants: Geometric Meaning

Date: 2026-03-01 | Block: 1 — Linear Algebra

The idea in plain English

The determinant is a single number that answers: "by what factor does this transformation scale area?" If you put a 1×1 square through the transformation, the determinant tells you the area of the resulting shape. It also tells you whether the transformation flips space over.

The intuition

Take a unit square — a 1×1 square sitting at the origin. Apply a matrix transformation to it. The square becomes a parallelogram (it gets squished and stretched). The area of that parallelogram is |det(A)|.

Before:         After (det=6):        After (det=0):
*────*           *──────────*          * (just a point or line)
|    |    →      /          /    →
*────*          *──────────*
area = 1         area = 6              area = 0

The sign of the determinant tells you about orientation:

- Positive: space kept its "handedness" (like a normal scaling or rotation)
- Negative: space got flipped (like a reflection — a clock would run backwards)
- Zero: space got completely squashed to a lower dimension — information was lost

The math

For a 2×2 matrix:

A = [ a  b ]
    [ c  d ]

det(A) = ad − bc

This formula computes the signed area of the parallelogram formed by the two column vectors.
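A quick numerical check of the formula, sketched with NumPy (the matrix here is an arbitrary example, not one from the text):

```python
import numpy as np

# An arbitrary 2x2 example matrix
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# det(A) = ad - bc, computed by hand...
ad_minus_bc = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]  # 3*2 - 1*1 = 5

# ...and by NumPy; both give the signed area of the parallelogram
# spanned by the column vectors (3, 1) and (1, 2)
print(ad_minus_bc)        # 5.0
print(np.linalg.det(A))   # ~5.0 (up to floating point)
```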

Three key properties:

det(A·B) = det(A) · det(B)    ← composing transforms multiplies scale factors
det(I)   = 1                  ← identity does nothing, scales by 1
det(A⁻¹) = 1 / det(A)         ← inverse undoes the scaling (only if det(A) ≠ 0)
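All three properties are easy to sanity-check numerically; a minimal sketch, assuming NumPy (random matrices like these are invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Composing transforms multiplies scale factors
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# The identity scales volume by 1
assert np.isclose(np.linalg.det(np.eye(3)), 1.0)

# The inverse undoes the scaling
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A))
```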

A worked example

A = [ 3  0 ]   det = 3·2 − 0·0 = 6
    [ 0  2 ]

This matrix stretches 3× horizontally and 2× vertically. Area scales by 3×2 = 6. ✓

A = [ 1  2 ]   det = 1·4 − 2·2 = 0
    [ 2  4 ]

From Topic 1.5, this matrix is rank 1 — it squashes the plane to a line. A line has zero area. det = 0. ✓
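Both worked examples can be confirmed in a few lines (a sketch assuming NumPy):

```python
import numpy as np

# Stretch 3x horizontally, 2x vertically -> area scales by 6
stretch = np.array([[3.0, 0.0],
                    [0.0, 2.0]])
print(np.linalg.det(stretch))        # 6.0

# The rank-1 matrix from Topic 1.5 -> squashes the plane to a line
rank1 = np.array([[1.0, 2.0],
                  [2.0, 4.0]])
print(np.linalg.det(rank1))          # ~0.0
print(np.linalg.matrix_rank(rank1))  # 1
```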

The grand unification (det = 0)

All of these say exactly the same thing — if you know one, you know all:

det(A) = 0
  ⟺  matrix is not invertible (singular)
  ⟺  rank < n
  ⟺  null space is non-trivial
  ⟺  columns are linearly dependent
  ⟺  transformation squashes to lower dimension
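The chain of equivalences can be observed directly on the rank-1 matrix from the worked example (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# det(A) = 0 ...
assert abs(np.linalg.det(A)) < 1e-12

# ... so rank < n ...
assert np.linalg.matrix_rank(A) < 2

# ... the columns are linearly dependent (col2 = 2 * col1) ...
assert np.allclose(A[:, 1], 2 * A[:, 0])

# ... and the null space is non-trivial: A maps (2, -1) to zero
assert np.allclose(A @ np.array([2.0, -1.0]), 0.0)
```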

Why this matters for ML

Invertibility check: to solve linear regression analytically, you need to invert XᵀX. If det(XᵀX) = 0, it's singular and can't be inverted — your features are linearly dependent (multicollinear).
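A sketch of that check, assuming NumPy (the design matrix is made up, with its third column deliberately duplicating the first):

```python
import numpy as np

# Hypothetical design matrix: feature 3 duplicates feature 1
X = np.array([[1.0, 2.0, 1.0],
              [2.0, 0.0, 2.0],
              [3.0, 1.0, 3.0],
              [4.0, 5.0, 4.0]])

XtX = X.T @ X
print(np.linalg.det(XtX))   # ~0: X^T X is singular

# Inverting XtX here is numerically meaningless; the pseudo-inverse
# (np.linalg.pinv) or np.linalg.lstsq is the usual fallback when
# features are multicollinear.
```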

Normalising flows are generative models that transform simple distributions (like Gaussians) into complex ones. At each step, the probability density changes by 1/|det(Jacobian)|. The determinant tracks how probability mass spreads or concentrates — this is the entire mathematical foundation of flow-based models.
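A toy version of that change-of-variables step, using an assumed one-step affine "flow" x = A·z (real flows use learned invertible maps, but the determinant plays exactly the same role):

```python
import numpy as np

# Base distribution: standard 2D Gaussian
def base_density(z):
    return np.exp(-0.5 * z @ z) / (2 * np.pi)

# Assumed affine flow x = A z; det(A) = 6, so the map spreads
# volume by 6 and the density must shrink by the same factor
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
det_A = np.linalg.det(A)

x = np.array([1.0, 1.0])
z = np.linalg.solve(A, x)            # invert the flow: z = A^-1 x
p_x = base_density(z) / abs(det_A)   # change of variables
```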

Rotations always have det = 1 — they preserve area perfectly. This is why rotation matrices are special in ML: they transform data without distorting distances.
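A quick check that a rotation has det = 1 and preserves lengths (a sketch assuming NumPy; the angle is arbitrary):

```python
import numpy as np

theta = 0.7  # arbitrary angle in radians
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.linalg.det(R))   # ~1.0: area is preserved

# Rotations also preserve distances (R is orthogonal: R.T @ R = I)
v = np.array([3.0, 4.0])
assert np.isclose(np.linalg.norm(R @ v), np.linalg.norm(v))
```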

The one thing to remember

det(A) = how much the transformation scales area. Zero means space collapsed — information was destroyed.
