Linear Algebra and its Applications Chapter 3 - Determinants
by Arpon Sarker
Introduction
A determinant is a number assigned to a square array of numbers in a certain way. The idea was considered as early as 1683 by the Japanese mathematician Seki Takakazu and, independently, about ten years later by Gottfried Leibniz. This was roughly 160 years before a separate theory of matrices was developed. Gabriel Cramer and Augustin-Louis Cauchy used determinants in analytical geometry.
Introduction to Determinants
The first definition of the determinant comes from the condition for a square matrix $A$ to be invertible, and it is derived by doing row reduction on a generic matrix (assuming $a_{11} \neq 0$). (GO THROUGH THIS)
\[\begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{bmatrix} \\ \sim \begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{11}a_{21} & a_{11}a_{22} & a_{11}a_{23} \\ a_{11}a_{31} & a_{11}a_{32} & a_{11}a_{33}\end{bmatrix} \\ \sim \begin{bmatrix}a_{11} & a_{12} & a_{13} \\ 0 & a_{11}a_{22}-a_{12}a_{21} & a_{11}a_{23}-a_{13}a_{21} \\ 0 & a_{11}a_{32}-a_{12}a_{31} & a_{11}a_{33}-a_{13}a_{31}\end{bmatrix} \\ \sim \begin{bmatrix}a_{11} & a_{12} & a_{13} \\ 0 & a_{11}a_{22}-a_{12}a_{21} & a_{11}a_{23}-a_{13}a_{21} \\ 0 & 0 & a_{11}\Delta\end{bmatrix}\]
Rows 2 and 3 are first scaled by $a_{11}$, then suitable multiples of row 1 (and afterwards of row 2) are subtracted to create zeros below the pivots; the $(3,3)$ entry works out to $a_{11}\Delta$. We have the formula for the determinant!
\[\Delta = a_{11}a_{22}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}-a_{11}a_{23}a_{32}-a_{12}a_{21}a_{33}-a_{13}a_{22}a_{31}\]
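As a quick check of the formula (a numeric example not in the original notes), take
\[A = \begin{bmatrix}1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10\end{bmatrix}, \qquad \Delta = 1\cdot 5\cdot 10 + 2\cdot 6\cdot 7 + 3\cdot 4\cdot 8 - 1\cdot 6\cdot 8 - 2\cdot 4\cdot 10 - 3\cdot 5\cdot 7 = 230 - 233 = -3\]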
(TRY IT FOR THE 2x2 CASE)
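Carrying out the same reduction for a generic 2 x 2 matrix (again assuming $a_{11}\neq 0$) gives
\[\begin{bmatrix}a_{11} & a_{12} \\ a_{21} & a_{22}\end{bmatrix} \sim \begin{bmatrix}a_{11} & a_{12} \\ a_{11}a_{21} & a_{11}a_{22}\end{bmatrix} \sim \begin{bmatrix}a_{11} & a_{12} \\ 0 & a_{11}a_{22}-a_{12}a_{21}\end{bmatrix}\]
so in the 2 x 2 case $\Delta = a_{11}a_{22}-a_{12}a_{21}$.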
If $A$ is invertible, $\Delta$ must be nonzero. The converse is also true (this is Theorem 4 below).
Properties of Determinants
\[\textrm{det}\space A = \begin{cases}(-1)^r \cdot (\textrm{product of pivots in } U) = (-1)^r\,\textrm{det}\space U, & \textrm{when } A \textrm{ is invertible} \\ 0, & \textrm{when } A \textrm{ is not invertible}\end{cases}\]
Here $U$ is an echelon form of $A$ obtained by row replacements and $r$ row interchanges. Once $A$ is reduced to $U$, the matrix is triangular, so its determinant is just the product of the diagonal entries (the pivots). If $A$ is not invertible, at least one diagonal entry of $U$ is zero, so the product is zero.
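Below is a minimal Python sketch of this method (my own illustration, not from the notes; the function name `det_by_row_reduction` is made up). It reduces a copy of the matrix to echelon form with partial pivoting, counts the row interchanges $r$, and multiplies the pivots.

```python
def det_by_row_reduction(A):
    """Compute det A as (-1)^r times the product of the pivots
    of an echelon form U, where r counts row interchanges."""
    U = [row[:] for row in A]          # work on a copy
    n = len(U)
    r = 0                              # number of row interchanges
    det = 1.0
    for k in range(n):
        # partial pivoting: pick the largest entry in column k at or below row k
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        if abs(U[p][k]) < 1e-12:       # no pivot in this column -> not invertible
            return 0.0
        if p != k:
            U[k], U[p] = U[p], U[k]    # row interchange
            r += 1
        for i in range(k + 1, n):      # create zeros below the pivot
            m = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
        det *= U[k][k]                 # accumulate the product of pivots
    return (-1) ** r * det

print(det_by_row_reduction([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # about -3.0
```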
Cofactor expansion is handy for sparse matrices: expand across a row or down a column that is mostly zeros, so most terms vanish.
Cramer’s Rule, Volume, and Linear Transformations
Cramer’s rule can be used to study how the solution of $A\textbf{x} = \textbf{b}$ is affected by changes in the entries of $\textbf{b}$. The formula is inefficient for hand calculations.
Definitions
Determinant:
For $n\geq 2$, the determinant of an n x n matrix $A = [a_{ij}]$ is the sum of $n$ terms of the form $\pm a_{1j} \textrm{det}\space A_{1j}$, where $A_{1j}$ is the submatrix formed by deleting row 1 and column $j$ of $A$ and the plus and minus signs alternate: \(\textrm{det}\space A = \sum_{j=1}^n(-1)^{1+j}a_{1j}\textrm{det}\space A_{1j}\)
$(i,j)$-cofactor:
$C_{ij}=(-1)^{i+j}\textrm{det}\space A_{ij}$, where $A_{ij}$ is the submatrix formed by deleting row $i$ and column $j$ of $A$
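With this notation the defining formula can be written as the cofactor expansion across the first row:
\[\textrm{det}\space A = a_{11}C_{11} + a_{12}C_{12} + \cdots + a_{1n}C_{1n}\]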
Theorems
Theorem 1:
The determinant of an n x n matrix $A$ can be computed by a cofactor expansion across any row or down any column.
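Explicitly, the expansion across row $i$ and the expansion down column $j$ are
\[\textrm{det}\space A = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in} = a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj}\]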
(PROOF?)
Theorem 2:
If $A$ is a triangular matrix, then $\textrm{det}\space A$ is the product of entries on the main diagonal of $A$.
(TRIVIAL TO PROVE)
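For example (a small illustration, not from the notes), expanding down the first column repeatedly picks out the diagonal entries:
\[\textrm{det}\begin{bmatrix}2 & 5 & -1 \\ 0 & 3 & 4 \\ 0 & 0 & 7\end{bmatrix} = 2\cdot\textrm{det}\begin{bmatrix}3 & 4 \\ 0 & 7\end{bmatrix} = 2\cdot 3\cdot 7 = 42\]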
Theorem 3: Row Operations
Let $A$ be a square matrix.
a. If a multiple of one row is added to another row to produce a matrix $B$, then $\textrm{det}\space B = \textrm{det}\space A$
b. If two rows of $A$ are interchanged to produce $B$, then $\textrm{det}\space B = - \textrm{det}\space A$
c. If one row of $A$ is multiplied by $k$ to produce $B$, then $\textrm{det}\space B = k\cdot \textrm{det}\space A$
(PROOF alongside thm. 6)
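A quick numerical check of the three properties (my own sketch using NumPy on a random test matrix, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# a. row replacement: add 2 * (row 0) to row 2 -> determinant unchanged
B = A.copy()
B[2] += 2 * B[0]
print(np.isclose(np.linalg.det(B), np.linalg.det(A)))       # True

# b. row interchange: swap rows 0 and 3 -> determinant changes sign
C = A.copy()
C[[0, 3]] = C[[3, 0]]
print(np.isclose(np.linalg.det(C), -np.linalg.det(A)))      # True

# c. row scaling: multiply row 1 by k -> determinant is multiplied by k
k = 5.0
D = A.copy()
D[1] *= k
print(np.isclose(np.linalg.det(D), k * np.linalg.det(A)))   # True
```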
Theorem 4:
A square matrix $A$ is invertible if and only if $\textrm{det}\space A \neq 0$
Theorem 5:
If $A$ is an n x n matrix, then $\textrm{det}\space A^T = \textrm{det}\space A$
By this theorem, every statement about rows (such as the row operations in Theorem 3) remains true when the word "row" is replaced by "column".
(PROOF)
Theorem 6: Multiplicative Property
If $A$ and $B$ are n x n matrices, then $\textrm{det}\space AB = (\textrm{det}\space A)(\textrm{det}\space B)$
(PROOF)
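A one-line numerical check (again my own sketch with NumPy, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
# det(AB) should equal det(A) * det(B)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
```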
Theorem 7: Cramer’s Rule
Let $A$ be an invertible n x n matrix. For any $\textbf{b}\in\R^n$, the unique solution $\textbf{x}$ of $A\textbf{x}=\textbf{b}$ has entries given by
\(x_i = \frac{\textrm{det}\space A_i(\textbf{b})}{\textrm{det}\space A}, \quad i=1,2,\ldots,n\)
where $A_i(\textbf{b})$ is the matrix obtained from $A$ by replacing column $i$ with the vector $\textbf{b}$.
(PROOF)
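A short NumPy sketch of Cramer's rule (my own illustration; the helper name `cramer_solve` is made up), checked against `np.linalg.solve`:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b for invertible A via x_i = det(A_i(b)) / det(A),
    where A_i(b) is A with column i replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    detA = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                    # replace column i with b
        x[i] = np.linalg.det(Ai) / detA
    return x

A = [[3, -2], [-5, 4]]
b = [6, 8]
print(cramer_solve(A, b))               # about [20. 27.]
print(np.linalg.solve(A, b))            # about [20. 27.]
```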
Theorem 8: An Inverse Formula
Let $A$ be an invertible n x n matrix. Then \(A^{-1} = \frac{1}{\textrm{det}\space A}\textrm{adj}\space A\) where $\textrm{adj}\space A = [C_{ji}]$ is the adjugate (classical adjoint) of $A$: the transpose of the matrix of cofactors.
(WHY?)
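In the 2 x 2 case this reduces to the familiar inverse formula:
\[\begin{bmatrix}a & b \\ c & d\end{bmatrix}^{-1} = \frac{1}{ad-bc}\begin{bmatrix}d & -b \\ -c & a\end{bmatrix}, \quad ad-bc \neq 0\]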
Theorem 9:
If $A$ is a 2 x 2 matrix, the area of the parallelogram determined by the columns of $A$ is $|\textrm{det}\space A|$. If $A$ is a 3 x 3 matrix, the volume of the parallelepiped determined by the columns of $A$ is $|\textrm{det}\space A|$.
(PROOF)
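For example (a small illustration, not from the notes), the parallelogram determined by the columns of $\begin{bmatrix}2 & 1 \\ 1 & 3\end{bmatrix}$ has area $|2\cdot 3 - 1\cdot 1| = 5$.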
Theorem 10:
Let $T: \R^2 \rightarrow \R^2$ be the linear transformation determined by a 2 x 2 matrix $A$. If $S$ is a parallelogram in $\R^2$, then \(\{\textrm{area of }T(S)\} = |\det A|\cdot\{\textrm{area of }S\}\) The same holds in 3 dimensions with volumes in place of areas.
(PROOF)
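As a simple check (my own example), the matrix $A=\begin{bmatrix}2 & 0 \\ 0 & 3\end{bmatrix}$ maps the unit square $S$ (area 1) onto a 2-by-3 rectangle, whose area is $6 = |\det A|\cdot 1$.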
Applications
- Analytical Geometry
- Laplace Transforms (used in electrical engineering and control theory)
Numerical Notes
- It would be impractical to calculate a 25 x 25 determinant by cofactor expansion: the expansion takes roughly $n!$ multiplications, and $25! \approx 1.5\times 10^{25}$, so faster methods are required. (See the next note.)
- Most computer programs compute the determinant using the row-reduction method from Properties of Determinants above. Evaluating an n x n determinant by row operations takes about $2n^3/3$ arithmetic operations.
- The formula in Theorem 8 is useful for theoretical purposes, such as deducing properties of the inverse without computing it. However, if the inverse is actually needed, row reduction is much more efficient.