



The Inverse of a Matrix
Section 2.2
The Basics

Group 7:
Jordon
Andrew
Jason

Linear algebra is one of the most powerful and useful math courses you can take in your academic career. Resting at the heart of this course is something called a matrix. A matrix is a rectangular array of numbers organized in rows and columns and enclosed in brackets. Matrices are very useful because they can be easily manipulated. In this page we will explore how to find the inverse of a matrix and its uses.
Much like ordinary numbers, we use the notation A^{-1} to denote the inverse of matrix A. Some important things to remember about inverse matrices: matrix multiplication is not commutative, and a full generalization of the inverse is possible only if the matrices you are using are square (meaning they have the same number of rows and columns, an n x n matrix).

3 x 3 Identity Matrix 
An n x n matrix A is said to be invertible if there is an n x n matrix C such that CA = I and AC = I, where I is the n x n identity matrix. An identity matrix is a square matrix with ones on the diagonal and zeros elsewhere.
Basically, A^{-1} A = I and A A^{-1} = I, where A is an invertible matrix and A^{-1} is the inverse of A. A matrix is said to be a singular matrix if it is noninvertible, and a nonsingular matrix if it is invertible.

A simple formula for finding the inverse of a 2 x 2 matrix is given by Theorem 4: if

A = [ a  b ]
    [ c  d ]

and ad - bc does not equal 0, then A is invertible and

A^{-1} = 1/(ad - bc) [  d  -b ]
                     [ -c   a ]

We call the quantity ad - bc the determinant of the matrix (det A = ad - bc). A 2 x 2 matrix is invertible if and only if (iff) its determinant does not equal 0.
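To make Theorem 4 concrete, here is a small sketch in Python (the sample entries and the helper name inverse_2x2 are our own illustration, not from the text), using exact fractions to avoid round-off:

```python
# A minimal sketch of Theorem 4 for a hypothetical 2 x 2 matrix.
from fractions import Fraction

def inverse_2x2(a, b, c, d):
    """Return the inverse of [[a, b], [c, d]], or None if det A = 0."""
    det = a * d - b * c          # det A = ad - bc
    if det == 0:
        return None              # singular: no inverse exists
    f = Fraction(1, 1) / det     # scale factor 1/(ad - bc)
    return [[ d * f, -b * f],
            [-c * f,  a * f]]

# Example: A = [[4, 7], [2, 6]], det A = 4*6 - 7*2 = 10
print(inverse_2x2(4, 7, 2, 6))   # [[3/5, -7/10], [-1/5, 2/5]]
print(inverse_2x2(1, 2, 2, 4))   # None: det = 0, so A is singular
```

Note how a zero determinant makes the scale factor 1/(ad - bc) undefined, which is exactly why invertibility requires det A to be nonzero.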


Theorem 5 reveals something else useful about the inverse of matrices. Theorem 5 states that if matrix A is invertible, then the equation Ax = b has a unique solution x, which we can find by x = A^{-1} b. The following example demonstrates the usefulness of this equation. The following proof establishes Theorem 5 by proving 1) that the solution exists and 2) that this solution is unique.


1) Let x = A^{-1} b, and plug this in to check: Ax = A(A^{-1} b) = Ib = b, so a solution exists.
2) We will prove this portion of Theorem 5 by contradiction. Remember that A(x - y) = Ax - Ay. Assume there are two different solutions x and y, with x not equal to y, such that Ax = b and Ay = b. Then Ax - Ay = 0, so A(x - y) = 0. Multiplying by A^{-1} gives A^{-1} 0 = A^{-1} A(x - y) = I(x - y) = x - y, so x - y = 0 and therefore x = y. However, we said in the beginning that x does not equal y; this is a contradiction.
Example of solving Ax=b by using the inverse of A:
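With hypothetical numbers, the computation might look like this sketch (the system and the helper name solve_2x2 are our own, chosen for illustration):

```python
# A sketch of Theorem 5: solve Ax = b by computing x = A^{-1} b
# for a 2 x 2 system, using exact fractions.
from fractions import Fraction

def solve_2x2(a, b, c, d, b1, b2):
    """Solve [[a, b], [c, d]] x = (b1, b2) via the inverse formula."""
    det = Fraction(a * d - b * c)
    if det == 0:
        raise ValueError("matrix is singular; Theorem 5 does not apply")
    # x = A^{-1} b, where A^{-1} = 1/det * [[d, -b], [-c, a]]
    x1 = ( d * b1 - b * b2) / det
    x2 = (-c * b1 + a * b2) / det
    return x1, x2

# Example system: 3x1 + 4x2 = 3,  5x1 + 6x2 = 7
x = solve_2x2(3, 4, 5, 6, 3, 7)
print(x[0], x[1])   # 5 -3
```

Plugging the result back in confirms uniqueness: 3(5) + 4(-3) = 3 and 5(5) + 6(-3) = 7.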


More useful properties of inverse matrices are revealed in Theorem 6. This theorem states:


If A is an invertible matrix, then A^{-1} is invertible and (A^{-1})^{-1} = A
If A and B are n x n invertible matrices, then so is AB, and the inverse of AB is the product of the inverses of A and B in the reverse order. (AB)^{-1} = B^{-1} A^{-1}
If A is an invertible matrix, then so is A^{T}, and the inverse of A^{T} is the transpose of A^{-1}. That is, (A^{T})^{-1} = (A^{-1})^{T}

Are you at all skeptical of Theorem 6? Well, in case you are, here are the proofs for each part.
To prove 1: we need to find a matrix C so that A^{-1} C = I and C A^{-1} = I. We already know that these equations hold if we put A in place of C (see above). Thus A^{-1} is invertible, and A is its inverse.
To prove 2: we first calculate (AB)(B^{-1} A^{-1}) = A(B B^{-1}) A^{-1} = A I A^{-1} = A A^{-1} = I. A similar calculation shows that (B^{-1} A^{-1})(AB) = I.
To prove 3: (A^{-1})^{T} A^{T} = (A A^{-1})^{T} = I^{T} = I, and likewise A^{T} (A^{-1})^{T} = (A^{-1} A)^{T} = I^{T} = I. Thus we have proved that A^{T} is invertible, and its inverse is (A^{-1})^{T}.
Also, it is useful to remember that the product of any number of n x n invertible matrices is invertible, and its inverse is the product of their inverses in the reverse order.
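The reverse-order rule in part 2 is easy to check numerically. Here is a small sketch with hypothetical 2 x 2 matrices (the entries and helper names are ours, not from the text):

```python
# Numerical check of Theorem 6(2): (AB)^{-1} = B^{-1} A^{-1}.
from fractions import Fraction

def matmul(X, Y):
    """2 x 2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(M):
    """2 x 2 inverse by Theorem 4 (assumes det M != 0)."""
    a, b = M[0]; c, d = M[1]
    f = Fraction(1, a * d - b * c)
    return [[d * f, -b * f], [-c * f, a * f]]

A = [[1, 2], [3, 5]]   # det A = -1
B = [[2, 0], [1, 1]]   # det B = 2
I = [[1, 0], [0, 1]]

lhs = inv(matmul(A, B))                 # (AB)^{-1}
rhs = matmul(inv(B), inv(A))            # B^{-1} A^{-1}, reverse order
print(lhs == rhs)                       # True
print(matmul(matmul(A, B), lhs) == I)   # True: (AB)(AB)^{-1} = I
```

Swapping the factors to inv(A) followed by inv(B) would break the equality, which is exactly why the order must be reversed.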


Elementary Matrices
The usefulness of matrices continues to expand with the introduction of elementary matrices. An elementary matrix is a matrix obtained by performing a single elementary row operation on an identity matrix. An elementary row operation is one of the following: (1) replacing one row of a matrix with the sum of itself and a multiple of another row, (2) interchanging two rows, or (3) multiplying all entries in a row by a nonzero constant. If an elementary row operation is performed on an n x n matrix A, the resulting matrix can be written as EA, where the n x n matrix E is created by performing the same row operation on I_{n}. The following example demonstrates this concept:


It should also be noted that each elementary matrix E is invertible. The inverse of E is the elementary matrix of the same type that transforms E back into I.
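Here is a sketch of the elementary-matrix idea with a hypothetical 3 x 3 matrix A (the entries and helper names are our own illustration): performing the row operation "R3 <- R3 + 2*R1" on I_3 gives E, and the product EA equals the result of doing that same row operation directly on A.

```python
# Elementary matrices: EA equals the row operation applied to A.

def matmul(X, Y):
    """Square matrix product."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def row_add(M, target, source, scale):
    """Replace row `target` with itself plus scale * row `source`."""
    out = [row[:] for row in M]
    out[target] = [x + scale * y for x, y in zip(M[target], M[source])]
    return out

A = [[1, 0, 2], [0, 1, 3], [4, 0, 1]]
E = row_add(identity(3), 2, 0, 2)   # E: do "R3 <- R3 + 2*R1" to I_3
print(E)                            # [[1, 0, 0], [0, 1, 0], [2, 0, 1]]
print(matmul(E, A) == row_add(A, 2, 0, 2))   # True
```

The inverse of this E would be the elementary matrix for "R3 <- R3 - 2*R1", the operation of the same type that undoes it.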




Finally, Theorem 7 gives us a way to visualize an inverse matrix and helps us develop a method for finding inverse matrices. Theorem 7 says that an n x n matrix A is invertible iff (if and only if) A is row equivalent to I_{n}, and in that case any sequence of elementary row operations that reduces A to I_{n} also transforms I_{n} into A^{-1}.
An Algorithm for Finding A^{1}
Suppose we placed a matrix A and its identity matrix I next to each other and formed an augmented matrix. Row operations done to this matrix would produce the same results on both A and I. The following is an algorithm for finding A^{-1}, the inverse of matrix A. First row reduce the augmented matrix [ A I ]. If A and I are row equivalent, then the matrix [ A I ] is row equivalent to [ I A^{-1} ]. If not, then A does not have an inverse.
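The algorithm above can be sketched in a few lines of Python (a minimal illustration with exact fractions; the function name invert and the sample matrix are our own, and no row-pivoting strategy beyond finding a nonzero entry is attempted):

```python
# Row reduce [A | I] to [I | A^{-1}]; return None if A is singular.
from fractions import Fraction

def invert(A):
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Find a row at or below `col` with a nonzero entry in this column.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None                          # no pivot: A is not row equivalent to I
        M[col], M[pivot] = M[pivot], M[col]      # interchange rows
        M[col] = [x / M[col][col] for x in M[col]]   # scale pivot row so pivot = 1
        for r in range(n):                       # zero out the rest of the column
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    # The left half is now I; the right half is A^{-1}.
    return [row[n:] for row in M]

Ainv = invert([[2, 0, 0], [0, 1, 0], [1, 0, 1]])
print(Ainv[0][0], Ainv[2][0])        # 1/2 -1/2
print(invert([[1, 2], [2, 4]]))      # None: the rows are dependent
```

Every step used here is one of the three elementary row operations, so by Theorem 7 the same sequence that turns A into I_n turns I_n into A^{-1}.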



Another View of Matrix Inversion
Finally, this section gives us another way to view inverse matrices, and this new view introduces a new trick. We see that the "super augmented" matrix [ A I ], which is matrix A next to its identity matrix, row reduces to the matrix [ I A^{-1} ]. Now how and why does this work? Well, in general the matrix [ A B ] row reduces to [ I A^{-1} B ].


If [ A b ] row reduces to [ I x ], then x = A^{-1} b, i.e. Ax = b
If [ A b_{1} b_{2} ] row reduces to [ I x_{1} x_{2} ], then Ax_{1} = b_{1} and Ax_{2} = b_{2}
where the b_{n} are the columns of matrix B.
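A tiny worked sketch of this trick, with numbers of our own choosing: row reducing [ A b_{1} b_{2} ] solves both systems at once.

```python
# Super augmented matrix: [A | b1 b2] row reduces to [I | x1 x2].
from fractions import Fraction

# A = [[1, 2], [3, 4]] with right-hand sides b1 = (5, 6), b2 = (7, 8).
M = [[Fraction(1), 2, 5, 7],
     [Fraction(3), 4, 6, 8]]

# R2 <- R2 - 3*R1
M[1] = [x - 3 * y for x, y in zip(M[1], M[0])]   # [0, -2, -9, -13]
# R2 <- R2 / -2
M[1] = [x / Fraction(-2) for x in M[1]]          # [0, 1, 9/2, 13/2]
# R1 <- R1 - 2*R2
M[0] = [x - 2 * y for x, y in zip(M[0], M[1])]   # [1, 0, -4, -6]

x1 = (M[0][2], M[1][2])   # solution of A x = b1
x2 = (M[0][3], M[1][3])   # solution of A x = b2
print(x1[0], x1[1])       # -4 9/2
print(x2[0], x2[1])       # -6 13/2
```

The left two columns are now I, so the right two columns are A^{-1} b_{1} and A^{-1} b_{2}, one row reduction doing the work of two solves.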

Section 2.3
The Invertible Matrix Theorem
Let A be a square n x n matrix. Then the following statements are equivalent. That is, for a given A, the statements are either all true or all false.


(a) A is an invertible matrix.
(b) A is row equivalent to the n x n identity matrix.
(c) A has n pivot positions.
(d) The equation Ax=0 has only the trivial solution.
(e) The columns of A form a linearly independent set.
(f) The linear transformation x ↦ Ax is one-to-one.
(g) The equation Ax=b has at least one solution for each b in R^{n}.
(h) The columns of A span R^{n}.
(i) The linear transformation x ↦ Ax maps R^{n} onto R^{n}.
(j) There is an n x n matrix C such that CA=I.
(k) There is an n x n matrix D such that AD=I.
(l) A^{T} is an invertible matrix.

Notation note: If the truth of statement (a) always implies that statement (j) is true, we say that (a) implies (j) and write (a) => (j).
PROOF:

If (a) is true, meaning A is an invertible matrix, then A^{-1} works for C in statement (j) of the theorem, so (a)=>(j). Next, (j)=>(d) and (d)=>(c) as previously shown. Furthermore, if A is a square n x n matrix with n pivot positions, the pivots must lie on the main diagonal, each being one down and one to the right of the one before it. In this case the reduced echelon form of A is I_{n}, which means (c)=>(b). Also, (b)=>(a) by the previous theorem. This completes the explanation for the circle diagram.
(a)=>(k) because A^{1} works for D. Also, (k)=>(g) and (g)=>(a) as previously shown. This shows that (k) and (g) are linked to the circle. Furthermore, (g), (h), and (i) are equivalent for any matrix. Therefore, (h) and (i) are linked through (g) to the circle.
Since (d) is linked to the circle, so are (e) and (f), because (d), (e), and (f) are equivalent for any matrix A. In conclusion, (a)=>(l) by the theorem previously stated, and (l)=>(a) by the same theorem with A and A^{T} interchanged. This explains the diagram.


A linear transformation T: R^{n} → R^{n} is said to be invertible if there exists a function S: R^{n} → R^{n} such that:
S(T(x)) = x for all x in R^{n}    (1)
T(S(x)) = x for all x in R^{n}    (2)

This theorem shows that if S exists, it is unique and must be a linear transformation. S is the inverse of T and is written as T^{-1}.

Invertible Linear Transformations Theorem
Let T: R^{n} → R^{n} be a linear transformation and let A be the standard matrix for T. Then T is invertible if and only if A is an invertible matrix. In that case, the linear transformation S given by S(x) = A^{-1}x is the unique function satisfying (1) and (2).
PROOF:
If we suppose that T is invertible, then (2) shows that T is onto R^{n}, because if b is in R^{n} and x=S(b), then T(x)= T(S(b))=b, so each b is in the range of T. Therefore, A is invertible, by the Invertible Matrix Theorem, statement (i).
Now suppose that A is invertible. Let S(x)=A^{1}x. Then, S is a linear transformation and S satisfies (1) and (2).


i.e. S(T(x)) = S(Ax) = A^{-1}(Ax) = x.
Therefore, T is invertible.
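A quick sketch of this theorem with a hypothetical standard matrix (our own example, with A^{-1} computed by Theorem 4): T(x) = Ax and S(x) = A^{-1}x undo each other.

```python
# T and its proposed inverse S = A^{-1} x satisfy (1) and (2).
from fractions import Fraction

A    = [[2, 1], [1, 1]]    # det A = 2*1 - 1*1 = 1, so A is invertible
Ainv = [[1, -1], [-1, 2]]  # A^{-1} by Theorem 4, scaled by 1/det = 1

def apply(M, v):
    """Matrix-vector product Mv for a 2 x 2 matrix."""
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

T = lambda x: apply(A, x)      # the transformation, T(x) = Ax
S = lambda x: apply(Ainv, x)   # its inverse, S(x) = A^{-1} x

x = (3, 5)
print(S(T(x)) == x and T(S(x)) == x)   # True: (1) and (2) both hold
```

Because (1) and (2) hold for every x, S really is T^{-1}, matching the Invertible Linear Transformations Theorem.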



