People have been telling you forever that linear algebra and matrices are useful for modeling, simulations, and computer graphics, but the connection may have seemed a little non-obvious. This tutorial starts to draw those lines by re-introducing you to functions (with a bit more rigor than you may remember from high school) and to linear functions/transformations in particular.
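To make "linear transformation" concrete, here is a small sketch in plain Python (the matrix `A` and the vectors `u`, `v` are made-up examples) checking the two defining properties of linearity for a matrix acting on 2-vectors:

```python
# A transformation T: R^2 -> R^2 encoded as a 2x2 matrix is linear:
#   T(u + v) = T(u) + T(v)   (additivity)
#   T(c*u)   = c*T(u)        (homogeneity)

def apply(A, v):
    """Apply the 2x2 matrix A to the 2-vector v."""
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

A = [[2, 1],
     [0, 3]]                  # an arbitrary example matrix
u, v, c = [1, 4], [-2, 5], 7  # arbitrary example vectors and scalar

add = lambda x, y: [x[0] + y[0], x[1] + y[1]]
scale = lambda c, x: [c*x[0], c*x[1]]

assert apply(A, add(u, v)) == add(apply(A, u), apply(A, v))  # additivity
assert apply(A, scale(c, u)) == scale(c, apply(A, u))        # homogeneity
```

Note that the columns of `A` are just the images of the standard basis vectors, which is why a matrix pins down the whole transformation.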
You probably remember how to multiply matrices from high school, but may never have learned why, or what the operation represents. This tutorial addresses that. You'll see that multiplying two matrices can be viewed as composing the linear transformations they represent.
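The composition idea can be checked numerically. Here is a sketch in plain Python (the matrices `A`, `B` and the vector `x` are arbitrary examples): applying the product matrix in one step matches applying `B` first and then `A`.

```python
def apply(M, v):
    """Apply the 2x2 matrix M to the 2-vector v."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def matmul(A, B):
    """2x2 matrix product: column j of A*B is A applied to column j of B."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]   # arbitrary example matrices
B = [[0, 1], [-1, 2]]
x = [5, -3]            # arbitrary example vector

# A*B is the matrix of the composed transformation x -> A(B(x)).
assert apply(matmul(A, B), x) == apply(A, apply(B, x))
```

This is exactly why matrix multiplication is associative but not commutative: function composition has the same properties.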
You can use a transformation/function to map from one set to another, but can you invert it? In other words, is there a function/transformation that, given the output of the original mapping, can recover the original input? (This is much clearer with diagrams.)
This tutorial addresses that question in a linear algebra context. Since matrices can represent linear transformations, we'll spend a lot of time thinking about matrices that represent the inverse transformation.
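The "undo" picture can be sketched in plain Python (the invertible matrix `A`, its inverse, and the vector `x` are made-up examples): the inverse matrix, fed the output of the original transformation, returns the original input.

```python
def apply(M, v):
    """Apply the 2x2 matrix M to the 2-vector v."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

A     = [[2, 1], [1, 1]]    # det = 2*1 - 1*1 = 1, so A is invertible
A_inv = [[1, -1], [-1, 2]]  # the inverse of A (A * A_inv = identity)
x = [3, -4]                 # arbitrary example input

y = apply(A, x)             # forward: map x to y = A(x)
assert apply(A_inv, y) == x # backward: the inverse recovers x exactly
```

Not every matrix has such a partner; when two different inputs map to the same output, no transformation can recover "the" input, which foreshadows the invertibility conditions in this tutorial.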
We've talked a lot about inverse transformations abstractly in the last tutorial. Now we're ready to actually compute inverses. We start by "documenting" the row operations that take a matrix to reduced row echelon form, and use this bookkeeping to derive the formula for the inverse of a 2x2 matrix. After this we define the determinant for 2x2, 3x3, and nxn matrices.
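The 2x2 inverse formula that falls out of the row-reduction bookkeeping can be written down directly. Here is a sketch in plain Python (the sample matrix is made up): for A = [[a, b], [c, d]] with determinant ad − bc ≠ 0, the inverse is (1/det) · [[d, −b], [−c, a]].

```python
def inverse_2x2(A):
    """Inverse of a 2x2 matrix via the ad - bc formula."""
    (a, b), (c, d) = A
    det = a*d - b*c
    if det == 0:
        raise ValueError("matrix is singular; no inverse exists")
    return [[ d/det, -b/det],
            [-c/det,  a/det]]

A = [[4, 7], [2, 6]]        # arbitrary example; det = 24 - 14 = 10
A_inv = inverse_2x2(A)

# Check A * A_inv = identity (up to floating-point rounding).
I = [[sum(A[i][k]*A_inv[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
assert all(abs(I[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```

The det == 0 branch is the punchline: the determinant is exactly the quantity that decides whether the inverse exists.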
In the last tutorial on matrix inverses, we first defined what a determinant is and gave several examples of computing one. In this tutorial we go deeper: we explore what happens to the determinant under several circumstances and conceptualize it in several ways.
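A few of the determinant behaviors explored here can be checked numerically. This is a plain-Python sketch on arbitrary 2x2 examples: swapping rows flips the sign, scaling a row scales the determinant, and the determinant of a product is the product of the determinants.

```python
def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

A = [[3, 1], [4, 2]]   # det = 6 - 4 = 2
B = [[2, 5], [1, 3]]   # det = 6 - 5 = 1

# Swapping two rows flips the sign of the determinant.
assert det2([A[1], A[0]]) == -det2(A)

# Scaling one row by k scales the determinant by k.
k = 5
assert det2([[k*A[0][0], k*A[0][1]], A[1]]) == k * det2(A)

# The determinant is multiplicative: det(A*B) = det(A) * det(B).
AB = [[sum(A[i][m]*B[m][j] for m in range(2)) for j in range(2)]
      for i in range(2)]
assert det2(AB) == det2(A) * det2(B)
```

Geometrically (one of the conceptualizations in this tutorial), |det| is the factor by which the transformation scales areas, which is why these algebraic rules hold.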