
Linear algebra vocabulary refresher

Linear algebra in multivariable calculus

We just finished talking about all the different ways we can visualize multivariable functions. As a final stop before we start to focus more on the calculus part of multivariable calculus, let's discuss a uniquely important tool that we'll use throughout the course. At its core, linear algebra gives us a way to describe higher dimensions compactly. This is indispensable in calculus beyond a single variable, or in other words, beyond the first dimension. A foundation in linear algebra helps us really see the beauty in multivariable calculus.
This article won't go into too much detail, because there's already a whole course for linear algebra on Khan Academy. Instead, it will present a review of key terms and concepts that will appear again and again in multivariable calculus. If you don't recognize a concept, read on to see a brief discussion of it and its multivariable applications. If you still don't understand the idea, there are links to Khan Academy content that will help you learn.

Vectors

Vectors are the building blocks of everything multivariable. We use them when we want to represent a coordinate in higher-dimensional space or, more generally, to write a list of anything. These lists could be functions that can describe moving coordinates or even transformations of the entire input space.
There are lots of notations for vectors, but here are the three we'll use most in this course. Sometimes we write a little arrow on top of variables that are vectors to distinguish them, although people also often omit it.
$\vec{v} = (1, 2, 3) \qquad \vec{v} = 1\hat{\imath} + 2\hat{\jmath} + 3\hat{k} \qquad \vec{v} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$
All the examples above are of 3D vectors, but a vector can be $n$-dimensional in general, where $n$ is any positive integer.
The first notation is the familiar ordered triple of coordinates, which is common because it generalizes to all dimensions. The second notation, on the other hand, only works in 2D and 3D. The symbol $\hat{\imath}$ (pronounced i-hat) is the unit $x$ vector, so $\hat{\imath} = (1, 0, 0)$. Similarly, $\hat{\jmath} = (0, 1, 0)$ and $\hat{k} = (0, 0, 1)$. The third notation is matrix notation and can be generalized to as many dimensions as we want by adding more rows.
The two basic operations of vectors are addition and scalar multiplication.
$\vec{a} = (0, -1, -2) \qquad \vec{b} = (1, 2, 3)$
$\vec{a} + \vec{b} = (0 + 1,\; -1 + 2,\; -2 + 3) = (1, 1, 1)$
$2\vec{a} = (2 \cdot 0,\; 2 \cdot (-1),\; 2 \cdot (-2)) = (0, -2, -4)$
$2\vec{a} + \vec{b} = (0 + 1,\; -2 + 2,\; -4 + 3) = (1, 0, -1)$
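If you like checking this kind of arithmetic by computer, here is a minimal sketch using Python with NumPy (the library and variable names are our choice for illustration; the article itself doesn't assume any particular tool):

```python
import numpy as np

# The vectors from the example above.
a = np.array([0, -1, -2])
b = np.array([1, 2, 3])

print(a + b)      # [1 1 1]       addition is componentwise
print(2 * a)      # [ 0 -2 -4]    scalar multiplication scales every component
print(2 * a + b)  # [ 1  0 -1]
```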
In multivariable calculus, we might write $(f_0(t), f_1(t), f_2(t))$ to denote a vector whose $x$-component is the function $f_0$ evaluated at $t$, whose $y$-component is $f_1(t)$, and whose $z$-component is $f_2(t)$. There are new ways to think about vectors like this that we'll see as we progress further into multivariable calculus.
Learn more about vectors here.
Beyond addition and scalar multiplication, there are two more important operations between vectors.

Dot products

The dot product is written $\vec{v} \cdot \vec{w}$ and gives a number. This number is a measure of how long the vectors are and also how much the two vectors point in the same direction. If the vectors are perpendicular, then the dot product is zero. The dot product is largest when the two vectors are parallel.
The definition of the dot product is given below, where θ is the angle between the two vectors. In words, it says that the dot product between two vectors is the product of their lengths times the cosine of the angle between them.
$\vec{v} \cdot \vec{w} = \|\vec{v}\| \, \|\vec{w}\| \cos(\theta)$
The more common way to find the dot product requires knowing the components of the vectors. For two vectors $\vec{v} = (v_0, \ldots, v_n)$ and $\vec{w} = (w_0, \ldots, w_n)$, we have:
$\vec{v} \cdot \vec{w} = v_0 w_0 + \cdots + v_n w_n.$
Here are a few examples.
$\vec{v} = 4\hat{\imath} - 1\hat{\jmath} + 2\hat{k} \qquad \vec{w} = 1\hat{\imath} + 2\hat{\jmath} + 4\hat{k}$
$\vec{v} \cdot \vec{w} = 4 \cdot 1 + (-1) \cdot 2 + 2 \cdot 4 = 10$
$(4, 5) \cdot (-5, 4) = 4 \cdot (-5) + 5 \cdot 4 = 0$
$(0, 5, -4) \cdot (9, 2, 3) = 0 \cdot 9 + 5 \cdot 2 + (-4) \cdot 3 = -2$
Notice how $(4, 5) \cdot (-5, 4) = 0$. If we graph these two vectors, we see that they are indeed perpendicular. This is the work of the $\cos(\theta)$ in the definition, because $\cos(\pi/2) = 0$.
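As a quick check, here are the same three dot products computed with NumPy (again just a sketch; np.dot carries out exactly the componentwise formula above):

```python
import numpy as np

v = np.array([4, -1, 2])
w = np.array([1, 2, 4])

print(np.dot(v, w))                    # 10
print(np.dot([4, 5], [-5, 4]))         # 0: these two vectors are perpendicular
print(np.dot([0, 5, -4], [9, 2, 3]))   # -2
```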
Learn more about dot products here.

Cross products

The dot product is defined in any number of dimensions, but the cross product only works if both vectors are three dimensional. The cross product is written v×w and gives a vector.
This vector has two special properties: it is perpendicular to both v and w, and its magnitude corresponds to the area of the parallelogram formed by v and w.
[Figure: the parallelogram formed by $\vec{v}$ and $\vec{w}$, with $\vec{v} \times \vec{w}$ perpendicular to both. Image credit: "Cross product parallelogram," by Wikipedia user Acdx.]
If two vectors are parallel, then their cross product is zero. The cross product is largest when the two vectors are perpendicular. In this sense, the cross product complements the dot product. The dot product is zero for perpendicular vectors, while the cross product is zero for parallel vectors.
Learn more about the connection between the dot and cross product here.
The magnitude of the cross product is $\|\vec{v} \times \vec{w}\| = \|\vec{v}\| \, \|\vec{w}\| \sin(\theta)$, where $\theta$ is again the angle between the two vectors. In words, it says that the magnitude is the product of the lengths of the two vectors times the sine of the angle between them.
The formula for the cross product of two vectors is quite a mouthful and should not be memorized. Instead, always think of the cross product as a vector perpendicular to its two inputs, with magnitude equal to the area of the parallelogram they form.
$\vec{v} = (v_1, v_2, v_3) \qquad \vec{w} = (w_1, w_2, w_3)$
$\vec{v} \times \vec{w} = (v_2 w_3 - v_3 w_2,\; v_3 w_1 - v_1 w_3,\; v_1 w_2 - v_2 w_1)$
Notice, for example, that $\hat{\imath} \times \hat{\jmath} = \hat{k}$.
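Here is a small NumPy sketch of that fact and of the area interpretation (the particular vectors are examples we picked, nothing more):

```python
import numpy as np

i_hat = np.array([1, 0, 0])
j_hat = np.array([0, 1, 0])

print(np.cross(i_hat, j_hat))   # [0 0 1], which is k-hat

# The magnitude of the cross product is the area of the parallelogram.
a = np.array([2, 0, 0])
b = np.array([0, 3, 0])
print(np.linalg.norm(np.cross(a, b)))   # 6.0, the area of a 2-by-3 rectangle
```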
Learn more about cross products here.

Matrices

When we want to talk about lots of vectors all at once, we can use a matrix. Think of each column as its own vector, except now we can represent three vectors with one mathematical object. (Sometimes we think of the rows as vectors instead of the columns.)
$\begin{bmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{bmatrix}$
Matrices have a concept of addition and multiplication, although it's far more useful to think of these operations as transformations than as the formulas we use to compute them.
Learn more about matrices as transformations here.
Nonetheless, we will need to know how to compute with matrices in order to solve problems. Always remember that matrices still represent transformations encoded with numbers.
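For instance, here is a minimal NumPy sketch of the matrix above acting on a vector (the vector we feed in is arbitrary, chosen only to show what multiplication does):

```python
import numpy as np

# The matrix from above; its columns are the vectors (1,2,3), (4,5,6), (7,8,9).
M = np.array([[1, 4, 7],
              [2, 5, 8],
              [3, 6, 9]])

v = np.array([1, 0, 0])

print(M @ v)   # [1 2 3]: the transformation sends the unit x vector to M's first column
print(M @ M)   # multiplying two matrices composes their transformations
```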
Learn more about computing with matrices here.

Inverse matrices

Given that we can multiply matrices, there is one matrix that stands out. We call it I, the identity matrix.
$I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
The identity matrix I isn't only defined for 3×3 matrices. In general, I is the n×n matrix with ones on its diagonal and zeros everywhere else.
The matrix I is special because it is the only matrix that makes IA=AI=A for any n×n matrix A. The identity matrix I is similar to the number 1 when we're in matrix-land.
We know that we can take any nonzero real number $x$ and multiply it by $x^{-1}$ to get $x x^{-1} = x^{-1} x = 1$. A natural question might be, "Can we do the same for matrices?"
The answer is that usually we can. If we have a matrix $A$ and it has an inverse $A^{-1}$, then we call $A$ invertible. The inverse of $A$ must satisfy $A A^{-1} = A^{-1} A = I$.
Taking a step back from all the symbols, what is an inverse matrix? What does it mean to multiply a matrix by its inverse?
If you haven't seen matrices as transformations before, look at this first. For example, the matrix I is the identity matrix whose transformation is to leave everything the same.
Here's an example of inverse matrices:
The matrix $2I$ stretches everything out by a factor of 2, and the matrix $0.5I$ squeezes everything in by a factor of 2. When we apply $2I$ and $0.5I$ successively, we stretch then squeeze by equal amounts, so the net transformation is just the identity transformation $I$. Therefore, $(2I)^{-1} = 0.5I$.
Now, an arbitrary matrix $T$ has a corresponding transformation. We start with $I$, the "do nothing" matrix. Then $TI$ means start from "do nothing," then transform by $T$. If $T$ is invertible, then $T^{-1}(TI)$ means start from "do nothing," then transform by $T$, then perform the exact transformation that undoes $T$. This is again $I$, the "do nothing" transformation.
In light of the transformation mindset, the inverse of a matrix A is the matrix whose transformation exactly undoes the effect of A.
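Here is the $2I$ example as a quick NumPy sketch (np.linalg.inv is just one convenient way to compute an inverse; the matrix size is our choice):

```python
import numpy as np

A = 2 * np.eye(2)            # the matrix 2I: stretch everything by a factor of 2
A_inv = np.linalg.inv(A)     # its inverse should be 0.5I: squeeze by a factor of 2

print(A_inv)                 # [[0.5 0. ]
                             #  [0.  0.5]]
print(A @ A_inv)             # the identity matrix: stretching then squeezing does nothing
```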

Determinants

Let's return to the question of when a matrix has an inverse. Not all matrices are invertible. For example, try as we might, we will never find an inverse for the matrix A.
$A = \begin{bmatrix} 4 & 0 & 2 \\ 2 & 0 & 1 \\ 2 & 3 & 5 \end{bmatrix}$
Why not? Every nonzero number has an inverse, but there is no number that will multiply with 0 to give 1. Many matrices act like 0 does on the number line. We can even think of numbers as 1×1 matrices.
Specifically, when a matrix acts as a transformation, we can talk about it stretching things out or squeezing them in. Imagine that we applied a linear transformation to a cube, for example. It'll get warped somehow into a slanted box (a parallelepiped), and its volume will perhaps be larger or smaller than at the start.
This expansion and contraction is so important that we have a special function that tells us how much a certain matrix stretches or squeezes space. We call this function the determinant.
When we consider what properties the inverse of a matrix must have, a big one is that it should exactly reverse the stretching or squeezing the original matrix caused. This is possible for almost all matrices, but a select few shrink space infinitely.
Imagine the cube from before not just getting warped, but getting completely flattened into the xy-plane. We have lost information that was essential to that cube, so we cannot rebuild it exactly.
Another way to say this is that matrices with determinant zero are not invertible.
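To see this numerically, here is a NumPy sketch using the matrix $A$ from above (the comparison with $2I$ is an extra illustration we added):

```python
import numpy as np

A = np.array([[4, 0, 2],
              [2, 0, 1],
              [2, 3, 5]])

# A determinant of zero means A flattens space, so it has no inverse.
print(np.linalg.det(A))       # 0.0 (up to floating-point error)

B = 2 * np.eye(3)             # 2I scales each axis by a factor of 2
print(np.linalg.det(B))       # 8.0: volumes get stretched by 2 * 2 * 2
```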
We will need to compute determinants in multivariable calculus, so it's worthwhile to know the formula. There is a good explanation from linear algebra for why the determinant is calculated as it is. Until then, try to always think of determinants in terms of how they stretch and squeeze space.
Learn about calculating determinants here.

Onward!

Now you know the linear algebra that is essential to a fruitful understanding of multivariable calculus. Whenever a problem involving one of these topics is difficult, always come back to what the object represents.
Vectors represent directions in space. Dot products are a measure of whether two vectors are pointing in the same direction. Cross products give perpendicular vectors. Matrices represent linear transformations of space. Inverse matrices are the opposite transformation of a certain matrix. Determinants give the scaling factor of a matrix.
