
Video transcript

- [Voiceover] When you have a multivariable function, something that takes in multiple different input values and, let's say, just outputs a single number, a very common thing you wanna do with an animal like this is maximize it. Maximize it, and what this means is you're looking for the input points, the values of x and y and all of its other inputs, such that the output, f, is as great as it possibly can be.

Now this actually comes up all the time in practice, 'cause usually when you're dealing with a multivariable function, it's not just for fun and for dealing with abstract symbols, it's 'cause it actually represents something. So maybe it represents the profits of a company. Maybe this is a function where you're considering all the choices you can make, like the wages you give your employees, or the prices of your goods, or the amount of debt that you raise for capital, all sorts of choices that you might make, and you wanna know what values you should give to those choices such that you maximize profits, you maximize the thing. And if you have a function that models these relationships, there are techniques, which I'm about to teach you, that you can use to maximize this.

Another very common setting, more and more important these days, is that of machine learning and artificial intelligence, where often what you do is you assign something called a cost function to a task. So maybe you're trying to teach a computer how to understand audio, or how to read handwritten text. What you do is you find a function that basically tells it how wrong it is when it makes a guess, and if you do a good job designing that function, you just need to tell the computer to minimize it. So that's kind of the flip side, right? Instead of finding the maximum, to minimize a certain function. And if it minimizes this cost function, that means that it's doing a really good job at whatever task you've assigned it. So a lot of the art and science of machine learning and artificial intelligence comes down to, well, one, finding this cost function, actually describing difficult tasks in terms of a function, and then applying the techniques that I'm about to teach you to have the computer minimize that, and a lot of time and research has gone into figuring out ways to apply these techniques really quickly and efficiently.

So, first of all, on a conceptual level, let's just think about what it means to be finding the maximum of a multivariable function. I have here the graph of a two-variable function: it's something that has a two-variable input, which we're thinking of as the xy-plane, and its output is the height of this graph. If you're looking to maximize it, basically what you're finding is this peak, kind of the tallest mountain in the entire area, and you're looking for the input value, the point on the xy-plane directly below that peak, 'cause that tells you the values of the inputs that you should put in to maximize your function.

So how do you go about finding that? Well, this is perhaps the core observation in, well, calculus, not just multivariable calculus. This is similar in the single-variable world, and there are similarities in other settings, but the core observation is that if you take a tangent plane at that peak, so let's just draw in a tangent plane at that peak, it's gonna be completely flat. But let's say you did this at a different point, right?
'Cause if you tried to find the tangent plane, not at that point, but you kind of moved it about a bit to somewhere that's not quite a maximum, if the tangent plane has any kind of slope to it, what that's telling you is that if you take very small steps, kind of in the direction of that upward slope, you can increase the value of your function. So if there's any slope to the tangent plane, you know that you can walk in some direction to increase it, but if there's no slope to it, if it's flat, then that's a sign that no matter which direction you walk, you're not gonna be significantly increasing the value of your function.

So what does this mean in terms of formulas? Well, if you kind of think back to how we compute tangent planes (and if you're not very comfortable with that, now would be a good time to take another look at those videos about tangent planes), the slope of the plane in each direction, so this would be the slope in the x direction, and then, if you look at it from another perspective, this would be the slope in the y direction, each one of those has to be zero. And that, in terms of partial derivatives, means the partial derivative of your function with respect to x, at whatever point you're dealing with, right? So I'll call it (x₀, y₀), as the point where you're inputting this, has to be zero, and then similarly, the partial derivative with respect to the other variable, with respect to y, at that same point, has to be zero.

And both of these have to be true, because, let's just take a look, I don't know, let's slide it over a little bit here, this tangent plane: if you look at the slope, you imagine walking in the y direction, you're not increasing your value at all. The slope in the y direction would actually be zero, so that would mean the partial derivative with respect to y would be zero, but with respect to x, when you're moving in the x direction here, the slope is clearly negative, because as you take positive steps in the x direction, the height of your tangent plane is decreasing, which corresponds to: if you take tiny steps on your graph, then the height will decrease in a manner proportional to the size of those tiny steps.

So what this gives you here is gonna be a system of equations, where you're solving for the values of x₀ and y₀ that satisfy both of these equations, and in future videos, I'll go through specific examples of this. For now, I just wanna give a good conceptual understanding, but one very important thing to notice is that just because this condition is satisfied, meaning your tangent plane is flat, just because that's satisfied, doesn't necessarily mean that you've found the maximum. That's just one requirement that it has to satisfy.
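If you'd like to see that system of equations worked out symbolically, here is a minimal sketch using SymPy, with a made-up example function f(x, y) = -(x² + y²) + xy + 3x (not the surface drawn in the video): it takes both partial derivatives, sets each one equal to zero, and solves for the candidate point (x₀, y₀).

```python
# Minimal sketch of the flat-tangent-plane condition, using a made-up
# example function (this is not the surface shown in the video).
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Hypothetical example function, chosen only to illustrate the technique.
f = -(x**2 + y**2) + x*y + 3*x

# The two requirements from the video: df/dx = 0 and df/dy = 0.
fx = sp.diff(f, x)
fy = sp.diff(f, y)

# Solve the system of equations for the candidate point (x0, y0).
critical_points = sp.solve([sp.Eq(fx, 0), sp.Eq(fy, 0)], [x, y], dict=True)
print(critical_points)  # [{x: 2, y: 1}] -- the tangent plane is flat at (2, 1)
```

Keep in mind, as the video stresses, a point found this way is only a candidate: further checks are needed to decide what kind of point it actually is.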
But, for one thing, if you found the tangent plane at other little peaks, like this guy here, or this guy here, or all of the little bumps that go up, those tangent planes would also be flat, and those little bumps actually have a name, because this comes up a lot: they're called local minima, or local maxima, sorry, so those guys are called local maxima. Maxima is just the plural of maximum, and local means that it's relative to a single point, so it's basically, if you walk in any direction when you're on that little peak, you'll go downhill, so relative to the neighbors of that little point, it is a maximum, but relative to the entire function, these guys are the shorter mountains next to Mount Everest. But there's also another circumstance where you might find a flat tangent plane, and that's at the minima points, right? If you have the global minimum, the absolute smallest, or also just the local minima, these inverted peaks, you'll also find flat tangent planes.

So what that means, first of all, is that when you're minimizing a function, you also have to look for this requirement, where all the partial derivatives are zero, but it mainly just means that your job isn't done once you've done this. You have to do more tests to check whether or not what you found is a local maximum or a local minimum, or a global maximum.

And these requirements, by the way, often you'll see them written in a more succinct form, where instead of saying all the partial derivatives have to be zero, which is what you need to find, they'll write it in a different form, where you say that the gradient of your function, f, which, of course, is just the vector that contains all those partial derivatives, its first component is the partial derivative with respect to the first variable, its second component is the partial derivative with respect to the second variable, and if there's more variables, you would keep going, you'd say that this whole thing has to equal the zero vector, the vector that has nothing but zeroes as its components. And it's kind of a common abuse of notation: people will just call that zero vector "zero," and maybe they'll emphasize it by making it bold, because the number zero is not a vector, and often making things bold emphasizes that you want to be referring to a vector. But this gives a very succinct way of describing the requirement: you're just looking for where the gradient of your function is equal to the zero vector, and that way, you can just write it on one line, but in practice, every time that you're expanding that out, what that means is you find all of the different partial derivatives. So this is really just a matter of notational convenience and using less space on a blackboard, but whenever you see this, that the gradient equals zero, what you should be thinking of is the idea that the tangent plane, the tangent plane, is completely flat.

And as I just said, that's not enough, because you might also be finding local maxima or minima points, but in multivariable calculus, there's also another possibility: a place where the tangent plane is flat, but what you're looking at is neither a local maximum nor a local minimum. This is the idea of a saddle point, which is new to multivariable calculus, and that's what I'll be talking about in the next video, so I will see you then.
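As a small closing sketch (again using SymPy, with the standard textbook example f(x, y) = x² − y², which is not one of the surfaces drawn in the video), you can check that the gradient being the zero vector really is only a necessary condition: at the origin the gradient vanishes, yet the point is a saddle, neither a local maximum nor a local minimum.

```python
# Minimal sketch: gradient = zero vector does not pin down what kind of
# point you have. The classic example f(x, y) = x**2 - y**2 has a zero
# gradient at the origin, but the origin is a saddle point.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 - y**2

grad_f = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])  # the gradient vector
print(grad_f.subs({x: 0, y: 0}))  # Matrix([[0], [0]]) -- the zero vector

# Along the x-axis the origin looks like a minimum, along the y-axis like
# a maximum, so further tests are needed to classify the point.
print(f.subs({x: 1, y: 0}), f.subs({x: 0, y: 1}))  # 1 -1
```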