Video transcript

Say I've got a subspace V, maybe a subspace of R^n. I'm going to define the orthogonal complement of V. We write it with a little orthogonal symbol as a superscript on V, and you can pronounce it "V perp" — not for "perpetrator," but for "perpendicular." V perp is the set of all vectors x that are members of R^n such that x dot v is equal to 0 for every vector v that is a member of our subspace V. So we're essentially saying: you have some subspace, and it's got a bunch of vectors in it. If I can find some other set of vectors where every member of that set is orthogonal to every member of the subspace in question, then the set of those vectors is called the orthogonal complement of V, and you write it V perp.

The first thing we tend to do when we're defining a set like this is to ask: is V perp a subspace? You might remember from many, many videos ago that we had just a couple of conditions for a subspace. First, if a and b are both members of V perp, we have to check whether a plus b is a member of V perp — the set needs to be closed under addition. The next condition: if a is a member of V perp, is any scalar multiple of a also a member of V perp? And the last condition is that it has to contain the zero vector, which is a little bit redundant with the scalar condition, because if every scalar multiple of a member is also a member, we could just multiply by the scalar zero, and that would imply the zero vector is a member.

So what does the fact that a and b are members of V perp imply? It means that a dot v is equal to 0 for every v that is a member of our original subspace V, and likewise b dot v is equal to 0 for every v in V. So what happens if we take (a + b) dot v? This is equal to a dot v plus b dot v, and we just said that both of those quantities are 0, so this is 0 plus 0, which is 0. So a plus b is definitely a member of our orthogonal complement — check for the first condition of being a subspace.

Now, is c times a a member of V perp? If we take c·a and dot it with any member v of our original subspace, that's the same thing as c times (a dot v). By assumption a is a member of the orthogonal complement, so a dot v is 0, and c times 0 is 0. So c·a is also a member of the orthogonal complement of V. And for the zero vector, you could run the same argument with c equal to 0, or you could just say: zero is orthogonal to everything — take the zero vector and dot it with anything and you get 0 — so the zero vector is always going to be a member of any orthogonal complement. So we know that V perp, the orthogonal complement of V, is a subspace, which is nice, because now we can apply to it all of the properties we know about subspaces.
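To make the three subspace checks concrete, here's a quick numerical sketch in Python with NumPy. The subspace and the vectors are made up for illustration: V is the span of a single vector v in R^3, and a and b are chosen so that each dots to zero with v, making them members of V perp.

```python
import numpy as np

# Hypothetical example: V = span{v} in R^3, and a, b are two members
# of V-perp, i.e. each dots to zero with v.
v = np.array([1.0, 2.0, 2.0])
a = np.array([2.0, -1.0, 0.0])   # a . v = 2 - 2 + 0 = 0
b = np.array([0.0, 1.0, -1.0])   # b . v = 0 + 2 - 2 = 0

# Closed under addition: (a + b) . v = a . v + b . v = 0 + 0
print((a + b) @ v)        # 0.0

# Closed under scalar multiplication: (c a) . v = c (a . v) = c * 0
c = 3.5
print((c * a) @ v)        # 0.0

# The zero vector is orthogonal to everything, so it's always in V-perp.
print(np.zeros(3) @ v)    # 0.0
```

The same checks would pass for any subspace V: the argument in the video never used anything about V beyond the definition of the dot product.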
Now, the next question — and I touched on this idea in the last video — is this. Say I have some matrix A, an m-by-n matrix. In the last video I said that the null space of A is the orthogonal complement of the row space of A. And the row space of A is the same thing as the column space of A transpose, so one way to rewrite that sentence is: the null space of A is the orthogonal complement of the column space of A transpose. That's the claim, which I have not proven to you — although in the particular example I did in the last two videos, with that 2-by-3 matrix, it was the case. So let's see if it applies generally.

Let me write my matrix A as a bunch of row vectors. Just to be consistent with our notation — in our world, vectors are always column vectors, and row vectors are just transposes — I'm going to write the rows as r1 transpose, r2 transpose, and you go all the way down; we have m rows, so you get to rm transpose. Don't let the transpose part confuse you: these are row vectors, and I'm writing the transposes there just to say that they're the transposes of column vectors that represent those rows. If it's helpful, just imagine that r1 transpose is the first row of the matrix, r2 transpose is the second row, and so on and so forth.

Now, what is the null space of A? It's all of the vectors x that satisfy the equation Ax = 0, where 0 is the zero vector in R^m. To solve this equation, remember — we've seen this multiple times — the matrix-vector product is essentially a stack of dot products. To get the first entry of Ax, you take this first row dotted with x; that is, r1 transpose times x. The second entry is r2 transpose times x, and so on. So another way to write the equation Ax = 0 is: r1 transpose x = 0, r2 transpose x = 0, all the way down to rm transpose x = 0. And by definition — since A is m-by-n, it has n columns — the null space of A is the set of all vectors x in R^n such that Ax = 0.
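The "stack of dot products" reading of the matrix-vector product is easy to check numerically. This sketch uses a made-up 2-by-3 matrix and a vector x chosen so that Ax = 0; the rows of the array play the role of r1 transpose and r2 transpose.

```python
import numpy as np

# Hypothetical 2x3 matrix; row i of A plays the role of r_i transpose.
A = np.array([[1.0, 2.0, 3.0],
              [1.0, 1.0, 1.0]])
x = np.array([1.0, -2.0, 1.0])   # chosen so that A x = 0

# The product A x stacks the dot products r_i . x:
print(A @ x)               # [0. 0.]
print(A[0] @ x, A[1] @ x)  # 0.0 0.0 -- each row dotted with x is zero,
                           # so x is a member of the null space of A
```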
So by definition, if I tell you that some vector v1 is a member of the null space of A, that means it satisfies this equation: A times v1 is equal to 0. That means that when you dot each of the rows with v1, you get 0 — or, another way of saying it, v1 is orthogonal to all of these rows: to r1 transpose, to r2 transpose, all the way to rm transpose.

And if you're orthogonal to all of these rows in your matrix, you're also orthogonal to any linear combination of them. You could imagine some vector w that is a linear combination of these guys — I wrote them as transposes just because they're row vectors, but I can write them as regular column vectors. So let's say w = c1·r1 + c2·r2 + ... + cm·rm. What happens when you take v, a member of our null space, and dot it with w? Because the dot product has the distributive property, v dot w is equal to c1 times (v dot r1), plus c2 times (v dot r2), all the way up to cm times (v dot rm) — I've just pulled the scalars out. And we already said that v dotted with each of these r's is 0, so every one of those terms is 0, and the whole expression is 0. So if you take any vector that's a linear combination of these row vectors and dot it with any member of your null space, you get 0. And what is the set of all linear combinations of these rows? It's their span — that's the row space. So if w is a member of the row space, which we can represent as the column space of A transpose, and v is a member of our null space, then v dot w = 0. Every member of the null space of A is orthogonal to every member of the row space of A.

Now, that only gets us halfway. It still doesn't tell us that the null space equals the orthogonal complement of the row space — for example, there might be members of the orthogonal complement of the row space that aren't members of the null space.
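Here's the first half of the argument checked numerically: a null-space vector is orthogonal to each row, and therefore to any linear combination of the rows. The matrix, the null-space vector, and the weights c1, c2 are all made up for illustration.

```python
import numpy as np

# Hypothetical 2x3 matrix and a vector v in its null space (A v = 0).
A = np.array([[1.0, 2.0, 3.0],
              [1.0, 1.0, 1.0]])
v = np.array([1.0, -2.0, 1.0])
assert np.allclose(A @ v, 0)     # v really is in the null space

# Any linear combination w = c1 r1 + c2 r2 lies in the row space...
c1, c2 = 4.0, -2.5
w = c1 * A[0] + c2 * A[1]

# ...and v is orthogonal to it: v.w = c1 (v.r1) + c2 (v.r2) = 0 + 0.
print(v @ w)   # 0.0 (up to round-off)
```

The same cancellation happens for any choice of c1 and c2, which is exactly the distributive-property argument from the transcript.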
So suppose I have some other vector u that is a member of the orthogonal complement of the row space. (I know the notation is a little convoluted — maybe I should write an R there — but I really want to set in your mind that the row space is just the column space of the transpose.) What I want to do is show that u has to be in the null space. So far we've only shown that everything in the null space is orthogonal to the row space; we don't yet know that everything orthogonal to the row space is also in the null space. That's what we have to show in order for the two sets to be equal.

If u is in the orthogonal complement of the row space, then u dot w = 0 for every w that is a member of the row space. And that implies that u dot rj = 0 for any of the row vectors, where j runs from 1 all the way through m. How do I know that? Well, u is orthogonal to any member of the row space, and these rows are all definitely members of the row space — we don't know whether all of them are basis vectors, but some of them form a basis for it. So u dot r1 = 0, u dot r2 = 0, all the way to u dot rm = 0. But if all of that is true, then A times u is equal to 0 — stack up the dot products and they satisfy the equation — which implies that u is a member of our null space.

(And I just noticed that I made a slight error earlier: in these dot products I don't have to write the transpose, because we've defined the dot product on column vectors, and the transpose of the transpose just gets you back to the column vector. Minor error; it doesn't change the argument.)

So here's my main takeaway, my punchline, the big picture. We showed that every member of the null space is a member of the orthogonal complement of the row space — the null space is a subset of the orthogonal complement of the row space. And we just showed that every member of the orthogonal complement of the row space is a member of the null space. If these two sets are subsets of each other, they must be equal. So we now know that the null space of A is equal to the orthogonal complement of the row space of A, the column space of A transpose.

That related the null space to the row space. Now I could just as easily make a substitution here: let A equal the transpose of some other matrix, A = B transpose — you could transpose either way.
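The conclusion — the null space of A is the orthogonal complement of the row space — can be sanity-checked on a concrete matrix. This sketch uses the SVD to read off a null-space basis (the rows of V^T past the rank span N(A)); the matrix is made up for illustration.

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    """Columns form an orthonormal basis of N(A), computed via the SVD."""
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # rank 2, so N(A) is 1-dimensional in R^3
N = null_space_basis(A)

# Every null-space basis vector is orthogonal to every row of A
# (and hence to every member of the row space):
print(np.max(np.abs(A @ N)))          # ~0, up to round-off

# The dimensions fit a complement: dim N(A) + dim(row space) = n
print(N.shape[1] + 2 == A.shape[1])   # True
```

The dimension count in the last line is the part the dot-product check alone can't give you: it reflects that the two subspaces are full complements of each other, not merely orthogonal.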
If I make that substitution, what do we get? We get that the null space of B transpose is equal to the orthogonal complement of the column space of (B transpose) transpose — let me get my parentheses right — and the transpose of the transpose is just B. So I can write it as: the null space of B transpose is equal to the orthogonal complement of the column space of B. And B and A are just arbitrary matrices, so this holds in general. Sometimes it's nice to write these out in words: the null space is the orthogonal complement of the row space, and the left null space — which is just the same thing as the null space of the transpose matrix — is the orthogonal complement of the column space. Those are pretty neat takeaways. We saw a particular example of it a couple of videos ago, and now you see that it's true for all matrices.
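And the flip side can be checked the same way: the left null space, N(A transpose), is orthogonal to the column space of A. Another made-up example, reusing the same SVD trick on the transpose.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])   # rank 1: the second column is twice the first

# Null space of A^T, read off the SVD: rows of vt past the rank span it.
_, s, vt = np.linalg.svd(A.T)
rank = int(np.sum(s > 1e-10))
left_null = vt[rank:].T      # columns span N(A^T), the left null space

# Each left-null-space vector is orthogonal to each column of A,
# hence to the whole column space of A.
print(left_null.shape[1])               # 2 = m - rank = 3 - 1
print(np.max(np.abs(A.T @ left_null)))  # ~0, up to round-off
```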