
Coordinates with respect to orthonormal bases

Seeing that orthonormal bases make for good coordinate systems. Created by Sal Khan.

Want to join the conversation?

  • ivan:
    Hi,

    What does he mean by 'B coordinate representation of x'? I do not understand what he means by 'coordinate' here. Does he mean the projection of x onto the span of V? Thanks.
    (2 votes)
    • Robert:
      Here's a link to a video on the topic which begins a 7 part series of videos explaining it in detail.
      https://www.khanacademy.org/math/linear-algebra/alternate_bases/change_of_basis/v/linear-algebra-coordinates-with-respect-to-a-basis
      Also, if you like, I have summarized a bit of it here:
      The standard basis is {e_1,e_2,...,e_n} for R^n.
      Suppose we have a k-dimensional subspace V of R^n with a basis B = {v_1,v_2,...,v_k}, where the v_i's are vectors in R^n.
      We could write standard coordinates for a vector x in V as
      x = <x_1,x_2,...,x_n> = x_1*e_1 + x_2*e_2 + ... + x_n*e_n.
      But we could also represent x as a linear combination of the v_i's. We write this representation of x as [x]_B, and this is the B coordinate representation of the vector x.
      If x = c_1*v_1 + c_2*v_2 + ... + c_k*v_k, then we write
      [x]_B = <c_1,c_2,...,c_k>_B. We call these c_i's the B coordinates of x.
      Hope that summary helps, but I do suggest watching the videos linked above. They contain some minor mistakes, but if you read the comments and push through them you should be better set up for what continues here.
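      As a tiny NumPy illustration of those definitions (a made-up basis, purely for the example):

          import numpy as np

          # A hypothetical basis B = {v1, v2} for a 2-dimensional subspace of R^3
          v1 = np.array([1.0, 0.0, 1.0])
          v2 = np.array([0.0, 1.0, 0.0])
          C = np.column_stack([v1, v2])      # columns are the basis vectors

          # A vector x in the subspace, built with known B coordinates (2, 3)
          x = 2 * v1 + 3 * v2                # standard coordinates: [2. 3. 2.]

          # Recover [x]_B by solving C [x]_B = x (least squares, exact here)
          x_B, *_ = np.linalg.lstsq(C, x, rcond=None)
          print(x_B)                         # [2. 3.] -- the B coordinates of x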
      (6 votes)
  • William Brandon:
    Is it a side effect of the properties discussed in this video that A^-1 = A^t if A's columns form an orthonormal set?
    (3 votes)
  • qazi.zain1:
    Isn't there something wrong with Mr. Khan's transformation? Here's my concern:
    I know the coordinates in terms of the basis e1, e2 (A = 9, -2). I have another orthonormal basis f1, f2. If I want to know the coordinates in terms of the basis f1, f2, wouldn't I do P*A, where P is a rotation matrix defined by (row 1: e1.f1 e2.f1) (row 2: e1.f2 e2.f2)? It seems the method Mr. Khan is using takes the transpose/inverse of this matrix, which is confusing me (. represents the dot product).

    Source: http://ocw.mit.edu/courses/aeronautics-and-astronautics/16-07-dynamics-fall-2009/lecture-notes/MIT16_07F09_Lec03.pdf
    (1 vote)
    • grzegorz:
      Everything is OK; you mixed up the A and C matrices (transformation versus change of basis).

      In this video: x_B = C^(-1) x, where C^(-1) = C^T (in the orthonormal case).
      C is the change of basis matrix, whose columns are the vectors of basis B, so C x_B = x.

      When you are talking about rotation, you mean the transformation matrix A.
      The relation between C and A: A = C D C^(-1), where D is the matrix of the transformation T with respect to basis B.

      When you transform (rotate, scale, shift) a point, you don't change its basis.
      In other words: a change of basis doesn't move a point, it just gives a different reference system; a transformation moves the point.
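      A small NumPy sketch of that distinction (the orthonormal basis is the one from the video; the transformation D is a made-up example):

          import numpy as np

          # Change of basis matrix C: columns are the orthonormal basis vectors of B
          C = np.array([[3/5, -4/5],
                        [4/5,  3/5]])

          # Orthonormal case: C^(-1) equals C^T, so x_B = (C^T)x
          print(np.allclose(np.linalg.inv(C), C.T))   # True

          # A transformation D given in B coordinates (here: stretch by 2 along v1)
          D = np.diag([2.0, 1.0])
          A = C @ D @ C.T          # the same transformation in standard coordinates

          x = np.array([9.0, -2.0])
          print(C.T @ x)           # change of basis: new coordinates, same point
          print(A @ x)             # transformation: actually moves the point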
      (3 votes)
  • Justin Lane:
    How could you describe the takeaway here in general terms?

    I came away with: "It is much easier to change bases this way. Just dot your vectors of an orthonormal set with a member of R2", but I don't know in what context you would have the vector in R2. Does this question make sense?

    We used the vector x = (9, -2) in the video, but where would we find this vector otherwise? If we can pick any random vector for x, what can be said of the resulting coordinate vector [x]_B?
    (2 votes)
  • A A:
    In the video he says the change of basis matrix C isn't always going to be invertible or square. I thought that, by definition, a change of basis matrix is invertible and square. What am I misunderstanding?
    (2 votes)
  • Nicholas.BA.Cousar:
    Suppose you had a basis B that was linearly independent, but its change of basis matrix C was not invertible, and let's also suppose that x is a member of the subspace that B spans. Couldn't you solve for [x]_B in C[x]_B = x by multiplying both sides of the equation by C^T and then by the inverse of (C^T)C? This would make the matrix on the left side of the equation invertible. So our solution should be
    [x]_B = [(C^T)C]^(-1) (C^T) x.

    A basis for a subspace is always linearly independent, so the product of the transpose of the change of basis matrix and the change of basis matrix itself, (C^T)C, will always be an invertible square matrix. Right?
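    Here is a quick NumPy check of that formula (with a made-up 2-dimensional subspace of R^3, just for illustration):

        import numpy as np

        # A non-square change of basis matrix: B spans a plane in R^3
        v1 = np.array([1.0, 1.0, 0.0])
        v2 = np.array([0.0, 1.0, 1.0])
        C = np.column_stack([v1, v2])        # 3x2, so C itself has no inverse

        x = 4 * v1 - 1 * v2                  # an x known to lie in span(B)

        # [x]_B = ((C^T)C)^(-1) (C^T) x; (C^T)C is 2x2 and invertible
        # because the columns of C are linearly independent
        x_B = np.linalg.inv(C.T @ C) @ C.T @ x
        print(x_B)                           # [ 4. -1.]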
    (1 vote)
  • Dopi Dipra:
    How do you calculate the coordinates of a point?
    (1 vote)
  • youknowwho202:
    Does Salman ever explain Parseval's theorem?
    (1 vote)
  • SeungJun Lee:
    What does [x] mean?
    (1 vote)
  • William Mahajan:
    I think you made a mistake at 13:40. It should be (-4/5)*(-2) and (3/5)*(-2), or am I wrong?
    (1 vote)

Video transcript

We know what an orthonormal basis is, but the next obvious question is, what are they good for? And one of the many answers to that question is that they make for good coordinate systems, or good coordinate bases. For example, take the standard basis in Rn. I could write it as e1, e2, and so on, but I'll actually write out the vectors. e1 is just a 1 followed by a bunch of 0's, and there are n entries in each of these vectors. e2 is a 0, then a 1, then a bunch of 0's. And you keep going all the way to en, which is a bunch of 0's followed by a 1.

The standard basis that we've been dealing with throughout this playlist is an orthonormal set, an orthonormal basis. Clearly the length of any of these guys is 1: if you take one of them dotted with itself, you get 1 times 1 plus a bunch of 0's times each other, so you get 1 squared, which is 1. And that's true of any of these guys. And clearly they're orthogonal: if you take the dot product of any of these guys with any of the other ones, you get a 1 times 0, a 0 times 1, and then a bunch of 0's, so you get 0. So they each have length 1, and they're all orthogonal. And clearly, this is a good coordinate system. But what about other orthonormal bases? Obviously this is one specific example; I need to show you that all orthonormal bases make for good coordinate systems.

So let's say I have some orthonormal set of vectors: v1, v2, all the way to vk. And it is an orthonormal basis for some subspace V. This is a k-dimensional subspace, because you have k vectors in your basis.

Now let's experiment with this a little bit. I'm claiming that the coordinate system with respect to this basis is good. But what does it mean to be good? I mean, the standard basis is good, but, you know, that's just because we use it and it seems to be easy to deal with. So let's see what I mean by good in this context.

If I say that some vector x is a member of V, that means that x can be represented as a linear combination of these characters up here. So x can be represented as some constant c1 times v1, plus some constant c2 times v2, plus the ith constant ci times vi, and, if you just keep going, all the way to the kth constant ck times vk. That's what being a member of the subspace means: the subspace is spanned by these guys, so x can be represented as a linear combination of them.

Now what happens if we take the dot product of both sides of this equation with vi? I get vi dot x is equal to-- we can just pull the constants out --c1 times vi dot v1, plus c2 times vi dot v2, plus, continuing on, ci times vi dot vi, and then you keep going, all the way to ck times vi dot vk. Now, this is an orthonormal set. That means that if I take two vectors in our basis that are different from each other and take their dot product, I get 0; they're orthogonal to each other. So vi and v1 are two different vectors in our set (assuming i isn't 1), so they're orthogonal, and this first term is going to be 0.
It's going to be c1 times 0, so it's going to be 0. This term is also going to be 0, assuming that i isn't 2; let's just assume that. This term over here, assuming that i isn't k, is also going to be equal to 0. So all of the terms are going to be 0, except for the one where this subscript is equal to that subscript. And what is v sub i dot v sub i? Well, orthonormal has two parts: the vectors are orthogonal to each other, and they're each normalized, they each have length 1. So v sub i dotted with v sub i is going to be equal to 1. So this whole equation has simplified to: v sub i-- which is the ith member of our basis set --dot x-- where x is just any member of the subspace --is equal to the only thing that's left over, which is 1 times ci. So it's just equal to ci.

Now why is this useful? You know, we were just experimenting around, and we got this nice little result here. Why is this useful in terms of having a coordinate system with respect to this basis? So let's remind ourselves what a coordinate system is here. If we want to represent the vector x, which is a member of our subspace, with coordinates that are with respect to this basis of the subspace-- right, a subspace can have many bases, but this is the basis that we're choosing --then we want to write x with respect to the basis B. What do we do? The coordinates are just going to be the coefficients on the different basis vectors. This is all a bit of review. It's going to be c1, c2, down to ci, and then all the way down to ck. You're going to have k terms, because this is a k-dimensional subspace.

Now normally this is not such an easy thing to figure out. We've seen it before: if you have x represented in the B coordinate system, then you can multiply it by the change of basis matrix, and you get regular x. But if you have regular x and you need to find the B representation, then, if C is invertible, you can multiply by C inverse. But that isn't always the case: C will not always be invertible, and if C isn't a square matrix, then that approach doesn't apply at all. So that's one way, if I give you x, to get the B representation of x. But if C isn't invertible, then you're just going to have to solve the equation: the change of basis matrix times the B representation of x equals x. You know, for an arbitrary basis, that can be pretty painful.

But what do we have here? We have a very simple solution for finding the different coordinates of x. We saw that ci is just the ith basis vector dotted with x. So c1 is going to be the first basis vector dotted with x, c2 is going to be my second basis vector dotted with x, and you go all the way down to ck, which is going to be my kth basis vector dotted with x.

And let me show you that this is actually easier. Let's do a concrete example, and I want to leave this result up here. Let's say that v1 is the vector with entries 3/5 and 4/5, and that v2 is equal to minus 4/5, 3/5. And let's say that the set B is comprised of just those two vectors, v1 and v2.
Now, I'm claiming, or I'm about to claim, that this is an orthonormal set. Let's just prove it to ourselves. What is the length of v1 squared? Well, that's just v1 dotted with itself. So that's 3/5 squared, which is 9/25, plus 4/5 squared, which is 16/25. That's 25/25, which is equal to 1. So this guy definitely has length 1. What is the length of v2 squared? Minus 4/5 squared is 16/25, and then 3/5 squared is 9/25. So, once again, the length squared is 1, and so the length is 1. So both of these guys definitely have length 1.

And now we just have to verify that they're orthogonal to each other. What is v1 dot v2? It's 3/5 times minus 4/5, which is minus 12/25, plus 4/5 times 3/5, which is plus 12/25. That's equal to 0. So these guys are definitely orthogonal to each other, and their lengths are 1, so this is definitely an orthonormal set. And that also tells us that they're linearly independent.

So let's say that my set B is the basis for some subspace V. Actually, we don't even have to say that: it's a basis for R2. And how do we know it's a basis for R2? I have two linearly independent vectors in my basis, and they span a two-dimensional space, R2, so this can be a basis for all of R2.

Now, given what we've seen already, let's pick some random member of R2. Let's say that x is equal to-- I'm just going to pick some random numbers --9 and minus 2. If we didn't know this was an orthonormal basis and we wanted to figure out x in B's coordinates, we would have to create the change of basis matrix: its columns are v1 and v2, so it has 3/5 and 4/5 in the first column, and minus 4/5 and 3/5 in the second. And we would say that times my B coordinate representation of x is going to be equal to my regular representation of x, my standard coordinates of x. And I would have to solve this 2 by 2 system; in the 2 by 2 case it's not so bad. But we have this neat tool here for orthonormal sets, or orthonormal bases.

So, instead of solving that equation, we can just say that x represented in B coordinates is going to be equal to v1 dot x for the first coordinate, and v2 dot x for the second. And I can do this because this is an orthonormal basis. And what is that equal to? x is 9, minus 2. If I dot that with v1, I get 9 times 3/5, which is 27/5, plus minus 2 times 4/5, which is minus 8/5. For the second coordinate, v2 dot x, I get 9 times minus 4/5, which is minus 36/5, plus minus 2 times 3/5, which is minus 6/5. So the B coordinate representation of x, just using this property of orthonormal bases, is: 27/5 minus 8/5 is 19/5, and minus 36/5 minus 6/5 is minus 42/5. Not a pretty answer, but we would have had this ugly answer either way we solved it. Hopefully you see that when we have an orthonormal basis, solving for the coordinates with respect to that basis becomes a lot easier. This is just an example in R2.
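Here is a quick NumPy sketch of that exact computation (the numbers come from the video; the code is an illustrative aside, not part of the original lesson):

    import numpy as np

    # The orthonormal basis from the video, as columns of the change of basis matrix
    C = np.array([[3/5, -4/5],
                  [4/5,  3/5]])
    v1, v2 = C[:, 0], C[:, 1]

    x = np.array([9.0, -2.0])

    # Orthonormal shortcut: each B coordinate is just a dot product
    x_B = np.array([v1 @ x, v2 @ x])
    print(x_B)                       # [ 3.8 -8.4], i.e., (19/5, -42/5)

    # Same answer as actually solving the 2 by 2 system
    print(np.linalg.solve(C, x))     # [ 3.8 -8.4]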
You can imagine how difficult it could be if you start dealing with R4 or R100. Then all of a sudden solving these systems isn't so easy, but taking dot products is always fairly straightforward. So earlier in this video, when I asked what orthonormal bases are good for, I said, you know, the standard basis is good, that these are good coordinate systems, and you've used it before. But I didn't really put a lot of context around what it meant to be good. Now we see one version of what they're good for: it's very easy to find coordinates in an orthonormal basis, or coordinates with respect to an orthonormal basis.
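To make that last point concrete, here is a minimal sketch in a high-dimensional space (the orthonormal basis is randomly generated via QR, my own construction rather than anything from the video):

    import numpy as np

    rng = np.random.default_rng(42)

    # QR factorization of a random matrix gives an orthonormal basis for R^100
    Q, _ = np.linalg.qr(rng.standard_normal((100, 100)))

    x = rng.standard_normal(100)     # an arbitrary vector in R^100

    # With an orthonormal basis, the coordinates are just 100 dot products: Q^T x
    x_B = Q.T @ x

    # Check against solving the full 100 by 100 system
    print(np.allclose(x_B, np.linalg.solve(Q, x)))   # True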