
# Projections onto subspaces with orthonormal bases

Projections onto subspaces with orthonormal bases. Created by Sal Khan.

## Want to join the conversation?

• Isn't A^T * A = I as well? So why doesn't projection of x onto v = x?
• A^T * A is I, but A * A^T is not necessarily. This is because we are no longer dotting the column vectors, which happen to be orthonormal, with themselves. We are instead dotting the rows with themselves, and the row vectors, unlike the column vectors, might not be orthonormal.
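A quick numerical check makes this asymmetry concrete. This is a sketch in NumPy with an arbitrarily chosen A (not from the video) whose columns are orthonormal but whose rows are not:

```python
import numpy as np

# A has orthonormal COLUMNS spanning a 2-D subspace of R^3
# (an arbitrary example for illustration).
A = np.array([[1/np.sqrt(2), 0.0],
              [1/np.sqrt(2), 0.0],
              [0.0,          1.0]])

AtA = A.T @ A   # dots the orthonormal columns with each other
AAt = A @ A.T   # dots the rows with each other

print(np.allclose(AtA, np.eye(2)))   # True:  A^T A = I (k x k)
print(np.allclose(AAt, np.eye(3)))   # False: A A^T is the projection matrix, not I
```

Here A A^T is the n x n projection matrix onto the subspace, which only equals the identity when k = n.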
• This and the previous video begin telling reasons why orthonormal bases make things easier to work with.

Perhaps this was mentioned or implied, but since the standard basis is orthonormal, do the benefits outlined in this and the previous video apply to all problems where we are using the standard basis?

For example, could we just use Proj_V(x) = AA^Tx when we are working with the standard basis?
• What happens if the Gram-Schmidt procedure is applied to a list of vectors that is not linearly independent?
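The video doesn't cover this, but a minimal NumPy sketch (with hypothetical example vectors) shows what goes wrong: when an input vector depends linearly on the earlier ones, subtracting its projections leaves the zero vector, which cannot be normalized.

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-10):
    """Gram-Schmidt on a list of vectors.

    A linearly dependent input vector reduces to (numerically) zero
    after its projections are subtracted; here we simply skip it."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)
        if np.linalg.norm(w) > tol:          # independent: normalize and keep
            basis.append(w / np.linalg.norm(w))
        # else: dependent vector, w is ~0 and contributes nothing
    return basis

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0])
v3 = v1 + v2                                  # deliberately dependent
result = gram_schmidt([v1, v2, v3])
print(len(result))                            # 2: v3 added no new direction
```

So the procedure still yields an orthonormal basis, but only for the span of the original list; a strict version that always divides by the norm would fail with a division by (near) zero.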
• In this video, Sal refers to A = [v1, ..., vk] as the matrix whose columns are the orthonormal basis vectors of the subspace V of R^n. Based on this assumption, Sal arrives at the conclusion that the projection onto V of a vector x in R^n can be determined using the equation AA^Tx. But under the same assumption, shouldn't AA^T evaluate to the identity matrix, in which case the projection of x onto V would equal x itself? Is there a basic flaw in the argument I have presented here?
• Did you just say a matrix multiplied by its transpose is a scalar multiplied by the identity matrix? The notation is confusing as hell.
(1 vote)
• At the part about the subspace W and its orthogonal complement: does the upside-down T represent the orthogonal complement?
(1 vote)
• Yes, that upside-down T does indicate the orthogonal complement of a subspace. So if V is a subspace, V^⊥ would be the orthogonal complement of V, and we sometimes read it as "V perp," where 'perp' is short for "perpendicular."
• I have an observational question.
Sal writes proj_V(x) = (v_1 · x)v_1 + ... + (v_k · x)v_k.
My question is: in the previous video, we said that the (v_i · x) entries are just the coordinates of the vector x with respect to B, where v_1, v_2, ..., v_k are the basis vectors of B. Does this mean that taking proj_V(x) = (v_1 · x)v_1 + ... + (v_k · x)v_k is the same as taking [x]_B (x with respect to B) times the basis vectors of the subspace V (the V from this video)? That is,

proj_V(x) = [x]_B applied to (v_1, v_2, ..., v_k); in words, the projection of x onto the subspace V equals x written with respect to the basis of our subspace V, times the basis vectors of our subspace V.

I hope that wasn't too convoluted.
(1 vote)
• Unfortunately since x is outside of the subspace V, it cannot be represented solely in terms of coordinates with respect to the basis vectors of V. In other words, you cannot rewrite x as [x]_B.

You can, however, write the projection of x onto V in V's coordinate system, since the projection lies in the subspace V. What you're calling [x]_B would be this projection written in basis B. You could then of course convert this projection into standard basis by multiplying B times this "[x]_B".
• I would love to see how to turn an equation of a plane into an orthonormal basis.
(1 vote)
• An orthonormal basis is just a set of vectors that are mutually orthogonal and normalized (each with length 1), and an equation of a plane in R3, ax + by + cz = d, gives you all the information you need for one. In this case, dealing with a plane in R3, all you need are two orthogonal vectors. It doesn't matter which vectors they are, as long as v1 and v2 are orthogonal to each other and lie on the same plane. Pick any x and y and solve for z in the equation of the plane twice with different numbers, and you get two points on the plane. The difference between them is the first vector in your orthonormal basis, v1. Finding the second is a bit trickier, but you are already given the normal vector to the plane, n = <a, b, c>, and v2, by definition, is orthogonal to both v1 and n. So, to find v2, all you have to do is take the cross product of v1 and n: v2 = v1 × n. Now all you have to do is normalize v1 and v2 and you have your orthonormal basis: V = <v1/||v1||, (v1 × n)/||v1 × n||>.
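The recipe above can be sketched in NumPy. This assumes a plane through the origin (so it is actually a subspace); the plane x + 2y - 2z = 0 and the two chosen points are made up for illustration:

```python
import numpy as np

# Plane through the origin: x + 2y - 2z = 0, with normal n = <a, b, c>.
n = np.array([1.0, 2.0, -2.0])

# Two points on the plane: pick x and y, solve for z, twice.
p1 = np.array([2.0, 0.0, 1.0])          # x=2, y=0  ->  z = 1
p2 = np.array([0.0, 1.0, 1.0])          # x=0, y=1  ->  z = 1

v1 = p1 - p2                            # first direction lying in the plane
v2 = np.cross(v1, n)                    # orthogonal to both v1 and n

u1 = v1 / np.linalg.norm(v1)            # normalize both vectors
u2 = v2 / np.linalg.norm(v2)

print(np.dot(u1, u2))                   # ~0: u1 and u2 are orthogonal
print(np.dot(u1, n), np.dot(u2, n))     # ~0, ~0: both lie in the plane
```

If the plane does not pass through the origin (d ≠ 0), it is not a subspace, but the same construction still gives an orthonormal basis for the plane's direction vectors.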
• Just would like to clarify one more point. If A has columns v1, v2, ..., vk, shouldn't A^T be the matrix whose rows are v1, v2, ..., vk? I see that in this video Sal instead writes A^T with rows v1^T, v2^T, ..., vk^T.
(1 vote)
• I can't even tell what you're talking about with your notation.
(1 vote)
• If A is a matrix whose columns form an orthonormal basis of a subspace, then the projection of a vector x onto that subspace becomes [AA^T]x.
Therefore [AA^T]x = sum{ (x · u_i)u_i } for i = 1, 2, ..., k.
Is this correct?
(1 vote)
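Yes, those two expressions agree; a quick NumPy check (with an arbitrarily chosen orthonormal basis and vector, not from the video) illustrates it:

```python
import numpy as np

# Orthonormal basis of a 2-D subspace of R^3 (arbitrary example).
u1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
u2 = np.array([0.0, 0.0, 1.0])
A = np.column_stack([u1, u2])           # u1, u2 as columns

x = np.array([3.0, 1.0, 2.0])

proj_matrix = A @ A.T @ x                            # [A A^T] x
proj_sum = np.dot(x, u1) * u1 + np.dot(x, u2) * u2   # sum (x . u_i) u_i

print(np.allclose(proj_matrix, proj_sum))   # True
```

The equivalence is just matrix multiplication written out: A^T x collects the coefficients (x · u_i), and multiplying by A rebuilds the linear combination of the u_i.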
• Wait, so A*A^T isn't the identity even if A^T*A is?
(1 vote)