
Transpose of a vector

Transpose of a column vector. Matrix-matrix products using vectors. Created by Sal Khan.

  • Kyle Delaney
    I thought a vector was just a list of numbers that could freely be turned on its side without changing anything. I thought it was neither a row nor a column and so it doesn't matter whether you write it horizontally or vertically. I thought any vector could be thought of as either a row vector or a column vector. Was I wrong all along?
    (5 votes)
  • v.russinov
    At the end - shouldn't it be the transpose of x times (the transpose of A times y)? Because earlier it is written that way, but at the end - when Sal sums up what we have learned - it isn't the transpose, and I got a little bit confused...
    (5 votes)
  • Paul Bondin
    Am I right in thinking that v times w^T - as opposed to v^T times w - would yield an n x n matrix?
    (3 votes)
  • SteveSargentJr
    At , Sal implies that the "(Ax)y = x(A^Ty)" formula he derives is a useful result that he'll come back to in future videos. Could somebody please elaborate on this point for me? I understand the video but I'm having trouble seeing the "Bigger Picture", so to speak...
    (3 votes)
    • Robert
      Just to be clear, these are dot products in this identity, which I will rewrite here and onward as < A*x, y > = < x, At*y >, where < u, v > is the dot product of u and v, and At is the transpose of A.

      One way to view this is in terms of transformations between Rn and Rm. A is a linear transformation from Rn to Rm, and At is from Rm to Rn. < u, v > could be viewed as a way of measuring how "in line" the vectors u and v are with each other. So the LHS of the equality is a measure of how "in line" the image of x under A is with y, or how "in line" the product A*x is with y. And in like fashion the RHS is a measure of how "in line" the image of y under At is with x, or how "in line" the product At*y is with x. That these "measures of alignment" are equal is the useful bit I imagine will be leveraged in the discussions to come.

      In summary, A: Rn -> Rm, At: Rm -> Rn, x is in Rn, y is in Rm. x |-> x' = A*x, which is in Rm. y |-> y' = At*y, which is in Rn. < x', y > = < x, y' >, and so whether we send x to y's home (Rm) via A, or send y to x's home (Rn) via At, we end up with two vectors which are "equally aligned" in either case. Hope this helps!
      (1 vote)
  • kio
    Have to ask, how is [v1w1+v2w2+....vnwn] a 1x1 matrix?
    Isn't it a 1xn matrix? Shouldn't a 1x1 matrix be just one number?
    I'm quite confused.
    (1 vote)
    • Gobot
      A 1 x 1 matrix is like [7] as you say, so [1+2+3] is also a 1 x 1 matrix, because it is [6] (i.e. one of the elements was written as a sum, but is still one element) and so is [v1w1+v2w2...] assuming the vs and ws are all just numbers.
      A 1 x n, where n is 3, might be [1 2 3]... but that is not [1+2+3] or [6].
      (4 votes)
  • maktab1999
    Let a, b, and c be vectors in an n-dimensional vector space. The question is:
    what is the value of (a-b)Tc, where T denotes the transpose of (a-b)?
    Thanks
    (1 vote)
  • Mehrose Hingorja
    What is a difference between dot product and inner product?
    (1 vote)
  • bakula.darko4
    The way Sal set everything up seems like a very special case of associativity, where matrix A has to be m x n, x has to be n x 1, and y has to be m x 1; otherwise the matrix-vector product would be undefined. So to sum up, my question is: do the vectors and the matrix have to have these specific dimensions in order to use this property?
    (1 vote)
  • Andrea Menozzi
    v dot w = v^T times w
    Can we say that
    v dot w = v times w^T ??
    (1 vote)
  • kevinkarnani
    Does the transpose of a vector affect taking cross products at all?
    (1 vote)

Video transcript

Say I have a vector v that's a member of Rn. So it's got n components in it. So v1, v2, all the way down to vn. I've touched on the idea before, but now that we've seen what a transpose is, and we've taken transposes of matrices, there's no reason why we can't take the transpose of a vector, or a column vector in this case. So what would v transpose look like? Well if you think of this as an n by 1 matrix, which it is, it has n rows and one column. Then what are we going to get? We're going to have a 1 by n matrix when you take the transpose of it. And this one column is going to turn into the one row. So you're going to have it be equal to v1, v2, all the way to vn. And you might remember, we've already touched on this in a lot of matrices before. Let's say that's some matrix A. We called the row vectors of that matrix the transposes of some column vectors, a1 transpose, a2 transpose, all the way down to an transpose. In fact, not so many videos ago I had those row vectors, and I could have just called them the transposes of column vectors, just like that. And that would have been, in some ways, a better way to do it because we've defined all these operations around column vectors, so you could always refer to the transpose of the transpose and then do some operations on them. But anyway, I don't want to get too diverted. But let's think a little bit about what happens when you operate on this vector, or you take some operation of this vector with some other vectors. So let's say I have another vector here that's w, and it's also a member of Rn. So you have w1, w2, all the way down to wn. There are a couple of things that we're already, I think, reasonably familiar with. You could take the dot product of v and w. v dot w is equal to what? It is equal to v1 times w1, plus v2, w2, and you just keep going all the way to vn, wn. This is the definition of the dot product of two column vectors. Now, how can we relate that to maybe the transpose of v?
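The transpose operation described above is easy to see in code. A minimal NumPy sketch (the vector's values are arbitrary examples, not from the video):

```python
import numpy as np

# A column vector in R^4, stored as a 4 x 1 matrix: n rows, one column.
v = np.array([[1], [2], [3], [4]])
print(v.shape)      # (4, 1)

# Its transpose is a 1 x 4 row vector: the one column becomes the one row.
v_T = v.T
print(v_T.shape)    # (1, 4)
print(v_T)          # [[1 2 3 4]]
```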
Well, we could take the transpose of v-- let me write it this way-- what is-- if I did a matrix multiplication, so I did v1, v2, all the way to vn-- so this is v transpose, that's v transpose-- and I take the product of that with w. So I have w1, w2, all the way down to wn. Now, if I view these as just matrices-- this is w right here-- if I viewed these just as matrices, is this matrix-matrix product well-defined? Here, the first one I have is a 1 by n matrix-- I have one row and n columns. And here I have an n by 1 matrix. I have n rows and only one column. So this is well-defined. I have the same number of columns here as I have rows here. This is going to result in a 1 by 1 matrix. And what's it going to look like? It's going to be equal to v1 times w1-- let me write it like this-- v1, w1 plus v2, w2-- it's only going to have one entry. We could write it as just a 1 by 1 matrix: v1, w1 plus v2, w2, plus all the way to vn, wn. That's what it'll be. It'll just be a 1 by 1 matrix that looks like that. But you might notice that these two things are equivalent. So we can make the statement that v dot w, which is the same thing as w dot v, is equivalent to-- let me just write it once over here-- v dot w is the same thing as v transpose times w as just a matrix-matrix product. So if you view v as a matrix, take its transpose and then just take that matrix and take the product of that with w, it's the same thing as v dot w. So that's an interesting takeaway.
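The equivalence derived here, v dot w = v transpose times w, can be checked numerically. A small NumPy sketch, with arbitrary example vectors:

```python
import numpy as np

# Two arbitrary example vectors in R^3.
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

# The ordinary dot product: v1*w1 + v2*w2 + v3*w3.
dot = v @ w

# The same vectors treated as matrices: (1 x n) times (n x 1) is a
# well-defined product, and the result is a 1 x 1 matrix whose single
# entry equals the dot product.
v_T = v.reshape(1, -1)          # 1 x 3 row matrix, i.e. v transpose
w_col = w.reshape(-1, 1)        # 3 x 1 column matrix
product = v_T @ w_col           # shape (1, 1)

print(dot)                      # 32.0
print(product[0, 0])            # 32.0
```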
I guess you could argue it's somewhat obvious, and we've already been referring to this-- when I defined matrix-matrix products, I said you're taking the dot product of each row with each column, and you can see that it really is the dot product of the transpose of that row with each column, but you get the general idea. But let's see if we can build on this a little bit. Let's say I have some matrix A-- let me save our little outcome that I have there-- let me get a good color here-- let's say I have some matrix A and it's an m by n matrix. Now if I were to multiply that times a vector x, so I'm going to multiply it by some vector x-- and let's say that x is a member-- let me write it this way-- x is a member of Rn. So it has n elements. Or another way you could view it is, it's an n by 1 matrix. Now when I take the product of these, what am I going to get? Or another way to say it is, what is the vector Ax? When I take this product, I'm just going to get another vector, and what's it going to be? It's going to be an m by 1 vector. So we could say that Ax is a member of Rm. It's going to have m elements, right? If you said that Ax is equal to, I don't know, let's say it's equal to z, then z would have m elements. You would have z1, z2, all the way down to zm. And I know that because you have m rows in A-- you could say this is m by n, this is n by 1. The resulting product will be m by 1, or it'll be a vector that is a member of Rm-- it'll have exactly m elements. Now, if that's a vector in Rm, then the idea of dotting this with another member of Rm is well-defined. So let's say that I have another member of Rm. Let's say I have a vector y. Let's say y is also a member of Rm. The vector Ax, the vector that you get when you take this product, has m elements, and this has m elements. So the idea of taking their dot product is well-defined. Let me write that.
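The dimension bookkeeping in this paragraph can be confirmed with a quick sketch; the particular values of m and n are arbitrary choices for illustration:

```python
import numpy as np

# Arbitrary dimensions for the sketch: A is m x n, x is in R^n.
rng = np.random.default_rng(1)
m, n = 4, 3

A = rng.standard_normal((m, n))   # m x n matrix
x = rng.standard_normal((n, 1))   # n x 1 column vector, a member of R^n

# The product Ax is m x 1: a member of R^m, with exactly m elements.
z = A @ x
print(z.shape)                    # (4, 1)
```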
So you could take Ax, that's a vector, and now we are dotting it with this vector right here and we'll get a number. We just take each of their terms, multiply the corresponding terms, add them all up, and you get their dot product. But what is this equal to? We can just use this little, I guess you could call it a result, that we got earlier on in this video. Using this result, the dot product of two vectors is equal to the transpose of the first vector, viewed as a kind of matrix, times the second. So you can view this as Ax transpose. This is an m by 1, this is an m by 1. Now this is a 1 by m matrix, and now we can multiply a 1 by m matrix times y. Just like that. Now what is this thing equal to? We saw a while ago, I think it was two or three videos ago, that if we take the product of two matrices and take its transpose, that's equal to the reverse product of the transposes. You switch the order and then take the transposes. So this-- this purple part-- is going to be equal to x transpose times A transpose times y. And these are just matrix products. These aren't necessarily vector operations. We're treating all of these vectors as matrices. And of course, we're treating the matrix as a matrix. So what is this equal to? Well we know that matrix products are associative. Right now we have a parentheses around there, but we could just take another association. We could say that that is equal to x transpose times these two matrices times each other. This is a vector, but you can represent it as an m by 1 matrix. Times A transpose y. Just like that. Now let's think about what A transpose y is. Let's think about it. A transpose-- we have here A is m by n. What is A transpose? A transpose is going to be n by m, right? So this is an n by m. And then what is this vector y going to be? This is an m by 1.
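The property this step relies on, that the transpose of a product is the reverse product of the transposes, can be verified numerically on random matrices (the dimensions are arbitrary choices for the sketch):

```python
import numpy as np

# Random example matrices whose product is defined: (4 x 3) times (3 x 2).
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 2))

# (AB)^T = B^T A^T: transpose the product, or reverse the order and
# transpose each factor -- the results agree entry by entry.
lhs = (A @ B).T                  # shape (2, 4)
rhs = B.T @ A.T                  # shape (2, 4)
print(np.allclose(lhs, rhs))     # True
```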
So when you take this product, you're going to get an n by 1 matrix. Or you could imagine this as a vector that is a member of Rn. So this is a member of Rn. The entire product is going to result with a vector that's a member of Rn. And of course, it's well-defined because this is a 1 by n vector right there. Now we can go back to our identity. We have the transpose of some vector times some other vector-- they have the same, well I guess you could say this has as many horizontal entries as this guy has vertical entries, just like that. So what is this equal to? We just use that identity. This is equal to, the just regular x in this case, instead of x transpose we'll just have x. So this is equal to x dot-- remember, we just un-transpose it, I guess you can view it that way-- dot A transpose y. Which is a pretty neat outcome. We got this being equal to that. We can kind of change the associativity, although we have to essentially change the order a bit and take the transpose of our matrix. So let me re-write that just so that you can remember the outcome. So the two big outcomes of this video are-- I'll rewrite this one up here-- v dot w is equal to the matrix product of v transpose times w. And if I have some matrix-- you assume all of these matrix-vector products are well-defined and all the dot products are well-defined. If I have Ax dot y, some other vector y, this is equivalent to x dot-- you're essentially putting the A with the other vector-- A transpose times y. And this just might be a useful outcome, or a useful result, that we could build upon later in the linear algebra playlist.
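The video's second big outcome, (Ax) dot y = x dot (A transpose y), can also be checked on random data; m and n below are arbitrary example dimensions:

```python
import numpy as np

# Check the identity (Ax) . y = x . (A^T y) on random data.
rng = np.random.default_rng(0)
m, n = 4, 3

A = rng.standard_normal((m, n))   # A maps R^n to R^m
x = rng.standard_normal(n)        # x in R^n
y = rng.standard_normal(m)        # y in R^m

lhs = (A @ x) @ y                 # dot product taken in R^m
rhs = x @ (A.T @ y)               # dot product taken in R^n

print(np.isclose(lhs, rhs))       # True
```

Note that the two sides are dot products in different spaces: the left in R^m, the right in R^n, yet they always agree.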