Defined matrix operations

Sal discusses the conditions of matrix dimensions for which addition or multiplication are defined. Created by Sal Khan.

Want to join the conversation?

  • Abhinav:
    What about matrix division?
    (12 votes)
    • Mr. Jones:
      We do not talk about matrix division. Rather, we multiply by inverse matrices. You can do the same thing with real numbers... In fact, my high school algebra II teacher always said that there was no such thing as division -- only multiplying by inverses.

      For example, instead of writing 6 / 3 = 2, you can write 6*(1/3) = 2. So my old teacher was right. You can do everything in mathematics without division.

      And this is what we do with matrices, because not all matrices have inverses, meaning you cannot "divide" by any matrix that you wish. You can only "divide" by a matrix with an inverse. So instead, we just multiply by those inverses, just like my silly example with whole numbers illustrates.
      (46 votes)
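
To make the reply above concrete, here is a minimal sketch in Python with NumPy (my choice of tools; the thread does not name any): "dividing" by an invertible matrix amounts to multiplying by its inverse, and a matrix without an inverse cannot be "divided" by at all.

    # Sketch: "division" by a matrix is really multiplication by its inverse (when it exists).
    import numpy as np

    A = np.array([[4.0, 7.0],
                  [2.0, 6.0]])
    B = np.array([[1.0, 2.0],
                  [3.0, 4.0]])

    # A has a nonzero determinant, so it is invertible and we can "divide" B by A
    # on the left by multiplying with A's inverse.
    A_inv = np.linalg.inv(A)
    X = A_inv @ B                     # plays the role of "B divided by A"
    print(np.allclose(A @ X, B))      # True: multiplying back by A recovers B
    # (In practice, np.linalg.solve(A, B) computes the same X more stably.)

    # A singular matrix has no inverse, so "dividing" by it is simply undefined.
    S = np.array([[1.0, 2.0],
                  [2.0, 4.0]])        # second row is twice the first
    try:
        np.linalg.inv(S)
    except np.linalg.LinAlgError:
        print("S has no inverse, so we cannot 'divide' by S")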
  • Abraham George:
    Who thought of matrices first?
    (10 votes)
  • Gediminas Gineitis:
    OK, so as far as I understand, one can multiply 2 matrices if:
    a) they both have the same dimensions (e.g., [2x3] and [2x3], [1x2] and [1x2] and so on), OR
    b) the number of columns of the first matrix is equal to the number of rows of the second,
    RIGHT?
    If so, then how does one multiply, e.g., the following matrices: [1x3] and [1x3], or [1x2] and [1x2]?
    (5 votes)
    • Natalie:
      Not necessarily. You had part b right, but you can't always multiply two matrices with the same dimensions. Take your example [1x3] * [1x3]. They are both 1x3 matrices. Since the number of columns of the first matrix (3) isn't equal to the number of rows of the second (1), this operation is undefined. :D Hope this helps!
      (11 votes)
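
A quick way to see Natalie's point is to let a library check the dimensions. This is a minimal Python/NumPy sketch of my own, not something from the video:

    # Sketch: multiplication needs (columns of the first) == (rows of the second).
    import numpy as np

    a = np.array([[1, 2, 3]])      # 1x3
    b = np.array([[4, 5, 6]])      # 1x3

    try:
        a @ b                      # 1x3 times 1x3: inner dimensions are 3 and 1
    except ValueError as e:
        print("undefined:", e)

    print(a @ b.T)                 # 1x3 times 3x1 is defined; the result is 1x1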
  • Arya:
    Well, about the addition part, I read on Wikipedia that a matrix of m rows and n columns, when added to a matrix of p rows and q columns, will form a matrix of m+p rows and n+q columns. It is also written that this is defined.
    (2 votes)
    • KrisSKing:
      Something is wrong here. In matrix addition, you can only add matrices that have the same dimensions, and the resulting matrix has the same dimensions. So a matrix with m rows and n columns can only be added to a matrix that also has m rows and n columns. The result of such an addition is a matrix with m rows and n columns.
      (5 votes)
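
Here is a small Python/NumPy sketch of the addition rule above (my own illustration; the addition_defined helper is hypothetical). One caveat: NumPy silently broadcasts some mismatched shapes, so a strict check compares the shapes directly rather than relying on an error.

    # Sketch: matrix addition is defined only for matrices with identical dimensions.
    import numpy as np

    def addition_defined(M, N):
        """Strict check: same number of rows and columns (hypothetical helper)."""
        return M.shape == N.shape

    M = np.zeros((2, 3))   # 2 rows, 3 columns
    N = np.ones((2, 3))    # 2 rows, 3 columns
    P = np.ones((3, 2))    # 3 rows, 2 columns

    print(addition_defined(M, N))   # True: the sum is another 2x3 matrix
    print(addition_defined(M, P))   # False: 2x3 plus 3x2 is undefined

    try:
        M + P                       # NumPy also refuses this particular mismatch
    except ValueError as e:
        print("undefined:", e)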
  • Ahmed Nasret:
    Since order matters in matrix multiplication, does it matter because one of the matrices is considered the "processor", which processes the other one, while we might call the other one the "input"? If this makes sense, let me know which matrix we would call the processor, A or E. If you consider A to be the one that processes E, then I am fine, but if you consider E to be the processor, I am actually in trouble.
    What is my trouble? Let me ask you for help.
    I can't be satisfied with the rule "multiplication is defined as long as the middle two numbers are the same"; I'm not used to relying on such shortcuts at Khan Academy. What satisfies me and gives me confidence is understanding, and what I understand about this matter is the following:
    The processing matrix expands, transforms, or distributes each row element of the "input matrix" into a column, and it does not matter how many rows are in these new columns; it is a new distribution. What matters most is that the processor matrix should have a number of columns equal to the number of rows in the input matrix. This gives enough "processing rooms" for each input; no more rooms are allowed and no fewer rooms are allowed, since either case causes confusion and the operation is not defined. But as long as the processing matrix has exactly as many columns as the input matrix has rows, the operation is defined.
    And that is why I care about which matrix, A or E, you consider the "processor" matrix. If A is the processor, then AE is not defined and I am fine with that, because A has more processing room than required, which is not acceptable. But if you consider E to be the processor, that raises a big question mark, because E actually could process A: it has 2 columns to process each row element of A.
    (2 votes)
    • kubleeka:
      There are several ways to think of matrices, but I'm not convinced that thinking of one as 'processing' or 'acting on' the other is a very useful one.

      You can think of matrices as transformations of space. Say we have a 2x3 matrix with 2 rows and 3 columns. The fact that there are 3 columns means the domain of the transformation is ℝ³. We interpret the matrix as a list of 3 column vectors, each of which is 2-dimensional. The matrix is sending <1, 0, 0> to the left vector, <0, 1, 0> to the middle vector, and <0, 0, 1> to the right vector. Because they're being mapped to 2D vectors, the range of the transformation is ℝ².

      This is why we need the dimensions of the matrices to match up in order to multiply them; matrix multiplication is just function composition. If matrix A is a function from ℝ³ to ℝ², then whatever function (matrix) we apply after applying A had better have a domain of ℝ², or else nothing is well-defined.
      (4 votes)
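
A short Python/NumPy sketch of this transformation picture, with numbers of my own choosing: a 2x3 matrix maps ℝ³ to ℝ², its columns are the images of the standard basis vectors, and composing maps only works when the inner dimensions agree.

    # Sketch: a 2x3 matrix is a map from R^3 to R^2; its columns are the images
    # of the standard basis vectors, and multiplication is composition of maps.
    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])          # 2x3: maps R^3 -> R^2

    e1, e2, e3 = np.eye(3)             # standard basis vectors of R^3
    print(A @ e1)                      # [1 4] -- the first column of A
    print(A @ e2)                      # [2 5] -- the second column
    print(A @ e3)                      # [3 6] -- the third column

    # To apply another map after A, that map must accept 2-dimensional input,
    # i.e. it must have exactly 2 columns.
    B = np.array([[1, 0],
                  [0, 1],
                  [1, 1]])             # 3x2: maps R^2 -> R^3
    print((B @ A).shape)               # (3, 3): first apply A, then B

    try:
        B @ B                          # B outputs vectors in R^3, but only accepts R^2
    except ValueError as e:
        print("undefined:", e)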
  • ThatRaisinTho (Millionare):
    How can EA be definable, but not AE? I know the rule - the number of rows of the first matrix has to be the same as the number of columns of the second matrix - but why does that matter?
    (2 votes)
    • emilyclarachu:
      Actually, it's the other way around -- the number of columns of the first matrix has to be the same as the number of rows of the second matrix. The way we have defined matrix multiplication means that each value in the resulting matrix is determined by the dot product of a row from the first matrix and a column from the second matrix. For example, this product [A]*[B]

      [0 1] * [1 1]
              [2 0]

      would have 1 row and 2 columns.

      The first value of AB would be 0*1 + 1*2 -- the dot product of two vectors, A's first row and B's first column. Therefore, we can see that a dot product requires the same number of values in the multiplicand and the multiplier, or the same number of columns in the first matrix and rows in the second matrix.

      If, for example, the second matrix was [1 1], with only one row, AB could not be defined, because there is no way to multiply each value of [0 1] with each value of [1]. 0 (in A) could multiply 1 (in B), but the remaining 1 in A would not have a corresponding multiplier (0*1 + 1*??). The number of columns in A has to be the same as the number of rows in B simply because of the rules of matrix multiplication -- there is no way to create a definable product if these two numbers are different. Hope this helps!
      (1 vote)
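
The dot-product description above can be spelled out numerically. This is a minimal Python/NumPy sketch of my own, using the same matrices as the example:

    # Sketch: each entry of AB is the dot product of a row of A with a column of B.
    import numpy as np

    A = np.array([[0, 1]])         # 1x2
    B = np.array([[1, 1],
                  [2, 0]])         # 2x2

    # Entry (0, 0) of AB: dot product of A's first row with B's first column.
    print(np.dot(A[0, :], B[:, 0]))   # 0*1 + 1*2 = 2
    # Entry (0, 1): dot product of A's first row with B's second column.
    print(np.dot(A[0, :], B[:, 1]))   # 0*1 + 1*0 = 0

    print(A @ B)                      # [[2 0]] -- a 1x2 result, as described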
  • ralph117:
    Does this mean that the commutative property of multiplication does not hold for matrices?
    (1 vote)
  • ★✮✶Arsh Pervez✶✮★:
    How do you find out if subtraction of matrices is defined?
    (1 vote)
  • Enn:
    Is exponentiation of matrices a defined operation? Also, can matrices themselves be exponents of a number?
    (1 vote)
    • ArDeeJ:
      Matrix exponentials are defined (for square matrices) using the series expansion of the exponential:
      e^X = I + X + X^2/2! + X^3/3! + X^4/4! + ...

      Some properties are different: for example, e^A e^B = e^(A + B) is guaranteed to hold when AB = BA, but it can fail when A and B do not commute.

      I'd imagine X^Y would be defined as e^(Y log X). Don't quote me on that, though.
      (2 votes)
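
A rough Python sketch of the series above, written by way of illustration only: it truncates the series after a fixed number of terms and compares the result with SciPy's expm. Truncating like this is fine for a small, well-behaved matrix but is not how matrix exponentials are computed in practice.

    # Sketch: approximate e^X for a square matrix X by truncating the series
    #   e^X = I + X + X^2/2! + X^3/3! + ...
    import numpy as np
    from scipy.linalg import expm   # reference implementation

    def exp_series(X, terms=30):
        """Truncated power series for the matrix exponential (illustrative only)."""
        result = np.eye(X.shape[0])
        term = np.eye(X.shape[0])
        for k in range(1, terms):
            term = term @ X / k      # term is now X^k / k!
            result = result + term
        return result

    X = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])
    print(np.allclose(exp_series(X), expm(X)))   # True for this small example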
  • Abrar Farhan Zaman:
    Can we transpose matrix E = [ -1 2 ] into a column matrix with the -1 on top and the 2 below it?
    (1 vote)

Video transcript

So we have matrix D and matrix B, and they ask us: is DB defined? Is the product D times B defined? So D times B is going to be defined if-- let me make this very clear. This is how I think about it. So let me copy and paste this so I can do this on my scratch pad. So to answer that question, get out the scratch pad right over here. Let me paste the question right here. So let's think about these two matrices. You first have matrix D. I'll do this a nice bold D here. And it has three rows and three columns. So it is a 3 by 3 matrix. And then you want to multiply that times matrix B. Matrix B is a 2 by 2 matrix. The only way that we know to define matrix multiplication is if these middle two numbers are the same-- if the number of columns D has is equal to the number of rows B has. Now in this case, they clearly do not equal each other, so matrix multiplication is not defined here. So let's go back there and say no. No, DB is not defined.

Let's do a few more of these examples. So then we have a 2 by 1-- you could view this as a 2 by 1 matrix, or you could view this as a column vector. This is another 2 by 1 matrix, or a column vector. Is C plus B defined? Well, matrix addition is defined if both matrices have the exact same dimensions, and these two matrices do have the exact same dimensions. And the reason why is because with matrix addition, you just add every corresponding term. So in the sum, it'll actually be 4 plus 0 on top, over negative 2 plus 0, which is still just going to be the same thing as this matrix up here. But what they're asking is: is this defined? Absolutely, these both are 2 by 1 matrices, so yes, it is defined.

Let's do one more. So once again, they're asking us: is the product A times E defined? So here you have a 2 by 2 matrix. Let me copy and paste this just so we can make sure that we know what we're talking about. So get my scratch pad out. So this top matrix right over here, matrix A, is a 2 by 2 matrix. And matrix E, so we're going to multiply it times matrix E, which has one row and two columns. So in this scenario once again, the number of rows-- sorry-- the number of columns matrix A has is two, and the number of rows matrix E has is one, so this will not be defined. These two things have to be the same for them to be defined.

Now, what is interesting is, if you did it the other way around, if you took E times A, let's check if this would have been defined. Matrix E is 1 by 2, one row and two columns. Matrix A is 2 by 2, two rows and two columns, and so this would have been defined. Matrix E has two columns, which is exactly the same number of rows that matrix A has. And this really hits the point home that the order matters when you multiply matrices. But for the sake of this question, is AE defined? No, it isn't. And so we can check our answer: no, it isn't.
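
To tie the transcript's examples together, here is a small Python/NumPy sketch. The shapes come from the video; the helper names product_defined and sum_defined are my own.

    # Sketch: checking which operations from the video are defined, by shape alone.
    import numpy as np

    def product_defined(left, right):
        """AB is defined when the number of columns of A equals the number of rows of B."""
        return left.shape[1] == right.shape[0]

    def sum_defined(left, right):
        """A + B is defined when both matrices have exactly the same dimensions."""
        return left.shape == right.shape

    D = np.zeros((3, 3))       # 3x3
    B = np.zeros((2, 2))       # 2x2
    C = np.zeros((2, 1))       # 2x1 column vector
    B_col = np.zeros((2, 1))   # the 2x1 "B" from the addition question
    A = np.zeros((2, 2))       # 2x2
    E = np.zeros((1, 2))       # 1x2

    print(product_defined(D, B))      # False: DB is not defined (3 columns vs 2 rows)
    print(sum_defined(C, B_col))      # True: both are 2x1, so C + B is defined
    print(product_defined(A, E))      # False: AE is not defined (2 columns vs 1 row)
    print(product_defined(E, A))      # True: EA is defined (2 columns vs 2 rows)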