# Introduction to orthonormal bases

Looking at sets and bases that are orthonormal, that is, where all the vectors have length 1 and are orthogonal to each other. Created by Sal Khan.

## Want to join the conversation?

• What are the prerequisites for this lesson? How do I determine what other videos I need to watch in order to understand this one?

I have < 1 week (for a Quantum Computing course), it mentions specifically this and one other Linear Algebra topic (eigenvalues/vectors). I've been serially watching every video in the "Linear Algebra" section from the beginning, but there will not be enough time.

So, how do I determine which videos I can skip in order to reach this one and still be able to understand it?

• All these concepts are directly applied to electrons in atoms. In that sense, if I consider Vi to be the wave function of the i-th electron, is it correct to reason as follows:

Normalized vector Vi: the total probability of finding the electron is 1 (Vi.Vi = 1)
No two electrons can be in the same place (because Vi.Vj = 0)
Vi and Vj are linearly independent: one electron does not cross another electron's position.

Please give corrections and suggestions for further basic-level reading on this.

• If you have an orthonormal basis set u, then is their inner product <u|u> defined to be 1?

• I think you're confusing sets and their elements. An orthonormal basis is a set of vectors, whereas "u" is a vector. Say B = {v_1, ..., v_n} is an orthonormal basis for the vector space V, with some inner product < , >. Then <v_i, v_j> = d_ij, where d_ij = 0 if i is not equal to j and 1 if i = j; this is called the Kronecker delta. It says that if you take an element of the set B, such as v_1, and consider <v_1, v_1>, then this value must be 1. If the second subscript isn't 1, you will always get zero! The short answer is yes, but there was a slight conceptual mishap in your question.
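The Kronecker-delta property described above can be checked numerically. A minimal sketch with NumPy, using the standard basis of R^3 as the orthonormal set (any orthonormal basis would behave the same way):

```python
import numpy as np

# Standard basis of R^3 -- an orthonormal set.
B = [np.array([1.0, 0.0, 0.0]),
     np.array([0.0, 1.0, 0.0]),
     np.array([0.0, 0.0, 1.0])]

# <v_i, v_j> should equal the Kronecker delta d_ij:
# 1 when i == j, 0 otherwise.
for i, v_i in enumerate(B):
    for j, v_j in enumerate(B):
        expected = 1.0 if i == j else 0.0
        assert np.dot(v_i, v_j) == expected
```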
• Must a scalar multiple of an orthogonal matrix be orthogonal as well? Is this answered in another video?

• Do you mean that if "M" is an orthogonal matrix, is "kM" orthogonal? If so, let's check the definition. I would recommend trying some examples.

"kM" is orthogonal if all of its columns are unit vectors. But if "M" was orthogonal and we multiply a "k" into "M" somewhere it will multiply one of the columns by a scaler that is not 1 so that column will no longer be a unit vector.
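A quick numeric check of the argument above; this is a sketch, using a 2x2 rotation matrix as one convenient example of an orthogonal matrix:

```python
import numpy as np

theta = 0.3
# A rotation matrix is orthogonal: M^T M = I.
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(M.T @ M, np.eye(2))

# Scaling by k = 2 destroys orthogonality: (kM)^T (kM) = k^2 I, not I.
k = 2.0
kM = k * M
assert not np.allclose(kM.T @ kM, np.eye(2))
assert np.allclose(kM.T @ kM, k**2 * np.eye(2))
```

Note that k = -1 is the exception the answer alludes to: any k with |k| = 1 preserves the unit columns.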
• Is it called "Orthonormal bases" or "Orthonormal basis"?
It was "bases" in the title, but he said and wrote (as at ) "basis"
• For expressing the dot product of the vectors, shouldn't we put the first vector transposed?

• If you treat the vectors as 1-column matrices, then yes: in order to take the dot product you have to express your first vector as a 1-row matrix. But if you are using normal vector notation (as most of the video does), then you are not committed to the matrix representation of vectors, and each vector can be seen as either a 1-column matrix, a 1-row matrix, a tuple of numbers, or even an arrow in space.

In notation, there is no difference between:
```
    ⎡a_x⎤
a = ⎢a_y⎥
    ⎣a_z⎦

a = [a_x  a_y  a_z]
```
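The two notations giving the same number can be seen with NumPy, where the plain dot product of two 1-D arrays matches the matrix product of a row with a column (a sketch with an arbitrary pair of vectors):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Plain dot product of two vectors.
d1 = np.dot(a, b)

# Same value via the matrix view: a^T b, i.e. (1x3 row) @ (3x1 column).
d2 = (a.reshape(1, 3) @ b.reshape(3, 1))[0, 0]

assert d1 == d2 == 32.0
```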
• If you have a set of 30 vectors in R^2, how can they all be orthogonal to each other? It seems like you could have at most 2.
• What are the coordinates for the translation of a triangle given the matrix addition? Or, how can I solve the problem?
• If we state that Vi and Vj are orthogonal to each other, how can we say that they are linearly dependent? (I'm confused how we can say Vi = cVj if the two vectors are orthogonal.)

• We wanted to prove that Vi and Vj are linearly independent, and to do it we used proof by contradiction: make an initial assumption, find a contradiction, and then negate the assumption. So we first assumed that Vi and Vj are NOT linearly independent (that is, they are linearly dependent) by writing Vi = cVj, and then found a contradiction. That disproves the initial assumption, so Vi is not equal to cVj, and thus Vi and Vj are linearly independent.
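The contradiction in that proof can be written out in two lines; a sketch, assuming the vectors are unit length as they are in an orthonormal set:

```latex
% Assume, for contradiction, that v_i and v_j are linearly dependent:
% v_i = c\, v_j for some scalar c. Dot both sides with v_j:
v_i \cdot v_j = (c\, v_j) \cdot v_j = c\,(v_j \cdot v_j) = c \cdot 1 = c.
% But orthogonality says v_i \cdot v_j = 0, so c = 0 and hence v_i = 0,
% contradicting the fact that v_i is a unit vector.
% Therefore v_i and v_j must be linearly independent.
```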
• I don't understand what goes on here.
You wrote Vi.Vj = 0 (which is true for orthogonal or linearly independent vectors).

Then you supposed that Vi and Vj are linearly dependent and substituted Vi = cVj into that equation, where the result is only true when the vectors are linearly independent.
It will never come out to be true.
Where am I going wrong?