
# Linear Maps - Vectors & Matrices

1. Hey,

Basically I'm having a lot of difficulty understanding linear maps.

I understand the conditions for a map to be a linear map, and the definitions of their properties (rank, kernel, nullity), but after that I don't really know what's going on! So...
What actually are linear maps - are they just transformations that you can apply to vectors?
How do you obtain the transformation matrix?
What do you actually apply this matrix to - the components of the vector or the basis vectors?
If anyone has the Cam Part IA course notes, the remark at the end of section 3.5 confuses me - it says the relations "go in opposite ways":
i.e. A(e_j) = f_i A_{ij} but x'_i = A_{ij} x_j (summing over repeated indices)?
How do linear maps link in to transformation of bases?

I really don't understand this topic so if you could add anything else outside these questions that would be appreciated. Thanks a lot!
2. Anyone? Or can anyone recommend any useful websites on this - I couldn't find any.
3. Linear maps are transformations (maps) that can be applied to vectors, but they have to preserve vector addition and scalar multiplication (linearity).
So if f : V → W is a function between two vector spaces over the same field F, then for all vectors x, y in V and all scalars a in F, the following must hold in order for f to be a linear map:

f(x + y) = f(x) + f(y) and f(ax) = a f(x)
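The two conditions above can be checked numerically. A minimal sketch (the shear matrix and the translation below are made-up examples, not taken from the thread): a map given by a matrix satisfies both conditions, while a translation fails them.

```python
import numpy as np

# A map f(v) = Av given by a matrix (here a shear, chosen for illustration).
# Matrix multiplication distributes over addition and commutes with scalars,
# so f is linear.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
f = lambda v: A @ v

# A translation g(v) = v + c is NOT linear (for c != 0).
c = np.array([1.0, 1.0])
g = lambda v: v + c

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
a = 5.0

# f preserves vector addition and scalar multiplication:
print(np.allclose(f(x + y), f(x) + f(y)))   # True
print(np.allclose(f(a * x), a * f(x)))      # True

# g fails both conditions (the extra +c term breaks them):
print(np.allclose(g(x + y), g(x) + g(y)))   # False
print(np.allclose(g(a * x), a * g(x)))      # False
```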

Once you fix bases, linear maps are in one-to-one correspondence with matrices: every m×n matrix with entries in a field F represents a linear map F^n → F^m. However, the matrix for a given linear map will depend on which basis you choose.

Given a linear map T : F^n → F^m and a basis e_1, ..., e_n of F^n, the matrix A representing the linear map with respect to the given basis is the one whose j-th column consists of the components, or coordinates, of T(e_j), the image of the j-th basis vector.

Applying the linear map to a vector is then just standard matrix multiplication: if A is the matrix representing the map and x is the coordinate vector, the image is Ax.
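Putting the last two points together in code (the rotation here is my own example, not one from the thread): build the matrix column by column from the images of the basis vectors, then apply it by multiplication.

```python
import numpy as np

# Example linear map T on R^2: rotation by 90 degrees anticlockwise.
# Its matrix has columns T(e1) and T(e2), the images of the basis vectors.
T_e1 = np.array([0.0, 1.0])    # T(e1) = (0, 1)
T_e2 = np.array([-1.0, 0.0])   # T(e2) = (-1, 0)
A = np.column_stack([T_e1, T_e2])

# Applying T to a vector is matrix multiplication:
v = np.array([3.0, 4.0])
print(A @ v)   # [-4.  3.]
```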

That's not everything you asked for, but it should be a good overview. If you want to learn more, Khan Academy might be useful - check out its linear algebra section.

4. Regarding the last part of your post, Cowley's notes for changes of basis are correct but can be a bit difficult to understand at first.

The idea is that a linear map takes a load of vectors in a space and does stuff to them, and it will do the same stuff to the same vectors no matter what we call the vectors. But because we can choose different bases for a vector space, we can call the vectors different things. The result is that when you write a matrix, the linear map it represents depends upon the basis with respect to which the matrix is written.

For simplicity I'll refer to R^3 here, but you can extend it to R^n fairly easily.

At A-level you always picked the "standard basis"; that is, we arbitrarily choose where i points and then choose j and k so that they're orthogonal to it and obey the right-hand rule. But we don't have to do this; as long as we have three linearly independent vectors, we can uniquely determine a linear map by seeing where they go.

A matrix tells you where your three basis vectors go when the linear map is applied. Suppose V and W are vector spaces with bases {e_1, e_2, e_3} and {f_1, f_2, f_3} respectively, and T : V → W is a linear map. Let A be the matrix for T with respect to these two bases. Then the first column of A tells you where e_1 goes, represented as a linear combination of f_1, f_2, f_3, and similarly for the 2nd and 3rd columns. So for example, if the first column of A is (A_{11}, A_{21}, A_{31}), then this tells you that T(e_1) = A_{11} f_1 + A_{21} f_2 + A_{31} f_3.

It's sometimes useful, however, to be able to change basis, and we do this using a change of basis matrix. First consider the simple case when V = W and you use the same basis vectors on both sides, so that you're going from the same space to the same space. So we're working with a vector space V with basis {e_1, e_2, e_3}. Suppose we have another basis for V which looks like {e_1', e_2', e_3'}. Because {e_1, e_2, e_3} is a basis, we can find scalars P_{ij} such that:

e_j' = P_{1j} e_1 + P_{2j} e_2 + P_{3j} e_3 (for j = 1, 2, 3)

We can represent this by a 3×3 matrix P, whose j-th column holds the coefficients of e_j'. Because this matrix is essentially just 'relabelling' the e_j' to become the e_j, rather than actually transforming any vectors, P represents the identity transformation from V with the basis {e_1', e_2', e_3'} to V with the basis {e_1, e_2, e_3}.

So suppose we want to represent T with respect to the {e_1', e_2', e_3'} basis. We know that T is represented by A w.r.t. the {e_1, e_2, e_3} basis, and we know how to turn e'-coordinates into e-coordinates (i.e. using P, and so P^{-1} turns e-coordinates into e'-coordinates). So let B be the matrix for T w.r.t. the {e_1', e_2', e_3'} basis. Then it makes sense that:

B = P^{-1} A P

That is (because we work from right to left), we take in an e'-coordinate vector, turn it into an e-coordinate vector, apply the matrix for T w.r.t. the {e_1, e_2, e_3} basis, and then turn the resulting vector back into e'-basis form.

But we already know what all three matrices in this composition are by the work we did above; so we can compute B = P^{-1} A P directly.
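The whole argument can be checked numerically. A sketch with made-up values (the particular shear A and basis P below are illustrative assumptions, not from Cowley's notes): B = P^{-1} A P really does act on e'-coordinates the way the right-to-left description says.

```python
import numpy as np

# Matrix of a map T w.r.t. the basis {e_1, e_2} (values chosen for
# illustration: a shear).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# A second basis {e_1', e_2'}: column j of P expresses e_j' in terms of
# the e_i, so P converts e'-coordinates into e-coordinates.
P = np.array([[1.0, 1.0],
              [1.0, -1.0]])

# Matrix of the same map w.r.t. the new basis:
B = np.linalg.inv(P) @ A @ P

# Check: pushing e'-coordinates c through B agrees with converting to
# e-coordinates (P), applying A, and converting back (P^{-1}).
c = np.array([2.0, 3.0])
rhs = np.linalg.inv(P) @ (A @ (P @ c))
print(np.allclose(B @ c, rhs))   # True
```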

That's really all there is to it! When you're going from a space with one basis into a different space with a different basis it's a little more complicated, and you get something of the form B = Q^{-1} A P, where P changes basis in the domain and Q changes basis in the codomain... but if you can understand this it might make Cowley's notes seem clearer, and then you'll be able to move on to the next bit.
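The two-basis case works the same way; a quick sketch with assumed values (the map R^2 → R^3 and both change-of-basis matrices below are invented for illustration):

```python
import numpy as np

# A map T : R^2 -> R^3 with matrix A w.r.t. the standard bases
# (values assumed for illustration).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# P changes basis in the domain R^2, Q in the codomain R^3 (columns
# express the new basis vectors in terms of the old ones).
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Q = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])

# Matrix of the same map w.r.t. the new bases on each side:
B = np.linalg.inv(Q) @ A @ P

# Check on one coordinate vector, as in the single-space case:
c = np.array([1.0, 2.0])
print(np.allclose(B @ c, np.linalg.inv(Q) @ (A @ (P @ c))))   # True
```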
5. Nice post Nuodai.

Just to add what I think:
I remember Cowley's notes being fairly algebra-heavy at some points. I think the best thing to do at this point is not to worry about the specifics of the algebra at the expense of forgetting the bigger picture - you need to have an "intuitive" feel for what's going on if you're going to understand and remember the subtler points of the algebra, in my opinion.
6. Thanks so much everyone

Updated: April 12, 2011