    • Thread Starter


    Basically having a lot of difficulty understanding linear maps

    I understand the conditions for which a map is a linear map and the definitions of their properties (rank, kernel, nullity) but after that I don't really know what's going on! So...
    What actually are linear maps - are they just transformations that you can apply to vectors?
    How do you obtain the transformation matrix?
    What do you actually apply this matrix to - the components of the vector or the basis vector?
    If anyone has the Cam Part IA course notes, the remark at the end of section 3.5 confuses me - it says the relations "go in opposite ways":
    i.e. A(e_j) = f_i A_{ij} and x'_i = A_{ij} x_j ?
    How do linear maps link in to transformation of bases?

    I really don't understand this topic so if you could add anything else outside these questions that would be appreciated. Thanks a lot!
    • Thread Starter

    Anyone? Or can anyone recommend any useful websites on this - I couldn't find any.

    Linear maps are transformations (maps) that can be applied to vectors, but they have to preserve vector addition and scalar multiplication (linearity).
    So if f is a function between two vector spaces (over the same field), then for all vectors x,y in the domain and scalars a in the field, the following must hold in order for f to be a linear map:

    f(x+y) = f(x) + f(y)
    f(ax) = af(x)
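    To make this concrete, here's a small Python sketch (my own illustration, not from the thread) that spot-checks both conditions for f(x, y) = (2x + y, -x + 4y), which is linear, against g(x, y) = (x + 1, y), a translation, which is not:

```python
# Spot-check the two linearity conditions on sample inputs.
# f is linear; g (a translation) violates f(x+y) = f(x) + f(y).

def f(v):
    x, y = v
    return (2*x + y, -x + 4*y)   # a linear map on R^2

def g(v):
    x, y = v
    return (x + 1, y)            # NOT linear: shifts the first component

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(a, v):
    return (a*v[0], a*v[1])

def is_linear_on_samples(h, samples, scalars):
    """Check additivity and homogeneity on some sample inputs.
    Passing doesn't *prove* linearity, but failing disproves it."""
    for u in samples:
        for v in samples:
            if h(add(u, v)) != add(h(u), h(v)):
                return False
        for a in scalars:
            if h(scale(a, u)) != scale(a, h(u)):
                return False
    return True

samples = [(1, 1), (2, -3), (0, 0), (-1, 4)]
scalars = [0, 1, -2, 3]

print(is_linear_on_samples(f, samples, scalars))  # True
print(is_linear_on_samples(g, samples, scalars))  # False
```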

    Once you fix a basis, linear maps from \mathbb{F}^n to \mathbb{F}^n (where \mathbb{F} is a field) are in one-to-one correspondence with n\times n matrices over \mathbb{F}. However, the matrix representing a given linear map will depend on which basis you choose.

    Given a linear map f: V \rightarrow W, a basis \{v_1, \ldots ,v_n\} of V and a basis of W, the matrix representing the linear map with respect to these bases is \begin{pmatrix} f(v_1) & \ldots & f(v_n) \end{pmatrix}, i.e. the j^{th} column of the matrix consists of the components, or coordinates, of the image of the j^{th} basis vector, written in the basis of W.

    Applying the linear map to a vector is just standard matrix multiplication. For example, applying the transformation represented by the matrix \begin{pmatrix} 2 & 1 \\ -1 & 4\end{pmatrix} to the vector \begin{pmatrix} 1 \\ 1 \end{pmatrix} gives:

    \begin{pmatrix} 2 & 1 \\ -1 & 4\end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 3 \end{pmatrix}
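    As a sanity check, here's a short Python sketch (my own, using the same 2x2 example) that builds the matrix column-by-column from the images of the standard basis vectors and then applies it to (1, 1) by ordinary matrix-vector multiplication:

```python
# Build the matrix of f(x, y) = (2x + y, -x + 4y) from the images of the
# standard basis vectors, then apply it to a vector.

def f(v):
    x, y = v
    return (2*x + y, -x + 4*y)

e1, e2 = (1, 0), (0, 1)

# The j-th COLUMN of the matrix is f applied to the j-th basis vector.
col1, col2 = f(e1), f(e2)          # (2, -1) and (1, 4)
A = [[col1[0], col2[0]],
     [col1[1], col2[1]]]           # [[2, 1], [-1, 4]]

def matvec(M, v):
    return tuple(sum(M[i][j]*v[j] for j in range(len(v))) for i in range(len(M)))

print(A)                  # [[2, 1], [-1, 4]]
print(matvec(A, (1, 1)))  # (3, 3), matching the worked example above
```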

    That's not everything you asked for, but it should be a good overview. If you want to learn more, Khan Academy might be useful — check out its linear algebra section.

    • PS Helper
    Regarding the last part of your post, Cowley's notes for changes of basis are correct but can be a bit difficult to understand at first.

    The idea is that a linear map takes a load of vectors in a space and does stuff to them, and it will do the same stuff to the same vectors no matter what we call the vectors. But because we can choose different bases for a vector space, we can call the vectors different things. The result is that when you write a matrix, the linear map it represents depends upon the basis with respect to which the matrix is written.

    For simplicity I'll refer to \mathbb{R}^3 here, but you can extend it to \mathbb{R}^n fairly easily.

    At A-level you always picked the "standard basis"; that is, we arbitrarily choose where e_1 = (1,0,0) points and then choose e_2, e_3 so that they're orthogonal to it and obey the right-hand rule. But we don't have to do this; as long as we have three linearly independent vectors, we can uniquely determine a linear map by seeing where they go.

    A matrix tells you where your three basis vectors go when the linear map is applied. Suppose V and W are vector spaces with bases \{ v_1, v_2, v_3 \} and \{ w_1, w_2, w_3 \} respectively, and \alpha : V \to W is a linear map. Let A represent the matrix for \alpha with respect to these two bases. Then the first column of A tells you where v_1 goes to, represented as a linear combination of the w_i basis vectors, and similarly for the 2nd and 3rd columns. So for example if A = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 3 & 1 & 1 \end{pmatrix} then this tells you that \alpha(v_1) = w_1 + 2w_2 + 3w_3, \alpha(v_2) = w_2 + w_3 and \alpha(v_3) = w_3.
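    Here's a minimal Python sketch of reading off the columns of that example matrix (representing w-coordinates as tuples is my own framing):

```python
# The j-th column of A lists the w-coordinates of alpha(v_j).
A = [[1, 0, 0],
     [2, 1, 0],
     [3, 1, 1]]

def column(M, j):
    return tuple(row[j] for row in M)

print(column(A, 0))  # (1, 2, 3): alpha(v_1) = w_1 + 2*w_2 + 3*w_3
print(column(A, 1))  # (0, 1, 1): alpha(v_2) = w_2 + w_3
print(column(A, 2))  # (0, 0, 1): alpha(v_3) = w_3
```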

    It's sometimes useful, however, to be able to change basis, and we do this using a change of basis matrix. First consider the simple case when V=W and v_i=w_i, so that you're going from the same space to the same space, and you use the same basis vectors on both sides. So we're working with a vector space V with basis \{ v_1, v_2, v_3 \}. Suppose we have another basis for V which looks like \{ q_1, q_2, q_3 \}. Because \{ v_1, v_2, v_3 \} is a basis, we can find p_{ij} such that:
    q_1 = p_{11}v_1 + p_{21} v_2 + p_{31} v_3
    q_2 = p_{12}v_1 + p_{22} v_2 + p_{32} v_3
    q_3 = p_{13}v_1 + p_{23} v_2 + p_{33} v_3
    (Note the index order: the v-coordinates of q_j sit in the j^{th} column of P, matching the column convention for matrices above — this is what makes A' = P^{-1}AP come out right later.)

    We can represent this by a (3x3) matrix P = (p_{ij}). Because this matrix is essentially just 'relabelling' the q_i to become the v_i, rather than actually transforming any vectors, P represents the identity transformation from V with the q_i basis to V with the v_i basis.
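    To see P as a 'relabelling' in action, here's a small Python sketch with a hand-picked example of mine, using the convention that the j-th column of P holds the v-coordinates of q_j: if a vector has coordinates c in the q-basis, then Pc gives the same vector's coordinates in the v-basis.

```python
# v-coordinates of the q-basis vectors: q_1 = v_1 + v_2, q_2 = v_2.
# The j-th column of P holds the v-coordinates of q_j.
P = [[1, 0],
     [1, 1]]

def matvec(M, c):
    return tuple(sum(M[i][j]*c[j] for j in range(len(c))) for i in range(len(M)))

c_q = (2, 3)        # a vector written in the q-basis: 2*q_1 + 3*q_2
c_v = matvec(P, c_q)
print(c_v)          # the same vector, relabelled into v-coordinates

# Check directly: 2*q_1 + 3*q_2 = 2*(1,1) + 3*(0,1) = (2, 5) in v-coordinates.
assert c_v == (2, 5)
```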

    So suppose we want to represent \alpha with respect to the q_i basis. We know that \alpha is represented by A w.r.t. the v_i basis, and we know how to turn the q_i into v_i (i.e. using P and so P^{-1} turns the v_i into q_i). So let A' be the matrix for \alpha w.r.t. the q_i basis. Then it makes sense that:

    A' = (v \to q) ( v \text{-matrix for } \alpha ) (q \to v)

    That is (because we work from right to left), we take in a q_i vector, turn it into a v_i vector, apply the matrix for \alpha w.r.t. the v_i basis, and then turn the resulting vector back into q_i-basis form.

    But we already know what the three components are by the work we did above; so we get A'=P^{-1}AP.
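    To check A' = P^{-1}AP numerically, here's a Python sketch with a hand-picked A and P (my own example): applying A' to a vector's q-coordinates should give the same answer as converting to v-coordinates, applying A, and converting back.

```python
# Verify A' = P^{-1} A P on a concrete 2x2 example.
A = [[2, 1],
     [-1, 4]]         # alpha w.r.t. the v-basis
P = [[1, 1],
     [0, 1]]          # columns: v-coordinates of q_1 and q_2
P_inv = [[1, -1],
         [0, 1]]      # inverse of P (easy to check: P times P_inv is I)

def matmul(M, N):
    return [[sum(M[i][k]*N[k][j] for k in range(len(N))) for j in range(len(N[0]))]
            for i in range(len(M))]

def matvec(M, v):
    return tuple(sum(M[i][j]*v[j] for j in range(len(v))) for i in range(len(M)))

A_prime = matmul(P_inv, matmul(A, P))   # alpha w.r.t. the q-basis
print(A_prime)                          # [[3, 0], [-1, 3]]

# Consistency check on one vector:
# q-coords -> v-coords -> apply A -> back to q-coords
x_q = (1, 0)
x_v = matvec(P, x_q)
y_q = matvec(P_inv, matvec(A, x_v))
assert y_q == matvec(A_prime, x_q)      # both routes agree
```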

    That's really all there is to it! When you're going from a space with one basis into a space with a different basis it's more complicated, and you get something in the form A'=Q^{-1}AP... but if you can understand this it might make Cowley's notes seem clearer, and then you'll be able to move onto the next bit.

    Nice post Nuodai.

    Just to add what I think:
    I remember Cowley's notes being fairly algebra-heavy at some points. I think the best thing to do at this point is not to worry about the specifics of the algebra at the expense of forgetting the bigger picture - you need to have an "intuitive" feel for what's going on if you're going to understand and remember the subtler points of the algebra, in my opinion.
    • Thread Starter

    Thanks so much everyone