    (Original post by Mrm.)
    Bostock and Chandler is good, possibly the best imho.
    With regards to 2nd order DEs, post what you would like to know and I will see what I can come up with.
    I've decided to start a new thread so as not to hijack the thread from which I have quoted you.

    This one will focus specifically on matrices, as they are what is bothering me the most. I have been studying FP3 matrix algebra recently, and what has been bothering me is that pure maths has suddenly been reduced to a set of algorithms and formulas (like statistics).

    I could understand the intuition behind trigonometry and calculus but I cannot for this topic.

    For example, how do we know that the inverse of a 2x2 matrix (M) is always

    1/det(M) *
    ( d -b )
    ( -c a )

    What does the determinant of a matrix actually mean? How did we decide to calculate the determinant of a 2x2 matrix using ad - bc?

    Why do we multiply matrices in the way we do?

    And with 3x3 matrices: we have the rule of alternating signs when finding the matrix of cofactors; we need to use the transpose of the matrix of cofactors to find the inverse of the matrix; we only use one row or column to find the determinant; then there is the significance of eigenvalues and eigenvectors, and so on... It's all just learning algorithms /rant.

    I would appreciate any response to the first 3 questions especially (determinant, inverse and multiplication), from anybody, thanks.
    (Original post by _Ravi_)
    Why do we do the determinant the way we do?

    The answer is that this is simply how it is defined. When making up the idea of matrices you are making up a whole new branch of maths. There is a useful quantity you get when you work out ad - bc, and so someone decided to call it the determinant. It comes up everywhere, but perhaps the most obvious example is that it is the scale factor of the change of area when you transform a shape by the matrix.
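    To see that area scale factor concretely, here's a quick Python sketch (the matrix and shape are made-up examples): transform the unit square by a 2x2 matrix and compare the image's area with |ad - bc|.

```python
def transform(M, p):
    """Apply the 2x2 matrix M (rows (a, b) and (c, d)) to the point p = (x, y)."""
    (a, b), (c, d) = M
    x, y = p
    return (a * x + b * y, c * x + d * y)

def polygon_area(pts):
    """Shoelace formula for the area of a polygon whose vertices are given in order."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

M = ((2, 1), (0, 3))                             # det = 2*3 - 1*0 = 6
unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # area 1
image = [transform(M, p) for p in unit_square]   # a parallelogram

print(polygon_area(image))  # 6.0 — exactly |ad - bc|
```

    The same idea works in 3D, where the determinant is the volume scale factor.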

    Why do we multiply the way we do?

    Well, once again, this is the most useful thing to us. If you multiply in the way you do, it just so happens that the transforms represented are done one after the other. You could define it differently, but it wouldn't help us tackle any problems, so what would the point of that be?
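    Here's a minimal sketch of that "one after the other" point (the two matrices are just example transforms I've picked): applying the product AB to a point gives the same answer as applying B first and then A.

```python
def matmul(A, B):
    """Standard 2x2 matrix product AB: (AB)[i][j] = sum over k of A[i][k] * B[k][j]."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def apply(M, p):
    """Apply a 2x2 matrix M to the point p = (x, y)."""
    x, y = p
    return (M[0][0] * x + M[0][1] * y, M[1][0] * x + M[1][1] * y)

A = ((0, -1), (1, 0))   # rotation by 90 degrees anticlockwise
B = ((2, 0), (0, 2))    # enlargement, scale factor 2

p = (3, 4)
print(apply(matmul(A, B), p))   # (-8, 6): apply the single matrix AB
print(apply(A, apply(B, p)))    # (-8, 6): apply B, then A — the same
```

    Any other multiplication rule would break this correspondence between the product of matrices and the composition of the transforms.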

    Why does the inverse equal what it does?

    Well, if you multiply that and the original matrix you'll see that it equals the identity matrix, which is what we need the inverse for.
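    A quick check in Python with a made-up example matrix: build the inverse from the (1/det) * adjugate formula and multiply it by the original, and the identity matrix comes out.

```python
def inverse2x2(M):
    """Inverse of a 2x2 matrix via the (1/det) * adjugate formula from the question."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("a matrix with zero determinant has no inverse")
    return ((d / det, -b / det), (-c / det, a / det))

def matmul(A, B):
    """Standard 2x2 matrix product AB."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

M = ((4, 7), (2, 4))              # det = 16 - 14 = 2
print(matmul(M, inverse2x2(M)))   # ((1.0, 0.0), (0.0, 1.0)) — the identity
```

    Swapping a and d while negating b and c is exactly what is needed to make the cross terms cancel and the diagonal terms come out as ad - bc, which the 1/det factor then scales to 1.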

    hope this helps
    (Original post by The Muon)
    Why do we do the determinant the way we do?

    The answer is that this is simply how it is defined. When making up the idea of matrices you are making up a whole new branch of maths. There is a useful quantity you get when you work out ad - bc, and so someone decided to call it the determinant. It comes up everywhere, but perhaps the most obvious example is that it is the scale factor of the change of area when you transform a shape by the matrix.
    I'd argue that what's more important is that it's the only volume function, in a suitable sense.

    Why do we multiply the way we do?

    Well, once again, this is the most useful thing to us. If you multiply in the way you do, it just so happens that the transforms represented are done one after the other. You could define it differently, but it wouldn't help us tackle any problems, so what would the point of that be?
    Again, I'd argue that matrices are linear maps after we pick a basis. Once we accept composition of maps as multiplication we have to define multiplication in that way.

    Why does the inverse equal what it does?

    Well if you multiply that and the normal matrix you'll see that it equals the identity matrix which is what we need the inverse for.
    That's not really a "why". To be honest, I've always found linear algebra a pretty algorithmic topic anyway.
    (Original post by SimonM)
    I was trying to simplify it to a level which I would have understood at A2. I had no idea of linear maps or of using composition of maps as multiplication. But you are right in what you say.
    (Original post by _Ravi_)
    This one will focus specifically on matrices, as they are what is bothering me the most. I have been studying FP3 matrix algebra recently, and what has been bothering me is that pure maths has suddenly been reduced to a set of algorithms and formulas (like statistics).
    It's the same with many areas of mathematics. First you learn how to calculate with the objects you are working with; only later do you move on to working with the objects abstractly. In primary school, you did calculations like 5+7 and 4*3. Only later did you begin working with a variable like n or x. The same is broadly true with other objects, such as matrices.
    Wiki Support Team
    For a full answer, see any undergraduate textbook on linear algebra. But here's a rough outline.

    Suppose we have a vector space R^n (that is, the set of n-dimensional vectors with real entries, along with addition and multiplication by scalars in the usual way). This is pretty boring when it's just sitting around doing nothing, but we can - in the usual way - consider functions from R^n to itself, in the same way that we like to consider functions from R to itself (e.g. y = x^2). I'll concentrate in this post on linear functions, i.e. functions that behave like "y = mx + c" does - so, in particular, let me be specific and say that I'm going to consider functions f such that f(x + y) = f(x) + f(y), and f(ax) = a*f(x). (Check for yourself that f(0) = 0, i.e. there's no constant term.) Now, obviously we have a nice standard way of writing vectors, i.e.

    \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix},

    and we want a nice way of looking at linear functions. It's not too hard to see that, for a function f (which will take in vectors in R^n and give out vectors in R^n) to be linear, we want

    f(\mathbf{x}) = \begin{pmatrix} f_1(\mathbf{x}) \\ f_2(\mathbf{x}) \\ \vdots \\ f_n(\mathbf{x})\end{pmatrix},

    where each f_i is a linear function taking in the whole vector x and spitting out a real number.

    [For example, we might be working in R^3, and f might be the function sending each vector (x_1, x_2, x_3) to (2x_1, 3x_3, x_2 - x_1). Check that this is of the form above, and check that this satisfies the conditions I gave earlier and said defined a 'linear' function.]

    So each f_i satisfies something looking a bit like f_i(x_1, x_2, \dots, x_n) = c_1 x_1 + c_2 x_2 + \dots + c_n x_n (remember, no constant term). In fact, let me write this as f_i(x_1, x_2, \dots, x_n) = a_{i,1} x_1 + a_{i,2} x_2 + \dots + a_{i,n} x_n, with the 'i' subscript reminding me which 'f' I'm talking about.

    So our linear map looks like:

    f\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} a_{1,1} x_1 + a_{1,2} x_2 + \dots + a_{1,n} x_n \\ a_{2,1} x_1 + a_{2,2} x_2 + \dots + a_{2,n} x_n \\ \vdots \\ a_{n,1} x_1 + a_{n,2} x_2 + \dots + a_{n,n} x_n \end{pmatrix}.

    That is, we can write:

    \displaystyle (f(\mathbf{x}))_i = a_{i,1} x_1 + a_{i,2} x_2 + \dots + a_{i,n} x_n = \sum_{j=1}^n a_{i,j} x_j .

    Now let me write the a_{i,j} in a table, and call it A:

    A = \begin{pmatrix} a_{1,1} & a_{1,2} & \dots & a_{1,n} \\ a_{2,1} & a_{2,2} & \dots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \dots & a_{n,n} \end{pmatrix}.

    Can you see that f(x) is just Ax (using the rather weird definition of matrix multiplication to work out "Ax")? If you're interested further, let g be a linear function with matrix B = (b_{j,k}), and show like I did above by considering f(g(x)) that the matrix AB has (i,k) entry \sum_j a_{i,j} b_{j,k} - the usual definition of matrix multiplication.

    Can you see now why it's a kinda natural thing to do if you approach it from this angle?
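    As a small sketch of that formula (using the example map from earlier in the post), computing (Ax)_i = sum over j of a_{i,j} x_j gives the same result as applying f directly:

```python
def matvec(A, x):
    """(Ax)_i = sum over j of A[i][j] * x[j] — the formula derived above."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

# Matrix of the example map f(x1, x2, x3) = (2x1, 3x3, x2 - x1):
A = [[2, 0, 0],
     [0, 0, 3],
     [-1, 1, 0]]

def f(x):
    """Apply the example map directly, without the matrix."""
    return [2 * x[0], 3 * x[2], x[1] - x[0]]

x = [1, 4, 5]
print(matvec(A, x))   # [2, 15, 3]
print(f(x))           # [2, 15, 3] — the same
```

    Each row of A simply records the coefficients of one of the f_i.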

    I'm not going to go any further because this is already a massive post and you're probably not going to read it all, but if you are interested further, consider getting a book such as Beardon's "Algebra and Geometry".
 
 
 
Updated: January 22, 2010