
# Matrices

1. (Original post by grace_)
I'm glad someone can understand this!
What do the statements above (and the ∇ and δ symbols) mean?
Hehe, that spoiler was more for David to look at, to see what had slipped my mind. I know my explanation was bad; I don't really grasp this well enough myself to be confident with it.

∇ is the same as grad (it's an operator, not the derivative).
δ is just a symbol, but it's usually used to describe a small increase in something (in the same way dx or dy are), or occasionally an area.

edit:
Apologies as I can't get the c . T statement to show up in LaTeX.
\cdot should be it
2. ∇ is a symbol for the vector differential operator del,

basically

∇ = i ∂/∂x + j ∂/∂y + k ∂/∂z
3. (Original post by Zhen Lin)
Well, { i, j, k } form a basis for 3D space, but not for the reasons you describe. A basis is a linearly independent set (that is, there are no non-trivial solutions for scalars a_1, …, a_n to the equation a_1 v_1 + a_2 v_2 + … + a_n v_n = 0, where v_1, …, v_n are the elements in the set) of vectors that spans the space; that is, all vectors in the space are linear combinations of those basis vectors.
What's a trivial solution? One where all the relevant values are zero?

(Original post by generalebriety)
A more intuitive idea of a basis than Zhen's explanation above is a set of vectors (think about our 'space' as normal 3D space if it helps) that you can make any vector in your space out of (i.e. they're a spanning set) and where you have no superfluous vectors (i.e. they're not linearly dependent).
Oh, I understand. By adding together multiples of the vectors in the set you will be able to get to any point in the space.

(Original post by generalebriety)
A set of vectors is linearly dependent if there's one of them you can remove that wouldn't help you 'reach' any other vectors in the space
How does this tie in to a set of equations (vectors?) being linearly dependent if the determinant of the matrix - presumably the matrix consisting of these vectors - is zero?

(Original post by generalebriety)
(this obviously places an upper bound on the number of vectors you can have - if you take four vectors that span 3D space, one of them is always superfluous because you can always remove one and make any vector you like out of the other three). A set of vectors is a spanning set if you can make any vector in the space you like by scaling vectors in that set and adding them together (this obviously places a lower bound on the number of vectors you can have - if you take two linearly independent vectors in 3D, there'll always be some vectors you can't make, e.g. the vector defined by their cross product). A set of vectors is a basis for a space if they are linearly independent and span the space.
The cross product of two vectors gives you a vector which is perpendicular to the original two, is that correct? So you wouldn't be able to reach that vector by combining (adding multiples of) the original 2 vectors in 3D space.

So how do you know that a (number of) vectors span a space??

Is there a book somewhere, with numerical examples in, that I could look at? I haven't had anything to think about that's a concrete example that I can work through myself to see if I've got my head around this. Sorry. I'm probably being a bit slow, but I think I understand most of the idea but I don't quite know it as I haven't tried it out for myself.
4. (Original post by grace_)
What's a trivial solution? One where all the relevant values are zero?
Yes.

How does this tie in to a set of equations (vectors?) being linearly dependent if the determinant of the matrix - presumably the matrix consisting of these vectors - is zero?
You know that if the determinant is nonzero, then the matrix is invertible. Therefore there are unique solutions u to Mu = v, i.e. you can represent any vector v as a linear combination (with coefficients u) of the basis vectors (the columns of M).

So how do you know that a (number of) vectors span a space??
The minimum number of vectors you need to span a space is the dimension of the space. To prove that a set of vectors spans a space... well, if the number of vectors is equal to the dimension, then you can use the matrix determinant. If not, there's something called row reduction you can use to find out which ones are linearly independent and how many.
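The determinant test mentioned above can be tried out numerically. A minimal sketch in Python (the vectors and function names here are made up for illustration, not from the thread): put the three vectors in as columns of a 3x3 matrix and check whether the determinant is nonzero.

```python
# Hypothetical sketch: do three 3D vectors span R^3?
# They do iff the determinant of the matrix with those columns is nonzero.

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def spans_r3(u, v, w):
    """True iff the columns u, v, w span R^3 (equivalently, form a basis)."""
    m = [[u[0], v[0], w[0]],
         [u[1], v[1], w[1]],
         [u[2], v[2], w[2]]]
    return det3(m) != 0

print(spans_r3((1, 0, 0), (0, 1, 0), (0, 0, 1)))   # True: the standard basis
print(spans_r3((1, 2, 3), (2, 4, 6), (0, 0, 1)))   # False: second vector is twice the first
```

In the second call, the vectors are linearly dependent, so the determinant is zero and they only span a plane, not all of R^3.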
5. (Original post by insparato)
∇ is a symbol for the vector differential operator del,

basically

∇ = i ∂/∂x + j ∂/∂y + k ∂/∂z

then

∇f = (∂f/∂x) i + (∂f/∂y) j + (∂f/∂z) k
but what is this ∇f thing, as a physical idea? Is it the gradient of the function f(x, y, z) in some sense?
6. Yes, if the given function f(x, y, z) represents a scalar field. So if you plop numbers into x, y, z you will get a scalar value. Grad f is essentially the gradient.

I think it's easier understood if you deal with scalar and vector fields in terms of physical things.

A scalar field is where a scalar value is associated with every point in space.

So take a room with a radiator; different parts of the room are hotter than others. The room represents space, and different points in the room correspond to different temperatures (the temperature being the scalar value).

Take the room again, and take a point in the room. The gradient at this point will show you the direction in which the temperature rises quickest (so if you look at grad f, it is a vector, and it always points in the direction in which the scalar value rises quickest). The magnitude of the gradient will show you how fast the temperature rises.
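The radiator picture can be played with numerically. A hedged sketch: T below is a made-up temperature field (hottest at the origin, where we imagine the radiator is), and the gradient is approximated with central finite differences rather than computed symbolically.

```python
# Hypothetical temperature field for the "radiator in a room" picture.
def T(x, y, z):
    # Hottest at the origin (the radiator), cooling off with distance.
    return 100.0 / (1.0 + x * x + y * y + z * z)

def grad(f, p, h=1e-5):
    """Central-difference approximation to grad f = (df/dx, df/dy, df/dz) at point p."""
    x, y, z = p
    return ((f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
            (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
            (f(x, y, z + h) - f(x, y, z - h)) / (2 * h))

g = grad(T, (1.0, 0.0, 0.0))
print(g)  # roughly (-50, 0, 0): points from (1, 0, 0) back toward the hot origin
```

At (1, 0, 0) the gradient points in the -x direction, i.e. toward the radiator, which matches the intuition above: grad T points where the temperature rises quickest.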

I think Billy might have my nuts off - I've diverged pretty far from the discussion - but if you're interested you can PM me, or one of the others (probably better), to find out more.
7. (Original post by grace_)
I'm glad someone can understand this!
Though I think nota might be on his own on this one. I don't understand it... (Though I think as much as anything this is because the discussion is scattered over about 20 posts. I looked at it, thought, "what's h supposed to be again?", scrolled up to the top and couldn't see it in any post and lost the will to carry on).
8. (Original post by Zhen Lin)
The minimum number of vectors you need to span a space is the dimension of the space. To prove that a set of vectors spans a space... well, if the number of vectors is equal to the dimension, then you can use the matrix determinant. If not, there's something called row reduction you can use to find out which ones are linearly independent and how many.

So if you need three linearly independent vectors to reach any given point in the space (and exactly and only three vectors) then you have a three-dimensional space?

I think I remember doing something about row reduction when learning about finding the determinant of a matrix - you can add/subtract any multiple of a row from any other row without changing the value of the determinant? Is that a similar idea to being able to see if one of the vectors you've got is linearly dependent on the others, or not?
9. Correct - a smallest spanning set is a largest linearly independent set and is a basis, so it has as many vectors as there are dimensions.
10. (Original post by grace_)
What's a trivial solution? One where all the relevant values are zero?
Yes.
The cross product of two vectors gives you a vector which is perpendicular to the original two, is that correct? So you wouldn't be able to reach that vector by combining (adding multiples of) the original 2 vectors in 3D space.
In fact, no. Adding λa + μb must not give c (where c = a × b). Remember that a and b are in R^2, whilst c will be in R^3.

Is there a book somewhere, with numerical examples in, that I could look at? I haven't had anything to think about that's a concrete example that I can work through myself to see if I've got my head around this. Sorry. I'm probably being a bit slow, but I think I understand most of the idea but I don't quite know it as I haven't tried it out for myself.
I've got a couple of linear algebra books in PDF that I could send, but be prepared that they're dense and contain a lot of theorems and proofs (first-year undergrad stuff mainly, but bits should be understandable with good A-level knowledge), and maybe not so many exercises. PM me if you want any.
You're not being slow! You grasp things in this thread in 24 hours that take me at least a week to understand.
11. (Original post by grace_)
The cross product of two vectors gives you a vector which is perpendicular to the original two, is that correct? So you wouldn't be able to reach that vector by combining (adding multiples of) the original 2 vectors in 3D space.
(Original post by nota bene)
In fact, no. Adding λa + μb must not give c (where c = a × b). Remember that a and b are in R^2, whilst c will be in R^3.
Isn't that what I said? If we had two vectors in R^3 and found their cross product, then the vector that we get will be perpendicular to the first two (in R^3). So you would not be able to get the third vector by adding multiples of the original 2 vectors.
12. Sorry to bother you all, but we've all been using this word "span" and I have no real idea what it means?? How do you know if a set of vectors spans a space?
13. (Original post by DFranklin)
Hey, I didn't start it. Blame nota...

(Grace isn't exactly a normal GCSE student either...)
(Original post by nota bene)
That's right, I'll take the blame!

This time we littered a uni student's thread, with something relatively relevant to the topic (at least we're still discussing eigenvalues). And as said, Grace isn't the average student in year 11... Billy will you let us all off?
Grace is quite happy and is also grateful to you all for taking the time to explain things - thank you very much for all the help! I might not be completely or even partially clued up on it all yet, but I still appreciate the help and advice.
14. (Original post by DFranklin)
Though I think nota might be on his own on this one. I don't understand it... (Though I think as much as anything this is because the discussion is scattered over about 20 posts. I looked at it, thought, "what's h supposed to be again?", scrolled up to the top and couldn't see it in any post and lost the will to carry on).
h is supposed to be the function and g the constraint. I should probably go find the online lecture notes we're given on this; I was going on my own notes and taking things from my memory. I'd link you to the notes, but unfortunately they are in Swedish...
15. (Original post by grace_)
Isn't that what I said? If we had two vectors in R^3 and found their cross product, then the vector that we get will be perpendicular to the first two (in R^3). So you would not be able to get the third vector by adding multiples of the original 2 vectors.
It very much is exactly what you said, yes. My apologies - I somehow read 'would' instead of 'wouldn't'.

edit:
Sorry to bother you all, but we've all been using this word "span" and I have no real idea what it means?? How do you know if a set of vectors spans a space?
http://en.wikipedia.org/wiki/Linear_span#Definition (ask if there's something you don't understand )
16. I apologise for being pedantic, but:
1. A 2D vector is not a 3D vector. That is to say, (x, y) and (x, y, 0) are not the same kind of vector.
2. You can't take the cross product of 2D vectors. The cross product is only defined in R^3.
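The perpendicularity property being discussed is easy to verify numerically. A small sketch (the sample vectors are arbitrary): compute the cross product of two 3D vectors and check that its dot product with each of them is zero.

```python
# Cross product of two 3D vectors, plus a perpendicularity check.

def cross(a, b):
    """a x b, defined only for 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b = (1, 2, 0), (0, 1, 3)
c = cross(a, b)
print(c)                     # (6, -3, 1)
print(dot(a, c), dot(b, c))  # 0 0: c is perpendicular to both a and b
```

Since c is perpendicular to both a and b, no linear combination of a and b can reach it (other than the zero vector if c were zero), which is exactly why two vectors can never span R^3.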
17. Um.
1. So the span of a set of vectors is basically the set of points that can be reached using linear combinations of those vectors?
2. So a set of vectors spans a space if you can reach any point in the space using a linear combination of those vectors?
3. If you have the minimum number of vectors in that set, then they form a basis for the space?
4. The basis is orthogonal if all those vectors are perpendicular to one another.
5. The basis is orthonormal if all those vectors are perpendicular to one another and have length 1.
6. "(Three) vectors are linearly independent" means that you cannot make the third vector by adding together (multiples of) the first two.

Corrections?
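Points 4 and 5 in the list above can be expressed as one check: a set of vectors is orthonormal exactly when the dot product of the i-th and j-th vectors is 1 if i = j and 0 otherwise. A minimal sketch (function name and sample vectors are made up for illustration):

```python
# Orthonormality check: pairwise dot products must equal the
# Kronecker delta (1 on the diagonal, 0 off it).
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_orthonormal(vectors, tol=1e-12):
    for i, u in enumerate(vectors):
        for j, v in enumerate(vectors):
            expected = 1.0 if i == j else 0.0
            if abs(dot(u, v) - expected) > tol:
                return False
    return True

print(is_orthonormal([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))   # True: standard basis
s = 1 / math.sqrt(2)
print(is_orthonormal([(s, s, 0), (s, -s, 0), (0, 0, 1)]))  # True: rotated orthonormal basis
print(is_orthonormal([(1, 1, 0), (0, 1, 0), (0, 0, 1)]))   # False: not orthogonal, wrong lengths
```

Dropping the diagonal (length) requirement and only checking the off-diagonal zeros would give a test for an orthogonal (but not necessarily orthonormal) set.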
18. (Original post by Zhen Lin)
I apologise for being pedantic
Not pedantic at all - rather, correcting all my mistakes. Thanks, and I'll leave the vector stuff for you to explain, because, to say the least, vectors are really not my topic. Analysis/calculus and a bit of matrices, but no vectors, thanks...

edit: Your 6 points above all seem correct, Grace.
19. (Original post by grace_)
How does this tie in to a set of equations (vectors?) being linearly dependent if the determinant of the matrix - presumably the matrix consisting of these vectors - is zero?

I'll assume throughout that M is 3x3. It's quite easy to show that if a matrix M = (c1 c2 c3) is made of three column vectors, with components

c1 = (x1, x2, x3), c2 = (y1, y2, y3), c3 = (z1, z2, z3),

then adding a multiple of one column to another won't change the determinant. So, for example,

|M| = |c1 c2 c3| = |(c1 - 3c2) c2 c3|

i.e. the determinant is unchanged when the first column (x1, x2, x3) is replaced by (x1 - 3y1, x2 - 3y2, x3 - 3y3).

You can also swap columns without affecting the determinant much (i.e. the magnitude of the determinant stays the same, but each time you swap two columns the determinant gains a minus sign).

It's also obvious that if M has a zero column, the determinant will be zero. So if you can add together linear combinations of the three column vectors to get the zero vector (this is the definition of "linearly dependent"), you can do this column reduction to M and get a zero column, and the determinant is zero. A similar argument can be applied to the rows, but the following is much more interesting and important.
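The three determinant facts just stated (column operations preserve it, swaps flip its sign, a zero column kills it) can be checked directly. A sketch with a made-up 3x3 matrix:

```python
# Checking the column-operation facts on a sample 3x3 matrix.

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def col_op(m, target, source, k):
    """Copy of m with k * column[source] added to column[target]."""
    return [[row[j] + k * row[source] if j == target else row[j]
             for j in range(3)] for row in m]

def swap_cols(m, i, j):
    """Copy of m with columns i and j swapped."""
    return [[row[j] if k == i else row[i] if k == j else row[k]
             for k in range(3)] for row in m]

M = [[2, 1, 0],
     [1, 3, 4],
     [5, 0, 1]]

print(det3(M))                    # 25
print(det3(col_op(M, 0, 1, -3)))  # 25: c1 -> c1 - 3*c2 leaves det unchanged
print(det3(swap_cols(M, 0, 1)))   # -25: swapping two columns flips the sign
print(det3([[0, 1, 0],
            [0, 3, 4],
            [0, 0, 1]]))          # 0: a zero column forces determinant zero
```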

Let's assume we've done this 'column reduction' with a specific aim in mind: we're gonna add multiples of c_2 and c_3 to the first column to try and get a column of zeros. If we can't do that, we're gonna do the best we can and try and just leave one number in the top left corner.

(We can always get at least two zeros. Why? Well, consider (x_2, x_3), (y_2, y_3), (z_2, z_3) informally as 2D vectors. There's obviously one superfluous one, so we can always take two of them and make the third out of the other two, i.e. a linear combination of them will always give the zero vector. Just do this to the bottom six elements of the matrix, and ignore what happens to the top three while you're doing so.)

Then, after doing that, we're gonna add multiples of (the new) c_1 and c_3 to the second column to get as many zeros as we can. (We can always get at least one zero by the same argument as above - we have (y_3) and (z_3), two one-dimensional column vectors, and it's obvious that you can multiply z_3 by something to get y_3.) The type of matrix we get at the end we call 'upper triangular' - and you can see why, because the lower-left three elements of the matrix (which form a triangle) are zero. Now, count the number of non-zero columns - this is called the column rank of M. If there's at least one non-zero number in each column, its rank is 3; if there's one zero column, its rank is 2; if there are two zero columns, its rank is 1; if there are three zero columns, its rank is 0. (See examples of column reduction in the spoiler below.)

Spoiler:

In the first transformation, I've subtracted the second column from the first; then in the second transformation, I've subtracted the third column from the first. This leaves me my three zeroes in the bottom left. Upper triangular form. No zero columns: column rank = 3.

I'm sure you can work out the transformations yourself. One zero column, so column rank = 2.

I'll leave you to do the column reduction. (Note that the second column is a multiple of the first.) Column rank = 1.

Column rank = 0. For obvious reasons.

Now, if we do the same thing to the rows (just using row reduction), we also get a row rank. The amazing thing is that row rank is always equal to column rank, and this is called the rank of the matrix. The rank of the matrix is the number of linearly independent vectors (read as columns or rows, it doesn't matter) it contains; the rank is also therefore the dimension of the space that these vectors span. The determinant is therefore non-zero (and hence the matrix is invertible) if and only if the rank of the matrix is the dimension of the space it's working in.
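The row-reduction procedure for computing rank can be sketched in code. This is an illustrative implementation (not from the thread) using exact rational arithmetic to avoid floating-point issues:

```python
# Rank of a matrix by Gaussian elimination (row reduction).
from fractions import Fraction

def rank(matrix):
    """Row rank of a matrix (list of rows); equals the column rank."""
    m = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    r = 0  # index of the next pivot row
    for c in range(cols):
        # Find a row at or below r with a nonzero entry in column c.
        pivot = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pivot is None:
            continue  # this column contributes no pivot
        m[r], m[pivot] = m[pivot], m[r]
        # Eliminate column c from every row below the pivot.
        for i in range(r + 1, rows):
            factor = m[i][c] / m[r][c]
            m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r  # number of pivots = rank

print(rank([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))   # 3: invertible
print(rank([[1, 2, 3], [2, 4, 6], [0, 0, 1]]))   # 2: second row is twice the first
print(rank([[0, 0, 0], [0, 0, 0], [0, 0, 0]]))   # 0: the zero matrix
```

The number of pivots found is the number of linearly independent rows, which, as noted above, is the same as the number of linearly independent columns.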

We can also define the kernel of the matrix M as the set of all vectors v with Mv = 0, and this defines a legitimate subspace of R^3 or whatever space we're working in. (By "subspace", I just mean it defines a nice point or line or plane or other 'nice' space through the origin; rather than there just being a few vectors dotted around R^3 which satisfy Mv = 0.) As this is a subspace, we can talk about its dimension (i.e. the dimension of the space that the vectors in the kernel span), and this is called the nullity. In addition to our stuff above, we also have the rank-nullity theorem operating, which states that the rank of a matrix plus its nullity is the dimension of the space (e.g. 3 here).

Example:

Spoiler:

has rank 2. So by the rank-nullity theorem, 2 + nullity = 3, and so nullity = 1, so the kernel has dimension 1 (i.e. is a line). Which line? Well, I said before it had to go through the origin (because M0 = 0...). By looking at the column reduction we did, or solving simultaneous equations, or guessing, you can also work out that (-1, 1, 1) is in the kernel. But then if M(-1, 1, 1) is zero, so is M times any multiple of (-1, 1, 1) (note: including 0 times it, which is just 0). So the kernel is the line in the direction (-1, 1, 1) through 0.

Take the rank 1 matrix we had earlier in the previous spoiler; nullity = 2. Its kernel contains (0, 0, 1) and (1, -2, 0) (again by guessing, or more sophisticated stuff), because if you multiply the matrix by either of these you get zero. These define a plane.

Take the rank 0 matrix we had earlier (the zero matrix). Its kernel contains (1, 0, 0), (0, 1, 0), (0, 0, 1) and is therefore a basis for R^3.
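The kernel computations above can be checked mechanically. Since the original matrices didn't survive into this copy of the thread, the sketch below uses a made-up 3x3 matrix M chosen so that (-1, 1, 1) lies in its kernel, matching the worked example:

```python
# Verifying a kernel vector: v is in ker(M) iff M v = 0.

def matvec(m, v):
    """Multiply matrix m (list of rows) by vector v."""
    return tuple(sum(a * b for a, b in zip(row, v)) for row in m)

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Hypothetical matrix with (-1, 1, 1) in its kernel (row 3 = row 1 + row 2).
M = [[1, 0,  1],
     [0, 1, -1],
     [1, 1,  0]]

v = (-1, 1, 1)
print(matvec(M, v))          # (0, 0, 0): v is in the kernel
print(det3(M))               # 0: consistent with rank < 3, so nullity >= 1
print(matvec(M, (-2, 2, 2))) # (0, 0, 0): any multiple of v is in the kernel too
```

Here M has rank 2, so by rank-nullity the kernel is 1-dimensional: the line through the origin in the direction (-1, 1, 1), exactly as described above.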

In summary, if it's a 3x3 matrix, then the following statements are all equivalent:
- the rank is 3
- the determinant isn't 0
- the matrix is invertible
- the three column vectors that form the matrix are linearly independent and spanning / a basis for R^3
- the three row vectors that form the matrix are linearly independent and spanning / a basis for R^3
- the kernel is 0-dimensional, and therefore contains only the 0 vector.

[/geekery]

Hope I haven't made any mistakes there...

Edit 10000: Incidentally, if you look on MIT OpenCourseWare (google it; videoed lectures, basically), they have a good (fairly slow-paced) linear algebra course which explains this stuff over about 20 hours, broken up into lots of small lectures. Have a look if you're interested. It goes into quite a bit of detail - a bit more than my vectors and matrices course did last term. Ok, should go to bed now - last day of lectures tomorrow. Yawn...
20. (Original post by insparato)
I think Billy might have my nuts off - I've diverged pretty far from the discussion - but if you're interested you can PM me, or one of the others (probably better), to find out more.
The thread has gone hugely spammy. This annoys me a lot less than many threads, though, because the OP isn't (say) an A-level student who doesn't understand a word we're saying, and he has received help and answers to his questions. Being a maths student at uni, he might even find a lot of this interesting. I don't have anything against discussions drifting, I just like to make sure we're helpful to the OP and don't go off topic / answer at an inappropriate level.

That said, I've just realised grace_ is a year 11 student, and so a lot of my above post might be completely inappropriate. Ah well... I think I've tried to give a fairly intuitive approach rather than a stone cold hard theory approach, so hopefully she'll understand. Might make a few additions, though.

(After all, intuition is what maths is about. Proof is, in the end, not what gets results; it's a formality that mathematicians are particularly proud of, but no mathematician seriously thinks in epsilons and deltas naturally unless they have to...)

Updated: March 17, 2008