    (Original post by grace_)
    I'm glad someone can understand this! :p:
    What do the statements above (and the \Delta and \nabla symbols) mean?
    Hehe, that spoiler was more for David to look at, to see what had slipped my mind. I know my explanation was bad, because I don't really grasp this well enough myself to be confident with it.

    \nabla is the same as grad (it's an operator, not the derivative)
    \Delta is just a symbol, usually used to describe a small increase in something (in the same way \delta or \epsilon are), or occasionally an area.

    edit:
    Apologies as I can't get the c . T statement to show up in LaTeX.
    \cdot should be it

     \nabla is a symbol for the vector differential operator Del,

    basically \displaystyle \nabla = i\frac{\partial}{\partial x} + j\frac{\partial}{\partial y} + k \frac{\partial}{\partial z}

    \displaystyle \mathrm{grad}\, f = \nabla f = i\frac{\partial f}{\partial x} + j\frac{\partial f}{\partial y} + k \frac{\partial f}{\partial z}
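
    If you want to see that definition in action mechanically, here is a minimal Python (sympy) sketch; the sample function f is my own made-up example, not something from the thread:

        # Minimal sketch (assumes sympy is installed); f is an arbitrary scalar field.
        import sympy as sp

        x, y, z = sp.symbols('x y z')
        f = x**2 * y + sp.sin(z)

        # grad f = (df/dx, df/dy, df/dz), exactly as in the definition above
        grad_f = [sp.diff(f, v) for v in (x, y, z)]
        print(grad_f)   # [2*x*y, x**2, cos(z)]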

    (Original post by Zhen Lin)
    Well, { i, j, k } form a basis for 3D space, but not for the reasons you describe. A basis is a linearly independent set (that is, there are no non-trivial solutions for scalars a_i to the equation a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 + \cdots + a_n \mathbf{e}_n = \mathbf{0} where \mathbf{e}_i are the elements in the set) of vectors that spans the space, that is, all vectors in the space are linear combinations of those basis vectors.
    What's a trivial solution? One where all the relevant values are zero?


    (Original post by generalebriety)
    A more intuitive idea of a basis than Zhen's explanation above is a set of vectors (think about our 'space' as normal 3D space if it helps) that you can make any vector in your space out of (i.e. they're a spanning set) and where you have no superfluous vectors (i.e. they're not linearly dependent).
    Oh, I understand. By adding together multiples of the vectors in the set you will be able to get to any point in the space.

    (Original post by generalebriety)
    A set of vectors is linearly dependent if there's one of them you can remove without losing the ability to 'reach' any vectors in the space
    How does this tie in to a set of equations (vectors?) being linearly dependent if the determinant of the matrix - presumably the matrix consisting of these vectors - is zero?

    (Original post by generalebriety)
    (this obviously places an upper bound on the number of vectors you can have - if you take four vectors that span 3D space, one of them is always superfluous because you can always remove one and make any vector you like out of the other three). A set of vectors is a spanning set if you can make any vector in the space you like by scaling vectors in that set and adding them together (this obviously places a lower bound on the number of vectors you can have - if you take two linearly independent vectors in 3D, there'll always be some vectors you can't make, e.g. the vector defined by their cross product). A set of vectors is a basis for a space if they are linearly independent and span the space.
    The cross product of two vectors gives you a vector which is perpendicular to the original two, is that correct? So you wouldn't be able to reach that vector by combining (adding multiples of) the original 2 vectors in 3D space.

    So how do you know that a (set of) vectors spans a space?

    Is there a book somewhere, with numerical examples in, that I could look at? I haven't had anything to think about that's a concrete example that I can work through myself to see if I've got my head around this. Sorry. I'm probably being a bit slow, but I think I understand most of the idea but I don't quite know it as I haven't tried it out for myself. :puppyeyes:

    (Original post by grace_)
    What's a trivial solution? One where all the relevant values are zero?
    Yes.

    How does this tie in to a set of equations (vectors?) being linearly dependent if the determinant of the matrix - presumably the matrix consisting of these vectors - is zero?
    You know that if the determinant is nonzero, then the matrix is invertible. Therefore there is a unique solution to \mathbf{M}\mathbf{u} = \mathbf{v} for every \mathbf{v}, i.e. you can represent any vector v as a linear combination (with coefficients u) of the basis vectors (the columns of M).
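
    As a concrete check of that (with a made-up matrix and vector of my own, in Python/numpy):

        # Sketch: the columns of M are three vectors; det(M) != 0 means M is invertible,
        # so M u = v has exactly one solution u for any v.
        import numpy as np

        M = np.array([[1.0, 0.0, 1.0],
                      [0.0, 1.0, 1.0],
                      [0.0, 0.0, 1.0]])
        v = np.array([2.0, 3.0, 4.0])

        print(np.linalg.det(M))      # 1.0, nonzero, so M is invertible
        u = np.linalg.solve(M, v)    # the unique coefficients with M @ u = v
        print(u, M @ u)              # u = [-2. -1.  4.], and M @ u recovers v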

    So how do you know that a (set of) vectors spans a space?
    The minimum number of vectors you need to span a space is the dimension of the space. To prove that a set of vectors spans a space... well, if the number of vectors is equal to the dimension, then you can use the matrix determinant. If not, there's something called row reduction you can use to find out which ones are linearly independent and how many.
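
    Row reduction is easy to experiment with in sympy if you want to try it; the vectors below are my own example:

        # Sketch: put the vectors in as rows and row-reduce; the rank is how many of
        # them are linearly independent.
        from sympy import Matrix

        vectors = Matrix([[1, 0, 1],
                          [0, 1, 1],
                          [1, 1, 2]])   # third row = first + second, so they're dependent

        print(vectors.rank())           # 2: only two of the three are independent
        print(vectors.rref()[0])        # row-reduced form: the zero row is the dependent one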

    (Original post by insparato)
     \nabla is a symbol for the vector differential operator Del,

    basically \displaystyle \nabla = i\frac{\partial}{\partial x} + j\frac{\partial}{\partial y} + k \frac{\partial}{\partial z}

    \displaystyle \mathrm{grad}\, f = \nabla f = i\frac{\partial f}{\partial x} + j\frac{\partial f}{\partial y} + k \frac{\partial f}{\partial z}
    So if I had  f(x,y,z) = x^2 + 3y^4 - z^5
    then
    \displaystyle \nabla f = 2x i + 12y^3 j - 5z^4k

    but what is this thing, as a physical idea? Is it the gradient of the function f(x, y, z) in some sense?

    Yes, if the given function f(x,y,z) represents a scalar field. So if you plop numbers in for x, y, z you will get a scalar value. grad f is essentially the gradient.

    I think it's easier understood if you deal with scalar and vector fields in terms of physical things.

    A scalar field is where a scalar value is associated with every point in space.

    So take a room with a radiator: different parts of the room are hotter than others. The room represents the space, and different points in the room correspond to different temperatures (which are the scalar values).

    Now, the gradient of a scalar field (grad f):

    Take the room again, and take a point in the room. The gradient at this point shows you the direction in which the temperature rises quickest (so if you look at grad f, it is a vector and it always points in the direction in which the scalar values rise quickest). The magnitude of the gradient shows you how fast the temperature rises.

    http://en.wikipedia.org/wiki/Image:Gradient2.svg This should help.
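
    To make the radiator picture concrete, here is a small numerical sketch (the temperature field and the numbers are purely illustrative, my own invention):

        # Sketch: temperature falls off with distance from a 'radiator' at the origin;
        # the (finite-difference) gradient at a point points back towards the radiator.
        import numpy as np

        def T(p):                      # made-up scalar (temperature) field
            return 100.0 / (1.0 + np.dot(p, p))

        def grad_T(p, h=1e-6):         # central finite differences
            g = np.zeros(3)
            for i in range(3):
                dp = np.zeros(3); dp[i] = h
                g[i] = (T(p + dp) - T(p - dp)) / (2 * h)
            return g

        print(grad_T(np.array([2.0, 0.0, 0.0])))   # roughly [-16, 0, 0]: points towards the origin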

    I think Billy might have my nuts off, I've diverged pretty far from the discussion, but if you're interested you can PM me, or one of the others (probably better), to find out more.

    (Original post by grace_)
    I'm glad someone can understand this! :p:
    Though I think nota might be on his own on this one. I don't understand it... (Though I think as much as anything this is because the discussion is scattered over about 20 posts. I looked at it, thought, "what's h supposed to be again?", scrolled up to the top and couldn't see it in any post and lost the will to carry on).

    (Original post by Zhen Lin)
    The minimum number of vectors you need to span a space is the dimension of the space. To prove that a set of vectors spans a space... well, if the number of vectors is equal to the dimension, then you can use the matrix determinant. If not, there's something called row reduction you can use to find out which ones are linearly independent and how many.

    So if you need three linearly independent vectors to reach any given point in the space (and exactly and only three vectors) then you have a three-dimensional space?

    I think I remember doing something about row reduction when learning about finding the determinant of a matrix - you can add/subtract any multiple of a row from any other row without changing the value of the determinant? Is that a similar idea to being able to see if one of the vectors you've got is linearly dependent on the others, or not?

    Correct - a smallest spanning set is a largest linearly independent set and is a basis, so it has as many vectors as there are dimensions.
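
    A quick sympy illustration of that (the fourth vector below is my own arbitrary choice): three independent vectors already give rank 3 in 3D, and a fourth vector can never push the rank any higher, so it is always superfluous.

        # Sketch: rank counts the independent vectors; it can't exceed the dimension.
        from sympy import Matrix

        basis = Matrix([[1, 0, 0],
                        [0, 1, 0],
                        [0, 0, 1]])
        extra = Matrix([[2, 5, -1]])            # any fourth vector

        print(basis.rank())                     # 3
        print(basis.col_join(extra).rank())     # still 3: four vectors, rank only 3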

    (Original post by grace_)
    What's a trivial solution? One where all the relevant values are zero?
    Yes.
    The cross product of two vectors gives you a vector which is perpendicular to the original two, is that correct? So you wouldn't be able to reach that vector by combining (adding multiples of) the original 2 vectors in 3D space.
    In fact, no. Adding \lambda \mathbf{a}+\mu\mathbf{b} can never give \mathbf{c} (where \mathbf{a}\times\mathbf{b}=\mathbf{c}). Remember that a and b are in R^2, whilst c will be in R^3.

    Is there a book somewhere, with numerical examples in, that I could look at? I haven't had anything to think about that's a concrete example that I can work through myself to see if I've got my head around this. Sorry. I'm probably being a bit slow, but I think I understand most of the idea but I don't quite know it as I haven't tried it out for myself. :puppyeyes:
    I've got a couple of linear algebra books in PDF that I could send, but be prepared that they're dense and contain a lot of theorems and proofs (first-year undergrad stuff mainly, but bits should be understandable with good A-level knowledge), and maybe not so many exercises. PM me if you want any.
    You're not being slow! You grasp things in this thread in 24hrs that take me at least a week to understand :p:

    (Original post by grace_)
    The cross product of two vectors gives you a vector which is perpendicular to the original two, is that correct? So you wouldn't be able to reach that vector by combining (adding multiples of) the original 2 vectors in 3D space.
    (Original post by nota bene)
    In fact, no. Adding \lambda \mathbf{a}+\mu\mathbf{b} can never give \mathbf{c} (where \mathbf{a}\times\mathbf{b}=\mathbf{c}). Remember that a and b are in R^2, whilst c will be in R^3.
    Isn't that what I said? If we had two vectors in R^2 and found their cross product then the vector that we get will be perpendicular to the first two (in  R^3). So you would not be able to get the third vector by adding multiples of the original 2 vectors.

    Sorry to bother you all, but we've all been using this word "span" and I have no real idea what it means?? How do you know if a set of vectors spans a space?

    (Original post by DFranklin)
    Hey, I didn't start it. Blame nota...

    (Grace isn't exactly a normal GCSE student either...)
    (Original post by nota bene)
    That's right, I'll take the blame!

    This time we've littered a uni student's thread with something relatively relevant to the topic (at least we're still discussing eigenvalues). And as said, Grace isn't the average Year 11 student... Billy, will you let us all off? :puppyeyes:
    Grace is quite happy and is also grateful to you all for taking the time to explain things - thank you very much for all the help! I might not be completely or even partially clued up on it all yet, but I still appreciate the help and advice.

    (Original post by DFranklin)
    Though I think nota might be on his own on this one. I don't understand it... (Though I think as much as anything this is because the discussion is scattered over about 20 posts. I looked at it, thought, "what's h supposed to be again?", scrolled up to the top and couldn't see it in any post and lost the will to carry on).
    h is supposed to be the function and g the constraint. I should probably go find the online lecture notes we're given on this; I was going on my own notes and taking things from my memory. I'd link you to the notes, but unfortunately they are in Swedish...

    (Original post by grace_)
    Isn't that what I said? If we had two vectors in R^2 and found their cross product then the vector that we get will be perpendicular to the first two (in  R^3). So you would not be able to get the third vector by adding multiples of the original 2 vectors.
    It very much is exactly what you said, yes. My apologies, I somehow read "would" instead of "wouldn't".

    edit:
    Sorry to bother you all, but we've all been using this word "span" and I have no real idea what it means?? How do you know if a set of vectors spans a space?
    http://en.wikipedia.org/wiki/Linear_span#Definition (ask if there's something you don't understand)

    I apologise for being pedantic, but:
    1. A 2D vector is not a 3D vector. That is to say, \begin{pmatrix}x \\ y\end{pmatrix} and \begin{pmatrix}x \\ y \\ 0\end{pmatrix} are not the same kinds of vector.
    2. You can't take the cross product of 2D vectors. The cross product is only defined in \mathbb{R}^3.
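
    A quick numerical check of the perpendicularity point (the example vectors are my own):

        # Sketch: c = a x b has zero dot product with a and with b, and stacking
        # a, b, c gives rank 3, so c is not a combination of a and b.
        import numpy as np

        a = np.array([1.0, 2.0, 0.0])
        b = np.array([0.0, 1.0, 1.0])
        c = np.cross(a, b)

        print(c)                                            # [ 2. -1.  1.]
        print(np.dot(a, c), np.dot(b, c))                   # 0.0 0.0
        print(np.linalg.matrix_rank(np.vstack([a, b, c])))  # 3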

    Um.
    1. So the span of a set of vectors is basically the set of points that can be reached using linear combinations of those vectors?
    2. So a set of vectors spans a space if you can reach any point in the space using a linear combination of those vectors?
    3. If you have the minimum number of vectors in that set, then they form a basis for the space?
    4. The basis is orthogonal if all those vectors are perpendicular to one another.
    5. The basis is orthonormal if all those vectors are perpendicular to one another and have length 1.
    6. "(Three) vectors are linearly independent" means that you cannot make the third vector by adding together (multiples of) the first two.

    Corrections?

    (Original post by Zhen Lin)
    I apologise for being pedantic
    Not pedantic at all, rather correcting all my mistakes. Thanks, and I'll leave the vector stuff for you to explain, because, to say the least, vectors are really not my topic. Analysis/calculus and a bit of matrices, but no vectors, thanks...

    edit: Your 6 points above all seem correct, Grace.
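
    If it helps to see points 4-6 numerically, here is a small numpy sketch (the vectors are just i, j, k plus one deliberately dependent example of my own):

        # Sketch: a set is orthonormal when all pairwise dot products are 0 and each
        # vector has length 1 - equivalently B @ B.T is the identity matrix.
        import numpy as np

        B = np.array([[1.0, 0.0, 0.0],     # i
                      [0.0, 1.0, 0.0],     # j
                      [0.0, 0.0, 1.0]])    # k

        print(np.allclose(B @ B.T, np.eye(3)))   # True: orthonormal
        print(np.linalg.matrix_rank(B))          # 3: independent, so a basis

        # Point 6: (1, 1, 0) = i + j, so {i, j, (1, 1, 0)} is not independent
        dependent = np.vstack([B[0], B[1], [1.0, 1.0, 0.0]])
        print(np.linalg.matrix_rank(dependent))  # 2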

    (Original post by grace_)
    How does this tie in to a set of equations (vectors?) being linearly dependent if the determinant of the matrix - presumably the matrix consisting of these vectors - is zero?
    I don't think anyone's answered your question yet...

    I'll assume throughout that M is 3x3. It's quite easy to show that if a matrix M = (c1 c2 c3) is made of three column vectors, with components:

    M = \begin{pmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{pmatrix}

    then adding a multiple of one column to another won't change the determinant. So, for example,

    |M| = |c1 c2 c3| = |(c1-3c2) c2 c3|

    i.e.

    \begin{vmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{vmatrix} = \begin{vmatrix} x_1 - 3y_1 & y_1 & z_1 \\ x_2 - 3y_2 & y_2 & z_2 \\ x_3 - 3y_3 & y_3 & z_3 \end{vmatrix}.

    You can also swap columns without affecting the determinant much (i.e. the magnitude of the determinant stays the same, but each time you swap two columns the determinant gains a minus sign).

    It's also obvious that if M has a zero column, the determinant will be zero. So if some non-trivial linear combination of the three column vectors gives the zero vector (this is the definition of "linearly dependent"), you can do this column reduction to M and get a zero column, and the determinant is zero. A similar argument can be applied to the rows, but the following is much more interesting and important.
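
    That determinant behaviour is easy to check numerically; here's a quick sympy sketch with a made-up 3x3 matrix:

        # Sketch: adding a multiple of one column to another leaves det unchanged;
        # swapping two columns flips its sign.
        from sympy import Matrix

        M = Matrix([[2, 1, 0],
                    [1, 1, 0],
                    [1, 0, 1]])
        print(M.det())                          # 1

        M2 = M.copy()
        M2[:, 0] = M2[:, 0] - 3 * M2[:, 1]      # c1 -> c1 - 3*c2
        print(M2.det())                         # still 1

        M3 = M.extract([0, 1, 2], [1, 0, 2])    # swap the first two columns
        print(M3.det())                         # -1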

    Let's assume we've done this 'column reduction' with a specific aim in mind: we're gonna add multiples of c_2 and c_3 to the first column to try and get a column of zeros. If we can't do that, we're gonna do the best we can and try and just leave one number in the top left corner.

    (We can always get at least two zeros. Why? Well, consider (x_2, x_3), (y_2, y_3), (z_2, z_3) informally as 2D vectors. There's obviously one superfluous one, so we can always take two of them and make the third out of them, i.e. a linear combination of them will always give the zero vector. Just do this to the bottom six elements of the matrix, and ignore what happens to the top three while you're doing so.)

    Then, after doing that, we're gonna add multiples of (the new) c_1 and c_3 to the second column to get as many zeros as we can. (We can always get at least one zero by the same argument as above - we have (y_3) and (z_3), two one-dimensional column vectors, and it's obvious that you can multiply z_3 by something to get y_3.) The type of matrix we get at the end we call 'upper triangular' - and you can see why, because the lower-left three elements of the matrix (which form a triangle :p:) are zero. Now, count the number of non-zero columns - this is called the column rank of M. If there's at least one non-zero number in each column, its rank is 3; if there's one zero column, its rank is 2; if there are two zero columns, its rank is 1; if there are three zero columns, its rank is 0. (See examples of column reduction in the spoiler below.)

    Spoiler:
    Show
    \begin{pmatrix} 2 & 1 & 0 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
    In the first transformation, I've subtracted the second column from the first; then in the second transformation, I've subtracted the third column from the first. This leaves me my three zeroes in the bottom left. Upper triangular form. No zero columns: column rank = 3.

    \begin{pmatrix} 2 & 1 & 1 \\ 0 & 1 & -1 \\ 1 & 0 & 1 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix} \to \begin{pmatrix} 0 & 1 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix}
    I'm sure you can work out the transformations yourself. One zero column, so column rank = 2.

    \begin{pmatrix} 2 & 1 & 0 \\ 4 & 2 & 0 \\ 8 & 4 & 0 \end{pmatrix}
    I'll leave you to do the column reduction. (Note that the second column is a multiple of the first.) Column rank = 1.

    \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
    Column rank = 0. For obvious reasons.


    Now, if we do the same thing to the rows (just using row reduction), we also get a row rank. The amazing thing is that row rank is always equal to column rank, and this is called the rank of the matrix. The rank of the matrix is the number of linearly independent vectors (read as columns or rows, it doesn't matter) it contains; the rank is also therefore the dimension of the space that these vectors span. The determinant is therefore non-zero (and hence the matrix is invertible) if and only if the rank of the matrix is the dimension of the space it's working in.
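
    In sympy you can check that on the example matrices above; the rank of a matrix and the rank of its transpose come out the same, which is the "row rank = column rank" fact:

        # Sketch: .rank() does the reduction for you; the transpose has the same rank.
        from sympy import Matrix

        A = Matrix([[2, 1, 0], [1, 1, 0], [1, 0, 1]])    # the rank-3 example above
        B = Matrix([[2, 1, 1], [0, 1, -1], [1, 0, 1]])   # the rank-2 example above

        for M in (A, B):
            print(M.rank(), M.T.rank(), M.det())
        # prints 3 3 1, then 2 2 0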

    We can also define the kernel of the matrix M as the set of all vectors v with Mv = 0, and this defines a legitimate subspace of R^3 or whatever space we're working in. (By "subspace", I just mean it defines a nice point or line or plane or other 'nice' space through the origin; rather than there just being a few vectors dotted around R^3 which satisfy Mv = 0.) As this is a subspace, we can talk about its dimension (i.e. the dimension of the space that the vectors in the kernel span), and this is called the nullity. In addition to our stuff above, we also have the rank-nullity theorem operating, which states that the rank of a matrix plus its nullity is the dimension of the space (e.g. 3 here).

    Example:

    Spoiler:
    Show
    M = \begin{pmatrix} 2 & 1 & 1 \\ 0 & 1 & -1 \\ 1 & 0 & 1 \end{pmatrix}.

    \begin{pmatrix} 2 & 1 & 1 \\ 0 & 1 & -1 \\ 1 & 0 & 1 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix} \to \begin{pmatrix} 0 & 1 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix} has rank 2. So by the rank-nullity theorem, 2 + nullity = 3, and so nullity = 1, so the kernel has dimension 1 (i.e. is a line). Which line? Well, I said before it had to go through the origin (because M0 = 0...). By looking at the column reduction we did, or solving simultaneous equations, or guessing, you can also work out that (-1, 1, 1) is in the kernel. But then if M(-1, 1, 1) is zero, so is M times any multiple of (-1, 1, 1) (note: including 0 times it, which is just 0). So the kernel is the line in the direction (-1, 1, 1) through 0.

    Take the rank 1 matrix we had earlier in the previous spoiler; nullity = 2. Its kernel contains (0, 0, 1) and (1, -2, 0) (again by guessing, or more sophisticated stuff), because if you multiply the matrix by either of these you get zero. These define a plane.

    Take the rank 0 matrix we had earlier (the zero matrix). Its kernel contains (1, 0, 0), (0, 1, 0), (0, 0, 1), which form a basis for R^3, so the kernel is the whole of R^3.
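
    sympy will also find the kernel directly, which is a handy way to check the rank-nullity bookkeeping on these examples:

        # Sketch: .nullspace() returns a basis for the kernel; its length is the
        # nullity, and rank + nullity should come to 3 for each matrix.
        from sympy import Matrix

        M1 = Matrix([[2, 1, 1], [0, 1, -1], [1, 0, 1]])   # rank 2
        M2 = Matrix([[2, 1, 0], [4, 2, 0], [8, 4, 0]])    # rank 1
        M3 = Matrix.zeros(3)                              # rank 0

        for M in (M1, M2, M3):
            kernel = M.nullspace()
            print(M.rank(), len(kernel), M.rank() + len(kernel))
        # prints 2 1 3, then 1 2 3, then 0 3 3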


    In summary, if it's a 3x3 matrix, then the following statements are all equivalent:
    - the rank is 3
    - the determinant isn't 0
    - the matrix is invertible
    - the three column vectors that form the matrix are linearly independent and spanning / a basis for R^3
    - the three row vectors that form the matrix are linearly independent and spanning / a basis for R^3
    - the kernel is 0-dimensional, and therefore contains only the 0 vector.

    [/geekery]

    Hope I haven't made any mistakes there...

    Edit 10000: Incidentally, if you look on MIT OpenCourseWare (google it; videoed lectures, basically), they have a good (fairly slow-paced) linear algebra course which explains this stuff over about 20 hours, broken up into lots of small lectures. Have a look if you're interested. It goes into quite a bit of detail - a bit more than my vectors and matrices course did last term. Ok, should go to bed now - last day of lectures tomorrow. Yawn...

    (Original post by insparato)
    I think Billy might have my nuts off, i've diverged pretty far from discussion but if you're interested you can PM me, or one of the others(probably better) to find out more.
    The thread has gone hugely spammy. This annoys me a lot less than many threads, though, because the OP isn't (say) an A-level student who doesn't understand a word we're saying, and he has received help and answers to his questions. Being a maths student at uni, he might even find a lot of this interesting. I don't have anything against discussions drifting, I just like to make sure we're helpful to the OP and don't go off topic / answer at an inappropriate level.

    That said, I've just realised grace_ is a year 11 student, and so a lot of my above post might be completely inappropriate. Ah well... I think I've tried to give a fairly intuitive approach rather than a stone cold hard theory approach, so hopefully she'll understand. Might make a few additions, though.

    (After all, intuition is what maths is about. Proof is, in the end, not what gets results; it's a formality that mathematicians are particularly proud of, but no mathematician seriously thinks in epsilons and deltas naturally unless they have to...)
 
 
 