Matrix iterations

Given a matrix with two eigenvalues, iterating it on an initial point on one of its eigenlines just produces a series of points also on that same eigenline, right?

If you iterate it on a point not on one of its eigenlines, you get a series of points tending towards one of the eigenlines.

Right so far?

But in the second case, how do we know which eigenline the points will tend towards?
Reply 1
Urm... I'm not so sure. For example if you repeatedly apply the matrix $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ to the vector $(1,0)$ then you end up in an endless loop of $(1,0) \to (0,1) \to (-1,0) \to (0,-1) \to (1,0)$ and so on.
Reply 2
In general, the matrix will have a basis of eigenvectors, and one eigenvalue will have modulus strictly bigger than the others (this occurs with probability 1) (*). Call the corresponding eigenvector v.
If you then choose a random initial vector and represent it in the eigenvector basis, then with probability 1 the coefficient of v will be non-zero (**).
Then repeatedly multiplying by the matrix causes that coefficient to grow to dominate all the others, and so the vector tends towards the eigenline corresponding to the eigenvalue of biggest modulus.

If (*) or (**) turns out not to be true (because you are unlucky, or because you have an especially fiendish examiner (in which case I guess you're still unlucky)), then all bets are off. (For any specific case, it's unlikely to be that hard to work out what happens, but a general solution/description is going to be tricky.)
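
What's described here is essentially the power method. A minimal sketch in Python/numpy, with a made-up example matrix and starting vector (not from the posts above):

```python
import numpy as np

# Example 2x2 matrix with two real eigenvalues, 3 and 1;
# the dominant eigenvalue 3 has eigenvector (1, 1)/sqrt(2).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

x = np.array([1.0, 0.2])  # arbitrary starting vector, not on an eigenline

for _ in range(50):
    x = A @ x
    x = x / np.linalg.norm(x)  # renormalise so the iterates stay bounded

print(x)          # ~ (0.707, 0.707): the iterates settle onto the dominant eigenline
print(x @ A @ x)  # Rayleigh quotient ~ 3: recovers the dominant eigenvalue
```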

Original post by nuodai
Urm... I'm not so sure. For example if you repeatedly apply the matrix $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ to the vector $(1,0)$ then you end up in an endless loop of $(1,0) \to (0,1) \to (-1,0) \to (0,-1) \to (1,0)$ and so on.

But that matrix has no real eigenvalues. I am talking about a matrix with two real eigenvalues.


Original post by DFranklin
In general, the matrix will have a basis of eigenvectors, and one eigenvalue will have modulus strictly bigger than the others (this occurs with probability 1) (*). Call the corresponding eigenvector v.
If you then choose a random initial vector and represent it in the eigenvector basis, then with probability 1 the coefficient of v will be non-zero (**).
Then repeatedly multiplying by the matrix causes that coefficient to grow to dominate all the others, and so the vector tends towards the eigenline corresponding to the eigenvalue of biggest modulus.

If (*) or (**) turns out not to be true (because you are unlucky, or because you have an especially fiendish examiner (in which case I guess you're still unlucky)), then all bets are off. (For any specific case, it's unlikely to be that hard to work out what happens, but a general solution/description is going to be tricky.)



Right, I see. Thanks. So it basically will tend towards the eigenline corresponding to the eigenvalue with the greatest modulus.
(tried to rep you but it won't let me)
Reply 4
Original post by Plato's Trousers
But that matrix has no real eigenvalues. I am talking about a matrix with two real eigenvalues.




Right, I see. Thanks. So it basically will tend towards the eigenline corresponding to the eigenvalue with the greatest modulus.
(tried to rep you but it won't let me)


There are also similar iterative methods for finding the eigenvalue of smallest magnitude, or the eigenvalue nearest to a particular value.
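
One such method is shifted inverse iteration; a minimal sketch in Python/numpy (matrix, shift and starting vector are made-up examples):

```python
import numpy as np

# Shifted inverse iteration: converges to the eigenvector whose
# eigenvalue is nearest the chosen shift.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # example matrix with eigenvalues 3 and 1
shift = 0.8                 # the eigenvalue nearest 0.8 is 1
x = np.array([1.0, 0.3])    # arbitrary starting vector

for _ in range(50):
    # Solving (A - shift*I) y = x amplifies the eigenvector whose
    # eigenvalue is closest to the shift.
    x = np.linalg.solve(A - shift * np.eye(2), x)
    x = x / np.linalg.norm(x)

print(x)          # ~ (0.707, -0.707): eigenvector for eigenvalue 1
print(x @ A @ x)  # Rayleigh quotient ~ 1
```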
Original post by hmm_what?
There are also similar iterative methods for finding the eigenvalue of smallest magnitude, or the eigenvalue nearest to a particular value.


hmm...hadn't really thought of using iteration to actually find the eigenvalue... I usually just work them out.
Reply 6
Original post by Plato's Trousers
hmm...hadn't really thought of using iteration to actually find the eigenvalue... I usually just work them out.


What about the eigenvalues of a 20x20 matrix? :wink:

Iterative processes definitely have their place, although we have computers these days which take a lot of the pain out of it :biggrin:
Reply 7
Original post by Plato's Trousers
But that matrix has no real eigenvalues. I am talking about a matrix with two real eigenvalues.


Ah, sorry, I didn't know you were working over $\mathbb{R}$. Anyway it seems DFranklin has answered your question :smile:
Original post by hmm_what?
What about the eigenvalues of a 20x20 matrix? :wink:

Iterative processes definitely have their place, although we have computers these days which take a lot of the pain out of it :biggrin:


ok, fair point... I wouldn't fancy solving the characteristic equation of a 20x20 matrix!

:eek:
Original post by nuodai
Ah, sorry, I didn't know you were working over $\mathbb{R}$. Anyway it seems DFranklin has answered your question :smile:


yes, I should have been more precise. As I am still a child (mathematically) I only tend to paddle in the safe, shallow waters of $\mathbb{R}$ :wink:
Reply 10
Original post by Plato's Trousers
yes, I should have been more precise. As I am still a child (mathematically) I only tend to paddle in the safe, shallow waters of $\mathbb{R}$ :wink:

Better than drowning in the deep end :tongue:. I just had to say that.
Reply 11
Original post by Plato's Trousers
yes, I should have been more precise. As I am still a child (mathematically) I only tend to paddle in the safe, shallow waters of $\mathbb{R}$ :wink:


Funny you should say that -- many things involving eigen{anything} and matrices are made a lot simpler if you work over $\mathbb{C}$!
Reply 12
Original post by nuodai
Funny you should say that -- many things involving eigen{anything} and matrices are made a lot simpler if you work over $\mathbb{C}$!

I know I perhaps won't understand, but care to explain for someone curious?
Reply 13
Original post by anshul95
I know I perhaps won't understand, but care to explain for someone curious?


Well for example, there are lots of matrices that you can't diagonalise over $\mathbb{R}$ (e.g. almost all rotation matrices) which you can diagonalise over $\mathbb{C}$, and working over $\mathbb{C}$ we get nice things like Jordan normal form. Basically, because much of the theory of matrices relates to the theory of polynomials, things become easier to do if you work in an algebraically closed field (which $\mathbb{R}$ is not).
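
As a quick illustration (a made-up check in Python/numpy, using the rotation matrix from earlier in the thread):

```python
import numpy as np

# The 90-degree rotation matrix from earlier: no real eigenvalues,
# but over C it has eigenvalues i and -i and is diagonalisable.
R = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

eigvals, P = np.linalg.eig(R)  # numpy happily works over C
print(eigvals)                 # [0.+1.j  0.-1.j]

# Confirm the diagonalisation R = P D P^{-1}.
D = np.diag(eigvals)
print(np.allclose(P @ D @ np.linalg.inv(P), R))  # True
```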
Original post by nuodai
Funny you should say that -- many things involving eigen{anything} and matrices are made a lot simpler if you work over $\mathbb{C}$!


really? Who'd have thought it. I shall look forward to that, then.
Reply 15
Original post by nuodai
Well for example, there are lots of matrices that you can't diagonalise over $\mathbb{R}$ (e.g. almost all rotation matrices) which you can diagonalise over $\mathbb{C}$, and working over $\mathbb{C}$ we get nice things like Jordan normal form. Basically, because much of the theory of matrices relates to the theory of polynomials, things become easier to do if you work in an algebraically closed field (which $\mathbb{R}$ is not).

Interesting, because in my FP3 module we have to diagonalise symmetric matrices, and I always wondered if you could diagonalise other matrices and what the use of that was.
Original post by anshul95
Interesting, because in my FP3 module we have to diagonalise symmetric matrices, and I always wondered if you could diagonalise other matrices and what the use of that was.


as for the uses, one is to be able to evaluate high powers of matrices, which is very laborious otherwise, e.g.

$\textbf{A}^9=\textbf{P}\textbf{D}^9\textbf{P}^{-1}$

since finding a power of the diagonal matrix $\textbf{D}^9$ is trivial.

But I am sure there are lots of other uses.
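
A quick sketch of that trick in Python/numpy (the symmetric example matrix is made up for illustration):

```python
import numpy as np

# Example symmetric matrix, so it is certainly diagonalisable.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors of A
D9 = np.diag(eigvals ** 9)      # powering the diagonal matrix is trivial
A9 = P @ D9 @ np.linalg.inv(P)  # A^9 = P D^9 P^{-1}

print(np.allclose(A9, np.linalg.matrix_power(A, 9)))  # True
```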
