NotNotBatman
#1
If we have the planar system given by the ODEs

 \dot{u} = f(u,v)
 \dot{v} = g(u,v)

and we let  \mathbf{x} = \binom{u}{v}

then \dot{\mathbf{x}} = \mathbf{Mx}

and solutions are obtained from the eigenvalue problem

\mathbf{Mx}=\lambda \mathbf{x}

but what I don't understand is how the trajectory of the system is determined by the eigenvectors of M.
RDKGames
#2
(Original post by NotNotBatman)
~snip~
I'm assuming f, g are linear in u, v, going by what you've written?


We have that:

\begin{cases} \dot{u} = au+bv \\ \dot{v} = cu + dv \end{cases}

where the first equation is (*) and the second is (**).

Take the first eq, and differentiate with respect to... time t, I suppose? Whatever the dot differentiation represents in your context.

Then we get \ddot{u} = a \dot{u} + b\dot{v}

We sub in \dot{v} from (**) and obtain \ddot{u} = a \dot{u} + b \left( cu+dv \right) (***)

From (*), assuming b \neq 0, we also get that v = \dfrac{1}{b}\dot{u} - \dfrac{a}{b}u, and subbing it into (***) we get:

\ddot{u} - (a+d) \dot{u} + (ad-bc)u = 0

Clearly, this is a second-order ODE which, for distinct roots \lambda_1 \neq \lambda_2, has solutions

u = C_1e^{\lambda_1 t} + C_2 e^{\lambda_2 t}

with \lambda_1, \lambda_2 satisfying its characteristic equation:

\lambda^2 - (a+d)\lambda + (ad-bc) = 0 \iff (\lambda-a)(\lambda - d)-bc = 0

which is precisely the equation for the eigenvalues of the matrix \mathbf{M} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
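
For completeness, that's just \det(\mathbf{M} - \lambda \mathbf{I}) = 0 written out:

\det \begin{pmatrix} a-\lambda & b \\ c & d-\lambda \end{pmatrix} = (a-\lambda)(d-\lambda) - bc = (\lambda - a)(\lambda - d) - bc = 0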


So going from there, you'd find that the eigenvector of the matrix \mathbf{M} for \lambda_1 gives the direction of one straight-line trajectory of the system.
Indeed, we get the same when we repeat the process for v, and the other straight-line path corresponds to the eigenvector for \lambda_2.
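
As a quick numerical sanity check (a sketch of mine, not from the original post; it assumes numpy, and the entries a, b, c, d are arbitrary illustrative values):

import numpy as np

# Arbitrary illustrative coefficients for du/dt = a*u + b*v, dv/dt = c*u + d*v
a, b, c, d = 1.0, 2.0, 3.0, -1.0
M = np.array([[a, b], [c, d]])

# Eigenvalues of M, computed directly
eigvals = np.sort(np.linalg.eigvals(M))

# Roots of the characteristic equation of the second-order ODE:
# lambda^2 - (a+d)*lambda + (ad - bc) = 0
roots = np.sort(np.roots([1.0, -(a + d), a * d - b * c]))

print(eigvals, roots)  # the two sets of values agree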
DFranklin
#3
(Original post by RDKGames)
~snip~.
I'm a little rusty on this, but this seems to give a good method for *calculating* the solution, while largely obscuring the whole reason we care about eigenvectors. (But as I say, I'm rusty, so feel free to correct me).

(Original post by NotNotBatman)
..
If you know {\bf Mv} = \lambda {\bf v} (and {\bf Mx} = \dot{\bf x}) then for {\bf x} of the form a(t){\bf v} we have a'(t) = \lambda a(t), and so we find a solution of the form {\bf x} = e^{\lambda t} {\bf v}. The point here is that for vectors in this direction, the matrix effectively behaves as a constant multiplier, making life much, much easier.
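
Spelling that step out: substituting {\bf x} = a(t){\bf v} into \dot{\bf x} = {\bf Mx} gives

a'(t) {\bf v} = \mathbf{M}\left( a(t) {\bf v} \right) = \lambda a(t) {\bf v} \implies a'(t) = \lambda a(t) \implies a(t) = a(0) e^{\lambda t}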

If you have a spanning set of eigenvectors \{ {\bf v}_1, {\bf v}_2, \ldots, {\bf v}_n \} then we can represent a general {\bf x} in the form

{\bf x}(t) = \sum_i a_i(t) {\bf v}_i.

We can then apply the same argument to find:

{\bf x}(t) = \sum_i a_i(0) e^{\lambda_i t} {\bf v}_i

Critically, the decomposition of {\bf x} into eigenvectors at t=0 determines the future evolution of the system.

Edit: FWIW, when I actually need to *solve* a problem of this type, I'll generally do what RDK posted. It's less "elegant", but in my experience it's quicker than finding the eigenvectors and decomposing the initial x into a sum of eigenvectors.
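
A minimal numerical sketch of that decomposition (my own illustration, assuming numpy and scipy are available; the matrix and initial condition are arbitrary):

import numpy as np
from scipy.linalg import expm

M = np.array([[1.0, 2.0], [3.0, -1.0]])   # illustrative matrix
x0 = np.array([1.0, 0.5])                 # arbitrary initial condition

lam, V = np.linalg.eig(M)     # columns of V are the eigenvectors v_i
a0 = np.linalg.solve(V, x0)   # coefficients a_i(0) with x0 = sum_i a_i(0) v_i

t = 0.7
x_eig = V @ (a0 * np.exp(lam * t))   # sum_i a_i(0) e^{lambda_i t} v_i
x_exact = expm(M * t) @ x0           # exact solution x(t) = e^{Mt} x0

print(np.allclose(x_eig, x_exact))   # True: both give the same trajectory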
NotNotBatman
#4
(Original post by RDKGames)
~snip~
(Original post by DFranklin)
~snip~
Thank you both for the explanations. Can a similar explanation be used for the Jacobian matrix in the linearisation of nonlinear systems?
DFranklin
#5
(Original post by NotNotBatman)
Thank you both for the explanations. Can a similar explanation be used for the Jacobian matrix in the linearisation of nonlinear systems?
I'm not familiar with this, but I would expect so. Post an example if you want me to give a more 'reasoned' comment.
Gregorius
#6
(Original post by NotNotBatman)
Thank you both for the explanations. Can a similar explanation be used for the Jacobian matrix in the linearisation of nonlinear systems?
Yes. However, the question of whether the linearization of the dynamical system is topologically equivalent (i.e. behaves the same way) to the dynamical system itself is delicate. Near a hyperbolic fixed point (one where no eigenvalue of the Jacobian has zero real part) the Hartman-Grobman theorem guarantees the equivalence; in general, it is the subject of the huge field of "structural stability".
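
As a concrete sketch of the linearization step (my own example, not from this thread; it assumes numpy): take the pendulum system \dot{u} = v, \dot{v} = -\sin u, whose Jacobian is \begin{pmatrix} 0 & 1 \\ -\cos u & 0 \end{pmatrix}.

import numpy as np

# Jacobian of the nonlinear system du/dt = v, dv/dt = -sin(u)
def jacobian(u, v):
    return np.array([[0.0, 1.0], [-np.cos(u), 0.0]])

# At the fixed point (0, 0): eigenvalues +-i, a centre; this fixed point
# is not hyperbolic, so Hartman-Grobman gives no guarantee here
print(np.linalg.eigvals(jacobian(0.0, 0.0)))

# At the fixed point (pi, 0): eigenvalues +-1, a saddle; this fixed point
# is hyperbolic, and nearby trajectories follow the eigenvector directions
print(np.linalg.eigvals(jacobian(np.pi, 0.0)))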