The Student Room Group

Eigenvalues, eigenfunctions, eigenvectors etc.

How do I find the eigenvalues of an operator (in general)? And what does 'eigen...' even mean?

This is a pretty general question I guess but the context in which I'm asking it is within the mathematical framework of quantum wave mechanics. Basically the only time I've ever met eigenvalues, eigenvectors etc. is in the context of matrices but now the same words are being thrown around in my QM module with reference to "operators".

For example, what does it mean to talk about the eigenvalues or eigenfunctions or eigen-whatevers of the position operator?

Thanks - I'll be grateful even for a link to a website that explains the topic :smile:
Reply 1
Linear operators and matrices are very much related. Given two finite-dimensional vector spaces $U, V$ over a field $\mathbb{K}$, and bases $E = \{e_i \mid i \in \{1, \ldots, n\}\}$ of $U$ and $F = \{f_i \mid i \in \{1, \ldots, n\}\}$ of $V$, for any linear $T : U \to V$, let $c_i$ be $T(e_i)$ expressed as a vector in terms of $F$ for each $i$. Then the matrix $A$ with the $c_i$ as its columns is the matrix of $T$, and the two behave in a very similar way: if $u \in U$ and $u^*$ is $u$ expressed as a vector in terms of the $e_i$, then $T(u)$, expressed in terms of the $f_i$, is exactly $Au^*$. (Similarly, you can start with the matrix $A$ and, given the bases, recover $T$ by simply taking the columns of $A$ as the images of the $e_i$.)
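As an aside, the column-by-column construction above is easy to check numerically. Here's a small numpy sketch (my own example, not from the thread): the linear map is a 90° rotation of the plane, and its matrix is assembled from the images of the standard basis vectors.

```python
import numpy as np

# Illustrative example: T rotates the plane by 90 degrees anticlockwise,
# and E = F = the standard basis of R^2.
def T(v):
    x, y = v
    return np.array([-y, x])

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# The columns of A are T(e1), T(e2), expressed in the standard basis
A = np.column_stack([T(e1), T(e2)])

# Check the claimed correspondence: A @ u equals T(u) for any vector u
u = np.array([3.0, 5.0])
assert np.allclose(A @ u, T(u))
```

The assertion at the end is exactly the statement "T(u), expressed in terms of the basis, is Au*".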

Once you've got this, the eigenvalues and eigenvectors of $T$ are exactly the eigenvalues and eigenvectors of $A$. Eigenfunctions are a special case of eigenvectors, where $U$ is a function space.

You can also define eigenvalues and eigenvectors of a linear operator directly, in the same way as you do with matrices:

With $T, U$ as above and $V = U$, the eigenvectors of $T$ are the non-zero vectors $u \in U$ such that $T(u) = \lambda u$ for some $\lambda \in \mathbb{K}$, and the eigenvalues of $T$ are the $\lambda \in \mathbb{K}$ for which there is some non-zero $u \in U$ with $T(u) = \lambda u$.
(edited 9 years ago)
Reply 2
So, basically, the eigenvectors of a linear map are the vectors that get mapped to scalar multiples of themselves, and the eigenvalues are the corresponding scalar multiples. Every linear map has a corresponding matrix $M$, where $T(\mathbf{v}) = M\mathbf{v}$. So, for an eigenvector you have that:

$T(\mathbf{v}) = \lambda \mathbf{v} \;\therefore\; M\mathbf{v} = \lambda \mathbf{v}$

You then take $\mathbf{v} = (x, y)$, multiply out the matrix, and try to find solutions to the simultaneous equations.
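For concreteness, here's a quick numpy check of that recipe on an example matrix of my own choosing (not one from the thread). The characteristic equation det(M − λI) = 0 gives the eigenvalues, and numpy confirms Mv = λv for each eigenvector:

```python
import numpy as np

# Example 2x2 matrix (purely illustrative)
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# By hand: det(M - l*I) = (2 - l)^2 - 1 = 0, so l = 1 or l = 3.
eigenvalues, eigenvectors = np.linalg.eig(M)

# Each column of `eigenvectors` satisfies M v = lambda v
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(M @ v, lam * v)

assert np.allclose(np.sort(eigenvalues), [1.0, 3.0])
```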
Original post by Erebusaur
So, basically, the eigenvectors of a linear map are the vectors that get mapped to scalar multiples of themselves, and the eigenvalues are the corresponding scalar multiples. Every linear map has a corresponding matrix $M$, where $T(\mathbf{v}) = M\mathbf{v}$. So, for an eigenvector you have that:

$T(\mathbf{v}) = \lambda \mathbf{v} \;\therefore\; M\mathbf{v} = \lambda \mathbf{v}$

You then take $\mathbf{v} = (x, y)$, multiply out the matrix, and try to find solutions to the simultaneous equations.


How does one find said matrix? Should it be trivial..?
Original post by BlueSam3
Linear operators and matrices are very much related. Given two finite-dimensional vector spaces $U, V$ over a field $\mathbb{K}$, and bases $E = \{e_i \mid i \in \{1, \ldots, n\}\}$ of $U$ and $F = \{f_i \mid i \in \{1, \ldots, n\}\}$ of $V$, for any linear $T[s]u[/s] \to V$, let $c_i$ be $T(e_i)$ expressed as a vector in terms of $F$ for each $i$. Then the matrix $A$ with the $c_i$ as its columns is the matrix of $T$, and the two behave in a very similar way: if $u \in U$ and $u^*$ is $u$ expressed as a vector in terms of the $e_i$, then $T(u)$, expressed in terms of the $f_i$, is exactly $Au^*$. (Similarly, you can start with the matrix $A$ and, given the bases, recover $T$ by simply taking the columns of $A$ as the images of the $e_i$.)

Once you've got this, the eigenvalues and eigenvectors of $T$ are exactly the eigenvalues and eigenvectors of $A$. Eigenfunctions are a special case of eigenvectors, where $U$ is a function space.

You can also define eigenvalues and eigenvectors of a linear operator directly, in the same way as you do with matrices:

With $T, U$ as above and $V = U$, the eigenvectors of $T$ are the non-zero vectors $u \in U$ such that $T(u) = \lambda u$ for some $\lambda \in \mathbb{K}$, and the eigenvalues of $T$ are the $\lambda \in \mathbb{K}$ for which there is some non-zero $u \in U$ with $T(u) = \lambda u$.


This looks like it will be really helpful but I think something has gone wrong with your LaTeX code :frown:
Original post by Implication
This looks like it will be really helpful but I think something has gone wrong with your LaTeX code :frown:


Fixed. Damn auto-smiley.
Woah. BlueSam, I can't deny you know exactly what you're on about, but that was some major maths-speak there!

OP, I've just done my quantum physics module so I'll tell you what I know and it seems to be enough, at least as far as I can tell. (Apologies in advance to the hardcore mathematicians here - I'm probably about to use a vast amount of incorrect terminology, but hopefully my physics student counterpart will understand!)

First of all, 'eigen' means 'own', 'particular', 'individual'. In some languages eigenvectors are translated to 'autovectors'.
So we're talking about vectors that have some form of individuality to them. But what does this mean?

The basic eigenvalue equation looks like this: AX = λX

X is an eigenvector and λ is an eigenvalue.
So X might be a column vector such as (a, b, c), while λ is a single number.

When you take the product AX, the result is λX.

There are often several such X's, and for each product AX you get a corresponding λX.
For every eigenvector there is a corresponding eigenvalue (though different eigenvectors can share one).

The reason they're special is because the equation AX=λX is only true for very particular X and λ.
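To see that "very particular" part concretely, here's a tiny numpy check (my own example, not from the thread): for a generic matrix, a random vector X is *not* mapped to a multiple of itself, while an eigenvector is.

```python
import numpy as np

# Example matrix (illustrative): stretches x by 2 and y by 3
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

# A generic vector is NOT an eigenvector: A @ [1, 1] = [2, 3],
# which points in a different direction from [1, 1].
generic = np.array([1.0, 1.0])
out = A @ generic
assert not np.allclose(out / out[0], generic / generic[0])

# But [1, 0] IS an eigenvector, with eigenvalue 2: it only gets scaled.
x = np.array([1.0, 0.0])
assert np.allclose(A @ x, 2.0 * x)
```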

As with most mathematical discoveries, eigenvectors will have just been a 'cool thing', which later then turned out to be highly useful for describing the world around us. In our case, we now use them for quantum physics!

So we have the (time-independent) Schrödinger equation:

−(ℏ²/2m) d²Ψ(x)/dx² + V(x)Ψ(x) = EΨ(x)
If you factor the Ψ(x) out of the left-hand side, you can write it in the form

HΨ(x)=EΨ(x)

with 'H' being:

H = −(ℏ²/2m) d²/dx² + V(x)
Can you see how it looks like an eigenvalue equation?:

HΨ(x) = EΨ(x)   ⟷   AX = λX

This is why the Schrödinger equation is an eigenvalue equation, and is where the term 'energy eigenvalues' comes from. A quantum system can only take certain energies because E must be such that HΨ(x) = EΨ(x) has a solution.

In the case of most quantum physics problems, you will know Ψ(x), so you simply plug it into the equation and solve for the final unknown, E. You'll find that there are often multiple possible values of E. By finding these 'energy eigenvalues', you know which energy states the quantum system can exist in.

Essentially, for every given wave function Ψ(x), you can calculate the discrete energy levels E it can exist under. These sorts of calculations are what paved the way for the solution of things like the ultraviolet catastrophe, which I presume you've learned about already.
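If you want to see the "solve for E" step happen numerically, here's a sketch (my own, not from the thread): discretise the 1D time-independent Schrödinger equation on a grid so that H becomes an ordinary matrix, then ask numpy for its eigenvalues. With a harmonic-oscillator potential V(x) = x²/2, in units where ℏ = m = ω = 1, the exact levels are Eₙ = n + 1/2.

```python
import numpy as np

# Grid for the spatial coordinate x (wide enough that Psi ~ 0 at the edges)
N = 1000
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

# Kinetic term -(1/2) d^2/dx^2 via the second-difference stencil,
# plus the potential V(x) = x^2 / 2 on the diagonal
main = 1.0 / dx**2 + 0.5 * x**2        # diagonal entries
off = -0.5 / dx**2 * np.ones(N - 1)    # off-diagonal kinetic terms
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# The eigenvalues of the Hamiltonian matrix are the allowed energies
E = np.linalg.eigvalsh(H)  # sorted ascending
```

The lowest few computed eigenvalues come out close to 0.5, 1.5, 2.5, matching the textbook oscillator spectrum.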

Sorry if any of the quantum physics stuff was patronisingly oversimplified for you. I just wanted to show how the Schrödinger equation itself is an eigenvalue equation!

Hopefully this helps in some small way.
Original post by Implication
How does one find said matrix? Should it be trivial..?
If you are talking about operators on finite-dimensional vector spaces, you just form the matrix from the images of the basis vectors under the operator.

If you are talking about more general linear operators (e.g. the derivative operator f(x) → f'(x) on a space of functions), then your matrix would have to be infinite (and not necessarily even countably so), so it isn't really true that you can find a matrix corresponding to the operator.
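One caveat worth adding: if you truncate the function space to a finite-dimensional piece, you do get a matrix. A sketch of my own (not from the thread): restrict d/dx to polynomials of degree < 4, with basis {1, x, x², x³}; the derivative keeps you inside that space, so the operator has a 4×4 matrix built from the images of the basis vectors.

```python
import numpy as np

# d/dx on {1, x, x^2, x^3}:  1 -> 0,  x -> 1,  x^2 -> 2x,  x^3 -> 3x^2.
# The columns below are those images, written in the same basis.
D = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0, 0.0]])

# Coefficient vector of p(x) = 1 + 2x + 3x^2 in this basis
p = np.array([1.0, 2.0, 3.0, 0.0])

# Matrix-vector product gives the coefficients of p'(x) = 2 + 6x
dp = D @ p
assert np.allclose(dp, [2.0, 6.0, 0.0, 0.0])
```

Note that D is nilpotent (differentiating four times kills any cubic), so its only eigenvalue is 0, with the constant polynomials as eigenvectors.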
