Linear Algebra --- Dimension of Eigenspace

Suppose A is an n by n matrix such that A^2 = A. By considering the eigenvalues of A, prove that either (i) detA = 1 and TrA = n, or (ii) detA = 0 and TrA is an integer less than n.

(I have done this part of the question. I am stuck on this next bit)

If Ax=y, what is Ay? What is the dimension of the space of nonvanishing vectors y for the two cases mentioned above?
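
For a concrete feel for what this is asking, here is a quick numerical sketch (Python/numpy, with a made-up 3x3 projection matrix --- any idempotent A would do):

import numpy as np

# A made-up idempotent matrix: projection of R^3 onto the plane z = 0.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])
assert np.allclose(A @ A, A)            # A^2 = A

print(np.linalg.eigvals(A))             # eigenvalues are 1, 1, 0
print(np.linalg.det(A), np.trace(A))    # detA = 0 and TrA = 2 < 3, so this A is case (ii)

x = np.array([1.0, 2.0, 3.0])
y = A @ x
print(np.allclose(A @ y, y))            # True: Ay = A^2 x = Ax = y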

Reply 1
Original post by TBendy
Suppose A is an n by n matrix such that A^2 = A. By considering the eigenvalues of A, prove that either (i) detA = 1 and TrA = n, or (ii) detA = 0 and TrA is an integer less than n.

(I have done this part of the question. I am stuck on this next bit)

If Ax=y, what is Ay? What is the dimension of the space of nonvanishing vectors y for the two cases mentioned above?




You can work out the algebraic multiplicity of the eigenvalues 1 and 0 from the trace of A, and then you can use the form of the minimal polynomial of A to work out the corresponding geometric multiplicities. Let's say the algebraic multiplicity of the eigenvalue 1 is k - do you know what the minimal polynomial of A would look like if the geometric multiplicity of the eigenvalue 1 wasn't also k?
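
To see those two multiplicities agreeing numerically, here is a rough check in Python/numpy on a made-up idempotent matrix (a random orthogonal projection --- purely an illustration, not a proof):

import numpy as np

n = 4
B = np.random.rand(n, 2)                 # random n x 2 matrix, full column rank with probability 1
A = B @ np.linalg.inv(B.T @ B) @ B.T     # orthogonal projection onto the column space of B
assert np.allclose(A @ A, A)             # A^2 = A

alg_mult_1 = round(np.trace(A))                           # TrA counts the eigenvalue 1 with multiplicity
geom_mult_1 = n - np.linalg.matrix_rank(A - np.eye(n))    # dim ker(A - I), the 1-eigenspace
print(alg_mult_1, geom_mult_1)                            # both come out as 2 here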
Reply 2
Original post by Mark13
You can work out the algebraic multiplicity of the eigenvalues 1 and 0 from the trace of A, and then you can use the form of the minimal polynomial of A to work out the corresponding geometric multiplicities. Let's say the algebraic multiplicity of the eigenvalue 1 is k - do you know what the minimal polynomial of A would look like if the geometric multiplicity of the eigenvalue 1 wasn't also k?


No I don't --- I have not met the concept of 'minimal polynomial'! Could you please explain to me what it is and how it applies to this question?

Thanks for your help.
Reply 3
Original post by TBendy
No I don't --- I have not met the concept of 'minimal polynomial'! Could you please explain to me what it is and how it applies to this question?

Thanks for your help.


No worries.

If you haven't met the idea of a minimal polynomial, you can do the question in other ways - it's just the first way I thought to do it.

First off, are you familiar with the idea of a generalised eigenspace?
Reply 4
Original post by Mark13
No worries.

If you haven't met the idea of a minimal polynomial, you can do the question in other ways - it's just the first way I thought to do it.

First off, are you familiar with the idea of a generalised eigenspace?


I am aware of the definition, but not familiar with it. This question may assume knowledge I do not have; it was not made by the same people who decided on my course's syllabus --- so it may be that the concepts of minimal polynomials / generalised eigenspaces are necessary in order to tackle this question. However, my limited knowledge of linear algebra leads me to ask whether you could find the most elementary way of solving this problem. If this most elementary way does require concepts such as the above, then I am more than willing to learn about them.

Thanks
Reply 5
Original post by TBendy
I am aware of the definition, but not familiar with it. This question may assume knowledge I do not have; it was not made by the same people who decided on my course's syllabus --- so it may be that the concepts of minimal polynomials / generalised eigenspaces are necessary in order to tackle this question. However, my limited knowledge of linear algebra leads me to ask whether you could find the most elementary way of solving this problem. If this most elementary way does require concepts such as the above, then I am more than willing to learn about them.

Thanks


The way I'd approach the question is:

You've already identified that the image of A consists of eigenvectors of eigenvalue 1.
You can show that the image of A must consist of all eigenvectors of eigenvalue 1.
You can find the dimension of the generalised eigenspace corresponding to 1 in terms of the trace of A.
You can show that the eigenspace corresponding to 1 is actually the generalised eigenspace corresponding to 1, i.e. if (A-I)^r x = 0 for some positive integer r then (A-I)x = 0, and from all the above steps, you'll get the dimension of the image of A.

Let me know if any of that's unclear!
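
As a rough numerical sketch of those steps (Python/numpy, on a made-up idempotent A --- any example would do):

import numpy as np

# A made-up, non-symmetric idempotent, so nothing below relies on A being an orthogonal projection.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
assert np.allclose(A @ A, A)

# Step 1: anything of the form y = Ax satisfies Ay = A^2 x = Ax = y.
x = np.random.rand(3)
y = A @ x
assert np.allclose(A @ y, y)

# Steps 2-4: the dimension of the image of A (its rank) matches TrA.
print(np.linalg.matrix_rank(A), round(np.trace(A)))       # both 2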
Reply 6
Original post by Mark13
The way I'd approach the question is:

You've already identified that the image of A consists of eigenvectors of eigenvalue 1.
You can show that the image of A must consist of all eigenvectors of eigenvalue 1.
You can find the dimension of the generalised eigenspace corresponding to 1 in terms of the trace of A.
You can show that the eigenspace corresponding to 1 is actually the generalised eigenspace corresponding to 1, i.e. if (A-I)^r x = 0 for some positive integer r then (A-I)x = 0, and from all the above steps, you'll get the dimension of the image of A.

Let me know if any of that's unclear!


OK, so how do you find the dimension of the generalised 1-eigenspace using the trace?

Cheers
Reply 7
Original post by TBendy
OK, so how do you find the dimension of the generalised 1-eigenspace using the trace?

Cheers


The dimension of the generalised 1-eigenspace is the same as the multiplicity of 1 as a root of the characteristic equation (I tend to think about these kinds of things with Jordan normal form in mind, but you might not have heard of that, so I'm not sure of the best way to prove this from your standpoint if you haven't seen it before).

So the problem is reduced to finding the multiplicity of 1 as a root of the characteristic equation.

Since A^2=A, A can have at most 2 distinct eigenvalues - can you see what they must be? Based on this, you can work out from the trace of A what the multiplicity of the e'val 1 is, since the trace is the sum of the eigenvalues (counted with multiplicity).
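
Spelling that out: every eigenvalue λ of A satisfies λ^2 = λ, so λ is 0 or 1 (which you'll have already used in the first part of the question). If 1 occurs with algebraic multiplicity k, then

TrA = 1 + ... + 1 + 0 + ... + 0 = k

with k ones and n-k zeros, so k is just TrA.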
Reply 8
Original post by Mark13
The dimension of the generalised 1-eigenspace is the same as the multiplicity of 1 as a root of the characteristic equation (I tend to think about these kinds of things with Jordan normal form in mind, but you might not have heard of that, so I'm not sure of the best way to prove this from your standpoint if you haven't seen it before).

So the problem is reduced to finding the multiplicity of 1 as a root of the characteristic equation.

Since A^2=A, A can have at most 2 distinct eigenvalues - can you see what they must be? Based on this, you can work out from the trace of A what the multiplicity of the e'val 1 is, since the trace is the sum of the eigenvalues (counted with multiplicity).


OK, so the algebraic multiplicity of the eigenvalue 1 is the trace of A, and so we conclude that the dimension of the generalised 1-eigenspace is also equal to TrA --- using a result which I am not familiar with, but for now let me accept this as true.

Then we know that if a vector is in the generalised 1-eigenspace it is a solution of A^n x = x (for some positive integer n), but since A^n = A for all positive integers n, such a vector is in the 1-eigenspace, and hence in the image. So the dimension of the image is TrA... And no appeal to the determinant is required.

Is this right? If so, can I ask how we conclude that the dimension of the generalised 1-eigenspace is equal to the algebraic multiplicity of the eigenvalue 1?

Thank you.
Reply 9
Original post by TBendy

Then we know that if a vector is in the generalised 1-eigenspace it is a solution of A^n x = x


That's not quite right - a vector x is in the generalised eigenspace of an eigenvalue λ if (A - λI)^r x = 0 for some positive integer r. You're on the right lines with the rest of it though.
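
For a made-up example of the difference (nothing to do with this particular A): take the 2x2 matrix J with rows (1,1) and (0,1). Then (J-I)^2 = 0, so every vector is a generalised 1-eigenvector, but (J-I)(0,1) = (1,0) is nonzero, so (0,1) is not an actual 1-eigenvector. The point here is that this gap can't happen when A^2 = A.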

Is this right? If so, can I ask how we conclude that the dimension of the generalised 1-eigenspace is equal to the algebraic multiplicity of the eigenvalue 1?


It comes from a more general result about the decomposition of a vector space into generalised eigenspaces of a linear map - it's been a couple of years since I've seen the proof so I can't remember it off-hand, but I would think that most first courses in linear algebra would address this kind of thing, so it might be worth looking ahead in the notes you're working from.
Reply 10
Original post by Mark13
That's not quite right - a vector x is in the generalised eigenspace of an eigenvalue λ if (A - λI)^r x = 0 for some positive integer r. You're on the right lines with the rest of it though.



It comes from a more general result about the decomposition of a vector space into generalised eigenspaces of a linear map - it's been a couple of years since I've seen the proof so I can't remember it off-hand, but I would think that most first courses in linear algebra would address this kind of thing, so it might be worth looking ahead in the notes you're working from.


Ah OK, thank you. My final question, then: how do we conclude that the 1-eigenspace is the same as the generalised 1-eigenspace?
Reply 11
Original post by TBendy
Ah OK, thank you. My final question, then: how do we conclude that the 1-eigenspace is the same as the generalised 1-eigenspace?


Since A^2=A, you know that 0=A^2-A=A(A-I). So if x is in the generalised 1-eigenspace, then (A-I)x is in the 0-eigenspace, and also in the generalised 1-eigenspace (since A-I maps that space into itself). Since it's in the generalised 1-eigenspace, we have

(A-I)^r (A-I)x = 0

for some positive integer r, but since A(A-I)x=0, expanding (A-I)^r by the binomial theorem kills every term containing a factor of A, so this simplifies to

(-I)^r (A-I)x = 0

so (A-I)x=0, so x is in the 1-eigenspace. Therefore the generalised 1-eigenspace is the same as the 1-eigenspace.
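
To see the cancellation in a small case, write y = (A-I)x and take r = 2:

(A-I)^2 y = A^2 y - 2Ay + y = 0 - 0 + y

using Ay = 0 and A^2 y = A(Ay) = 0; so (A-I)^2 y = 0 forces y = 0, i.e. (A-I)x = 0, and the same cancellation happens for any r.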
Reply 12
Original post by Mark13
Since A^2=A, you know that 0=A^2-A=A(A-I). So if x is in the generalised 1-eigenspace, then (A-I)x is in the 0-eigenspace, and also in the generalised 1-eigenspace (since A-I maps that space into itself). Since it's in the generalised 1-eigenspace, we have

(A-I)^r (A-I)x = 0

for some positive integer r, but since A(A-I)x=0, expanding (A-I)^r by the binomial theorem kills every term containing a factor of A, so this simplifies to

(-I)^r (A-I)x = 0

so (A-I)x=0, so x is in the 1-eigenspace. Therefore the generalised 1-eigenspace is the same as the 1-eigenspace.


Ah, OK! Thank you very much! So this then hinges on the result which I'm not familiar with, that the algebraic multiplicity of an eigenvalue is the dimension of the generalised eigenspace corresponding to that eigenvalue, but with that crucial result I understand the solution.

Cheers again :smile:
