The Student Room Group

Muon's Daily Revision Summary!


Reply 80
Original post by EnglishMuon

Similar techniques can be used to show the other common properties too.


This could make for a good STEP III question if adapted correctly, something like that question in III 1998 about the beta function in disguise!



If we have the vector equation \lambda \mathbf{a} + \mu \mathbf{b} = 0 (where \lambda, \mu are scalars), we can only say this equation implies \lambda = \mu = 0 iff a and b are linearly [b]dependent[/b] (i.e. \mathbf{a} \cdot \mathbf{b} = 0 ) as otherwise a could be expressed in terms of b and vice versa. This can be used in proofs by contradiction, e.g. showing some expression = 0 iff a and b satisfy these properties.


Did you mean independent for the bit I bolded? Do you have an example of this? I think I kind of get the gist, but I'm not entirely sure. :redface:
Original post by Zacken
This could make for a good STEP III question if adapted correctly, something like that question in III 1998 about the beta function in disguise!





Did you mean independent for the bit I bolded? Do you have an example of this? I think I kind of get the gist, but I'm not entirely sure. :redface:


lol yep, sorry about the typo, I was in a rush for food :wink:. I'll add an example after dinner!
Day 16 Summary
One important thing for all exams is coping with stress, both during revision and in the exam itself. I think I need to remember the situation is not as bad as it may first seem. For example, today I went to pieces during a STEP III paper. It all started when I went blank and missed the extremely basic idea of rewriting complex conjugates:
Let \alpha = e^{ \frac{2 \pi i}{7}}. Then the roots of the quadratic equation z^{2}+z+2=0 are \alpha + \alpha^{2} + \alpha^{4} and the complex conjugate of this root.
But suppose we wanted to write this conjugate root in terms of powers of \alpha; then we do the obvious and reflect in the real axis via
( \alpha + \alpha^{2} + \alpha^{4} )^{*} = \alpha^{*} + ( \alpha^{2} )^{*} + ( \alpha^{4} )^{*}.
E.g. \alpha^{*} = e^{ \frac{-2 \pi i}{7}} = e^{ \frac{-2 \pi i}{7} + 2 \pi i} = e^{ \frac{12 \pi i}{7}} = \alpha^{6}, so the conjugate root is \alpha^{6} + \alpha^{5} + \alpha^{3}. So we did something elementary to something obvious and ended up with something easy, and yet somehow it brought me to the verge of tears.
Sure, this could well be the worst and wimpiest story of all time, but I thought it was a good reminder for myself in future. I think I still scraped an S on that paper, so it goes to show I should stop being a moany old floppy lemon and just get on with it!
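The root-and-conjugate claim above is easy to sanity-check numerically; a minimal sketch in plain Python (just an illustration, not part of the paper):

```python
import cmath

# alpha = e^{2*pi*i/7}, as in the Day 16 example
alpha = cmath.exp(2j * cmath.pi / 7)

# s = alpha + alpha^2 + alpha^4 should be a root of z^2 + z + 2 = 0
s = alpha + alpha**2 + alpha**4
print(abs(s**2 + s + 2))          # ~0, up to floating-point error

# its conjugate should be alpha^3 + alpha^5 + alpha^6 (since alpha* = alpha^6)
t = alpha**3 + alpha**5 + alpha**6
print(abs(s.conjugate() - t))     # ~0
```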
Day 17 Summary
A technique that is 'not specifically required' but extremely common, popping up everywhere in STEP, is the triangle inequality, |a+b| \leq |a|+|b| for any real a, b.
It originates from looking at two vectors and their resultant: drawing a parallelogram, the diagonal (of length |a+b| ) is always less than or equal to the sum of the two side lengths (and the equality case only occurs when the parallelogram has 0 area, i.e. the vectors are scalar multiples of each other).

One thing I didn't think about for a while, though, is a concrete proof of this, so here is my attempt:

By definition, |x| = \max(x,-x), so |a||b| = |ab| \geq ab, hence
a^{2}+b^{2}+2|a||b| \geq a^{2}+b^{2}+2ab
\Rightarrow (|a|+|b|)^{2} \geq |a+b|^{2}
\Rightarrow |a+b| \leq |a|+|b|
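The inequality (and the equality case for same-sign a and b) can be spot-checked on random reals; a minimal Python sketch:

```python
import random

random.seed(0)
for _ in range(1000):
    a = random.uniform(-10, 10)
    b = random.uniform(-10, 10)
    assert abs(a + b) <= abs(a) + abs(b)       # the inequality itself
    if a * b >= 0:                             # equality case: same sign
        assert abs(a + b) == abs(a) + abs(b)
print("triangle inequality holds on 1000 random pairs")
```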
Day 18 Summary
A lovely proof that reminds me of the proper structure proofs should have is that of L'Hôpital's rule. This is one of the first (and few) formal limit-based questions I managed to (mostly) solve myself, so hopefully it makes sense!

Mean Value Theorem

Suppose that f and g are continuous functions on the interval [a,b], differentiable on (a,b), and g'(x) \not= 0 \ \forall x \in (a,b).
Let's choose a constant h so that F = f + hg satisfies F(a) = F(b) (which will then allow us to apply Rolle's theorem).
So
f(a)+hg(a)=f(b)+hg(b) \Rightarrow h = - \dfrac{f(b)-f(a)}{g(b)-g(a)}
Then by Rolle's theorem, \exists \xi \in (a,b) such that 0 = F'( \xi) = f'( \xi) + hg'( \xi) \Rightarrow h = - \dfrac{f'( \xi)}{g'( \xi)}.

i.e. \dfrac{f'( \xi)}{g'( \xi)} = \dfrac{f(b)-f(a)}{g(b)-g(a)} for some \xi.

L'Hôpital's Rule

Now suppose f(a) = g(a) = 0.
Then
\displaystyle\lim_{x\to a^{+}} \dfrac{f(x)}{g(x)} = \lim_{x\to a^{+}} \dfrac{f(x)-0}{g(x)-0} = \lim_{x\to a^{+}} \dfrac{f(x)-f(a)}{g(x)-g(a)}
Applying the result above on [a,x], this last quotient equals \dfrac{f'( \xi)}{g'( \xi)} for some \xi \in (a,x); but \xi \rightarrow a as x \rightarrow a^{+}, hence
\displaystyle\lim_{x\to a^{+}} \dfrac{f(x)}{g(x)} = \displaystyle\lim_{x\to a^{+}} \dfrac{f'(x)}{g'(x)}

A nice example where the application may not be obvious is in evaluating \displaystyle\lim_{x\to + \infty} x - \sqrt{1+x^{2}}
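Numerically the limit behaves as expected: rationalising gives x - \sqrt{1+x^{2}} = \dfrac{-1}{x + \sqrt{1+x^{2}}} \approx - \dfrac{1}{2x}, and a quick Python check (a sketch, not part of the question) agrees:

```python
import math

def f(x):
    return x - math.sqrt(1 + x * x)

# compare against the asymptotic estimate -1/(2x): both tend to 0 from below
for x in [1e2, 1e4, 1e6]:
    print(x, f(x), -1 / (2 * x))
```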
Reply 85
Original post by EnglishMuon
...


Whilst what you've written is true in the first section, it's a consequence of the mean value theorem and not actually the mean value theorem itself, isn't it? (I might be being stupid here).

The way I think of the mean value theorem is that "okay, given an interval there's one point on the curve that 'gets lucky' and has gradient that's the gradient formed by just drawing the secant line there". So, the mean value theorem states that

\displaystyle f'(c) = \frac{f(b) - f(a)}{b-a} where f is some continuous function on the interval [a,b], differentiable on (a,b), and a < c < b.

So whilst this certainly does imply that \frac{f'(x)}{g'(x)} = \cdots, it's not actually the theorem itself.
Original post by Zacken
Whilst what you've written is true in the first section, it's a consequence of the mean value theorem and not actually the mean value theorem itself, isn't it? (I might be being stupid here).

The way I think of the mean value theorem is that "okay, given an interval there's one point on the curve that 'gets lucky' and has gradient that's the gradient formed by just drawing the secant line there". So, the mean value theorem states that

\displaystyle f'(c) = \frac{f(b) - f(a)}{b-a} where f is some continuous function on the interval [a,b], differentiable on (a,b), and a < c < b.

So whilst this certainly does imply that \frac{f'(x)}{g'(x)} = \cdots, it's not actually the theorem itself.


Yep, I just think it's nice to see where the mean value theorem comes from itself. I mean, I could carry on working backwards and derive Rolle's theorem etc. and explain what a limit actually is, but it seems the mean value theorem is effectively a direct step in the proof. I suppose it's just that the mean value theorem has many other uses, so it's labelled separately.
Reply 87
Original post by EnglishMuon
Yep, I just think it's nice to see where the mean value theorem comes from itself. I mean, I could carry on working backwards and derive Rolle's theorem etc. and explain what a limit actually is, but it seems the mean value theorem is effectively a direct step in the proof. I suppose it's just that the mean value theorem has many other uses, so it's labelled separately.


Yeah, okay, fair enough. I was just a little confused because you used it under the title of "mean value theorem" so just wanted to check. :yep:
Original post by Zacken
Yeah, okay, fair enough. I was just a little confused because you used it under the title of "mean value theorem" so just wanted to check. :yep:


Ah no problem, I sometimes merge them together in my head so it was worth checking :tongue:

You seem to know a decent amount of analysis for someone who doesn't know any analysis. :wink:
(edited 7 years ago)
Day 19 Summary
A standard technique, but one which people often seem to forget about, is partial fractions with repeated roots and some of their applications. I am even guilty of messing this up once or twice myself, so hopefully I won't again after writing this!
Firstly, remember the straightforward rule about repeated roots:
\dfrac{f(x)}{g(x)(x-a)^{n}} \equiv \dfrac{A}{x-a} + \dfrac{B}{(x-a)^{2}} + ... + \dfrac{K}{(x-a)^{n}} + the decomposition of g(x) into linear factors.
Also, remember the fraction associated with the factor (x^{n}+a) is of the form \dfrac{K_{1}+K_{2}x+...+K_{n}x^{n-1}}{x^{n}+a}

In the first example, if the degree of f(x) is greater than or equal to the degree of g(x)(x-a)^{n}, use algebraic long division first!

Partial fractions are used extremely often for infinite series. For example, consider the abomination that is
f(x) = \dfrac{x^{5}-2x^{4}+x^{3}+x+5}{x^{3}-2x^{2}+x-2}. By dividing out and applying the partial fraction techniques above, we see that
f(x) = x^{2} + \dfrac{3}{x-2} - \dfrac{x+1}{x^{2}+1}
We can use this form to our advantage, e.g. to work out the coefficient of x^{n} in its series expansion.
The working may then start with
3(x-2)^{-1} = - \frac{3}{2} \left( 1 - \frac{x}{2} \right)^{-1}, so the general term of this factor is...
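A quick numerical sanity check of the decomposition, plus one possible continuation of the expansion using the geometric series for - \frac{3}{2}(1 - \frac{x}{2})^{-1} (plain Python; everything here is just illustration):

```python
# f and g are the original rational function and its decomposed form
def f(x):
    return (x**5 - 2*x**4 + x**3 + x + 5) / (x**3 - 2*x**2 + x - 2)

def g(x):
    return x**2 + 3 / (x - 2) - (x + 1) / (x**2 + 1)

for x in [0.5, 1.0, -3.0, 10.0]:
    assert abs(f(x) - g(x)) < 1e-9   # the two forms agree away from the poles

# expanding -(3/2)(1 - x/2)^{-1} as a geometric series and summing at x = 0.1
x = 0.1
series = sum(-(3 / 2) * (x / 2) ** n for n in range(50))
assert abs(series - 3 / (x - 2)) < 1e-12
print("decomposition and expansion check out")
```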
(edited 7 years ago)
Day 20 Summary
Even though more advanced matrix techniques are not on the A-level/STEP syllabus, knowledge of some of them can prove extremely useful!
Here are a few nice techniques for finding the determinant of any n \times n matrix:
Let \mathbf{A} be a square n \times n matrix. Let \mathbf{B} be the matrix formed by swapping any two rows or any two columns of \mathbf{A} (just one pair).
Then det( \mathbf{B}) = -det( \mathbf{A})
Let \mathbf{C} be formed by multiplying a row by a scalar \alpha and adding it to another row, or doing the same with columns.
Then det( \mathbf{C}) = det( \mathbf{A})
And also the "sensible yet could save a lot of time" theorem:
If \mathbf{A} has two equal rows or two equal columns (i.e. the elements are the same), then det( \mathbf{A}) = 0

(Please excuse my italicised 'det's; I have broken a finger so I'm reluctant to type any further! :tongue:)
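The three rules are easy to watch in action numerically; a minimal sketch in plain Python (the det helper below is hypothetical, written just for this check via cofactor expansion along the first row):

```python
def det(m):
    # cofactor expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

A = [[2, 1, 3],
     [0, 4, 1],
     [5, 2, 2]]

B = [A[1], A[0], A[2]]                                      # swap two rows
C = [A[0], [a + 3 * b for a, b in zip(A[1], A[0])], A[2]]   # row2 += 3*row1
D = [A[0], A[0], A[2]]                                      # two equal rows

print(det(B) == -det(A))  # True: a row swap flips the sign
print(det(C) == det(A))   # True: adding a multiple of a row preserves det
print(det(D) == 0)        # True: equal rows give determinant 0
```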
(edited 7 years ago)
Day 21-23 Summary
After having a bit of a break for the past few days, I thought I would explore some of the more interesting STEP questions out there! I also believe reviewing my work and having other people comment on it with their differing thought processes helps me improve further!
So I thought I would write up my solution to the beastly question that is Q3 STEP III 1989 and share some other thoughts on it, especially since I have not seen a solution online :smile:

The matrix M \mathbf{M} is given by
\mathbf{M} = \begin{pmatrix} \cos \dfrac{2 \pi}{m} & - \sin \dfrac{2 \pi}{m} \\ \sin \dfrac{2 \pi}{m} & \cos \dfrac{2 \pi}{m} \end{pmatrix}. Prove that \mathbf{M}^{m-1} + \mathbf{M}^{m-2} + ... + \mathbf{M}^{2} + \mathbf{M} + \mathbf{I} = \mathbf{O}
A few ideas came to mind when first seeing this question. Firstly, the form above is very reminiscent of a geometric progression; however, it would be illogical to apply the summation formula blindly as I have no reason why it should hold here (especially as I would have to replace 1 with \mathbf{I} to get the required answer). Also, it's clear \mathbf{M} is a rotation matrix, which reminded me of the mth roots of unity.
For example, if z^{m}=1, then z = e^{ \frac{2 \pi k i}{m}}. By writing z^{m}-1=0 as a product of factors, we can see the coefficient of z^{m-1} is (minus) the sum of the roots, and it must equal 0; i.e. letting the first root be \alpha, we have \alpha^{m-1} + \alpha^{m-2} + ... + \alpha + 1 = 0, which is very similar to the expression in the question. The idea that the roots of unity can represent rotations makes me think there is a deeper relevance of complex numbers here.
Anyway, there were two ways I thought worked here. Firstly, writing \mathbf{M}^{m-1} + \mathbf{M}^{m-2} + ... + \mathbf{M}^{2} + \mathbf{M} + \mathbf{I} = \begin{pmatrix} \displaystyle\sum_{i=0}^{m-1} \cos \dfrac{2 \pi i}{m} & - \displaystyle\sum_{i=0}^{m-1} \sin \dfrac{2 \pi i}{m} \\ \displaystyle\sum_{i=0}^{m-1} \sin \dfrac{2 \pi i}{m} & \displaystyle\sum_{i=0}^{m-1} \cos \dfrac{2 \pi i}{m} \end{pmatrix}
seems to get the answer almost straight away: the sums \displaystyle\sum_{i=0}^{m-1} \cos \dfrac{2 \pi i}{m} and \displaystyle\sum_{i=0}^{m-1} \sin \dfrac{2 \pi i}{m} are the real and imaginary parts of \displaystyle\sum_{k=0}^{m-1} \left( e^{ \frac{2 \pi i}{m}} \right)^{k}, the sum of all the mth roots of unity, which is 0, so both sums vanish (hence we get the zero matrix).
I am unsure, though, whether this would be enough working to convince an examiner, even though I'm sure it works.
We could however proceed by the laborious process of induction to prove that \mathbf{M}^{k} = \begin{pmatrix} \cos \dfrac{2k \pi}{m} & - \sin \dfrac{2k \pi}{m} \\ \sin \dfrac{2k \pi}{m} & \cos \dfrac{2k \pi}{m} \end{pmatrix} (which, again, could be deduced from noting it's a rotation matrix, though I'm not sure that is 'concrete' enough). In particular \mathbf{M}^{m} = \mathbf{I}, so \mathbf{M}^{m} - \mathbf{I} = \mathbf{O} \Rightarrow ( \mathbf{M} - \mathbf{I})( \mathbf{M}^{m-1} + \mathbf{M}^{m-2} + ... + \mathbf{M}^{2} + \mathbf{M} + \mathbf{I}) = \mathbf{O}. As \mathbf{M} - \mathbf{I} is invertible (1 is not an eigenvalue of a rotation by 2 \pi /m for m > 1), the second bracket must equal \mathbf{O}.
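Either way, the result can be sanity-checked numerically; a minimal Python sketch summing powers of the rotation matrix (m = 7 chosen arbitrarily, not from the question):

```python
import math

m = 7
t = 2 * math.pi / m
M = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = [[0.0, 0.0], [0.0, 0.0]]
P = [[1.0, 0.0], [0.0, 1.0]]  # M^0 = I
for _ in range(m):
    S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    P = matmul(P, M)

# S = I + M + ... + M^{m-1}; its largest entry should be numerically zero
print(max(abs(S[i][j]) for i in range(2) for j in range(2)))
```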

\mathbf{X}_{k+1} = \mathbf{P} \mathbf{X}_{k} + \mathbf{Q}.
Find and prove an expression for \mathbf{X}_{k} in terms of \mathbf{X}_{0}, \mathbf{P}, \mathbf{Q}.

After looking at a couple of terms it's easy to see and prove that \mathbf{X}_{k} = \mathbf{P}^{k} \mathbf{X}_{0} + \left( \displaystyle\sum_{i=0}^{k-1} \mathbf{P}^{i} \right) \mathbf{Q} (note \mathbf{Q} sits on the right, since matrices need not commute; hopefully that's right! :tongue:)

Now for the Group Theory!
NOTE: It turns out I misread the question: I thought we were substituting \mathbf{X}_{0} for \mathbf{X}_{i} and not the other way round. However, I still believe it works (and the actual workings would be very similar).
The binary operation * is defined as follows: \mathbf{X}_{i} * \mathbf{X}_{j} is the result of substituting \mathbf{X}_{j} for \mathbf{X}_{0} in \mathbf{X}_{i}. Show that if \mathbf{P} = \mathbf{M}, the set \chi = \{ \mathbf{X}_{1}, \mathbf{X}_{2}, ... \} forms a finite group under *.

It will be useful to write \mathbf{X}_{i} in terms of \mathbf{X}_{j}: by using the recurrence relation and 'working backwards' from i to j, we can see \mathbf{X}_{i} = \mathbf{P}^{i-j} \mathbf{X}_{j} + \left( \displaystyle\sum_{r=0}^{i-j-1} \mathbf{P}^{r} \right) \mathbf{Q}
Hence
\mathbf{X}_{i} * \mathbf{X}_{j} = \left( \mathbf{P}^{i-j} \mathbf{X}_{j} + \left( \displaystyle\sum_{r=0}^{i-j-1} \mathbf{P}^{r} \right) \mathbf{Q} \right) * \mathbf{X}_{j} = \mathbf{P}^{i-j} \mathbf{X}_{0} + \left( \displaystyle\sum_{r=0}^{i-j-1} \mathbf{P}^{r} \right) \mathbf{Q} = \mathbf{X}_{i-j} (if i \geq j; if not, the resulting element is still \mathbf{X}_{i}.)
Note how then \mathbf{X}_{i} * \mathbf{X}_{0} = \mathbf{X}_{i}, so an identity element exists.
Also, \mathbf{X}_{i} * \mathbf{X}_{i} = \mathbf{X}_{0}, which is our identity element, so an inverse exists for each element.
\mathbf{X}_{i} * \mathbf{X}_{j} = \mathbf{X}_{i-j} or \mathbf{X}_{i}, both in \chi, so our set is closed under the operation. i-j is always less than or equal to i, so our group is finite so long as we have a fixed maximum i value to start with.
The only downfall to this reading mistake is that associativity does not hold:
\mathbf{X}_{i} * ( \mathbf{X}_{j} * \mathbf{X}_{k}) = \mathbf{X}_{i} * \mathbf{X}_{j-k} = \mathbf{X}_{i-j+k} \not= ( \mathbf{X}_{i} * \mathbf{X}_{j}) * \mathbf{X}_{k}.
Repeating similar arguments with \mathbf{X}_{0} and \mathbf{X}_{i} the other way round should show \chi is a finite group under *!
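The index bookkeeping above can be modelled as a toy in Python, encoding each X_i by its subscript (the op helper is hypothetical, a sketch of * under my reading, not from the question):

```python
# X_i * X_j = X_{i-j} if i >= j, else X_i -- working only with the indices
def op(i, j):
    return i - j if i >= j else i

for i in range(10):
    assert op(i, 0) == i   # X_0 acts as a right identity
    assert op(i, i) == 0   # each element is its own inverse
    assert op(i, 3) <= i   # indices never grow, so the set stays finite

# associativity fails, as claimed:
lhs = op(5, op(3, 2))  # X_5 * X_1 = X_4
rhs = op(op(5, 3), 2)  # X_2 * X_2 = X_0
print(lhs, rhs)        # 4 0
```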
@Zacken Good luck reading this in one go :wink: Any advice is appreciated!
Reply 92
Original post by EnglishMuon
...


Currently doing this question and I've just finished the first "prove that" bit, so I thought I'd come here and read your bit on it. I had something slightly similar:

1. It's obviously a rotation so let's see if I can work with that. Hmmm, well, rotating m-1 anticlockwise is the same as rotating m clockwise (you know what I mean), could I pair these terms up pairwise? Is there some condition on m being even/odd only? *check* nope, damn. I'll have to split casewise... this is getting too long, let me give up this approach.

2. Ah! They want induction, obvioooously. *starts writing out induction* lol nopes this isn't gonna work.

3. Okay, so... what series starts out with a number and then keeps that number constant whilst increasing the power. Is this the exponential series? Naaah, you cray m8, no factorials. AHHH!!!! It's a geometric series you retard.

Okay, so:
\displaystyle \mathbf{I} + \mathbf{M} + \mathbf{M}^2 + \cdots + \mathbf{M}^{m-1} = \left( \mathbf{I} - \mathbf{M}^{m} \right)( \mathbf{I} - \mathbf{M})^{-1}.

Since \mathbf{M}^{m} = \mathbf{I} (a full rotation), \mathbf{I} - \mathbf{M}^{m} = \mathbf{O}, and \mathbf{M} \neq \mathbf{I} means the inverse exists, so the sum is \mathbf{O} as required.
Original post by Zacken
Currently doing this question and I've just finished the first "prove that" bit, so I thought I'd come here and read your bit on it. I had something slightly similar:

1. It's obviously a rotation so let's see if I can work with that. Hmmm, well, rotating m-1 anticlockwise is the same as rotating m clockwise (you know what I mean), could I pair these terms up pairwise? Is there some condition on m being even/odd only? *check* nope, damn. I'll have to split casewise... this is getting too long, let me give up this approach.
EDIT: It works now :smile:
2. Ah! They want induction, obvioooously. *starts writing out induction* lol nopes this isn't gonna work.

3. Okay, so... what series starts out with a number and then keeps that number constant whilst increasing the power. Is this the exponential series? Naaah, you cray m8, no factorials. AHHH!!!! It's a geometric series you retard.

Okay, so:
\displaystyle \mathbf{I} + \mathbf{M} + \mathbf{M}^2 + \cdots + \mathbf{M}^{m-1} = \left( \mathbf{I} - \mathbf{M}^{m} \right)( \mathbf{I} - \mathbf{M})^{-1}.

Since \mathbf{M}^{m} = \mathbf{I} (a full rotation), \mathbf{I} - \mathbf{M}^{m} = \mathbf{O}, and \mathbf{M} \neq \mathbf{I} means the inverse exists, so the sum is \mathbf{O} as required.


XD yea, I did change my mind a fair few times when doing this! Apparently your latex is 'dangerous' (which is understandable when looking at the question that it's based upon :wink:). So did you just apply the geometric series straight up, or argue first that you can apply it since it's a rotation matrix?
(edited 7 years ago)
Reply 94
Original post by EnglishMuon
XD yea, I did change my mind a fair few times when doing this! Apparently your latex is 'dangerous' (which is understandable when looking at the question that it's based upon :wink:). So did you just apply the geometric series straight up, or argue first that you can apply it since it's a rotation matrix?


It doesn't need to be a rotation matrix for it to be a geometric series!
Original post by Zacken
It doesn't need to be a rotation matrix for it to be a geometric series!

Hmm yep, ok, that makes sense! I thought it did have to be a type of matrix like the rotation matrix, in that successive powers produce 'linear effects', but I get your point that if we apply a slight variation on the normal geometric series proof, we get the same result independent of the type of matrix. I guess you were thinking along the lines of
\mathbf{S}_{n} = \mathbf{I} + \mathbf{M} + \mathbf{M}^2 + \cdots + \mathbf{M}^{m-1}
\Rightarrow \mathbf{S}_{n} - \mathbf{M} \mathbf{S}_{n} = \mathbf{I} + \mathbf{M} + \mathbf{M}^2 + \cdots + \mathbf{M}^{m-1} - ( \mathbf{M} + \mathbf{M}^2 + \cdots + \mathbf{M}^{m}) = \mathbf{I} - \mathbf{M}^{m}
\Rightarrow \mathbf{S}_{n} = ( \mathbf{I} - \mathbf{M})^{-1}( \mathbf{I} - \mathbf{M}^{m}).
Also, just to check: in your earlier post you said \mathbf{S}_{n} = ( \mathbf{I} - \mathbf{M}^{m})( \mathbf{I} - \mathbf{M})^{-1}. I'm probably wrong, but does that form only hold if \mathbf{M} \mathbf{S}_{n} = \mathbf{S}_{n} \mathbf{M}? As in the proof above we have \mathbf{S}_{n} - \mathbf{M} \mathbf{S}_{n} = ..., not \mathbf{S}_{n} - \mathbf{S}_{n} \mathbf{M} = ..., so the factorisation is different?
Reply 96
Original post by EnglishMuon
Hmm yep, ok, that makes sense! I thought it did have to be a type of matrix like the rotation matrix, in that successive powers produce 'linear effects', but I get your point that if we apply a slight variation on the normal geometric series proof, we get the same result independent of the type of matrix. I guess you were thinking along the lines of
\mathbf{S}_{n} = \mathbf{I} + \mathbf{M} + \mathbf{M}^2 + \cdots + \mathbf{M}^{m-1}
\Rightarrow \mathbf{S}_{n} - \mathbf{M} \mathbf{S}_{n} = \mathbf{I} + \mathbf{M} + \mathbf{M}^2 + \cdots + \mathbf{M}^{m-1} - ( \mathbf{M} + \mathbf{M}^2 + \cdots + \mathbf{M}^{m}) = \mathbf{I} - \mathbf{M}^{m}
\Rightarrow \mathbf{S}_{n} = ( \mathbf{I} - \mathbf{M})^{-1}( \mathbf{I} - \mathbf{M}^{m}).
Also, just to check: in your earlier post you said \mathbf{S}_{n} = ( \mathbf{I} - \mathbf{M}^{m})( \mathbf{I} - \mathbf{M})^{-1}. I'm probably wrong, but does that form only hold if \mathbf{M} \mathbf{S}_{n} = \mathbf{S}_{n} \mathbf{M}? As in the proof above we have \mathbf{S}_{n} - \mathbf{M} \mathbf{S}_{n} = ..., not \mathbf{S}_{n} - \mathbf{S}_{n} \mathbf{M} = ..., so the factorisation is different?


I meant to do your form but I flipped it around because I was rushing out and was typing that as I headed out of the door. Yours is the correct one. Although, it doesn't really matter in this problem because one of the matrices is the zero matrix which is trivially commutative with any other matrix.

I haven't done the rest of the question 'cause I had to go out, but I'm looking forward to doing the next part in like an hour or so. :biggrin:

You been doing any other old interesting STEP q's?
Original post by Zacken
I meant to do your form but I flipped it around because I was rushing out and was typing that as I headed out of the door. Yours is the correct one. Although, it doesn't really matter in this problem because one of the matrices is the zero matrix which is trivially commutative with any other matrix.

I haven't done the rest of the question 'cause I had to go out, but I'm looking forward to doing the next part in like an hour or so. :biggrin:

You been doing any other old interesting STEP q's?


Oh yep, thanks for the help! I've had a look at some of the other questions on that paper too; question 5 is quite nice, although the ideas behind it seem fairly straightforward :smile:
Day 24 Summary
After completing STEP II 2007 today, it turns out I may have narrowly missed out on an S solely because of stupid mistakes! (Question specifics may be shown below)
Consider the question
By considering the derivatives of \ln(x + \sqrt{3+x^{2}}) and x \sqrt{3+x^{2}}, find \displaystyle\int \sqrt{3+x^{2}} \ dx
It's straightforward to show the log differentiates to \dfrac{x + \sqrt{3+x^{2}}}{3+x^{2}+x \sqrt{3+x^{2}}} and the other expression differentiates to \dfrac{3+2x^{2}}{ \sqrt{3+x^{2}}}.
I immediately knew that I could rewrite \displaystyle\int \sqrt{3+x^{2}} \ dx as \displaystyle\int \dfrac{3+2x^{2}}{ \sqrt{3+x^{2}}} - \dfrac{x^{2}}{ \sqrt{3+x^{2}}} \ dx. The first term can easily be integrated using the second derivative above, but the second was not so obvious. That led me to think, "Maybe my first derivative can simplify to give me the one in the integral?" For some reason, I missed the obvious fact that \dfrac{x + \sqrt{3+x^{2}}}{3+x^{2}+x \sqrt{3+x^{2}}} = \dfrac{x + \sqrt{3+x^{2}}}{(x + \sqrt{3+x^{2}}) \sqrt{3+x^{2}}} = \dfrac{1}{ \sqrt{3+x^{2}}}, even though I knew this was exactly what I wanted to get! It seems like such a dumb mistake, and could have cost me many marks, as I ended up using a substitution instead to find the integral, which isn't what they asked for. If I had followed through with my intuition I would definitely have an S in this paper, so I will stick with my gut feeling in future!!!
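Both derivatives (and the simplification I missed) are quick to confirm with central differences; a plain Python sketch, purely illustrative:

```python
import math

def d(f, x, h=1e-6):
    # central-difference estimate of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

f1 = lambda x: math.log(x + math.sqrt(3 + x * x))
f2 = lambda x: x * math.sqrt(3 + x * x)

for x in [0.0, 1.0, 2.5]:
    # f1' should simplify to 1/sqrt(3+x^2), f2' to (3+2x^2)/sqrt(3+x^2)
    assert abs(d(f1, x) - 1 / math.sqrt(3 + x * x)) < 1e-6
    assert abs(d(f2, x) - (3 + 2 * x * x) / math.sqrt(3 + x * x)) < 1e-6
print("derivatives match")
```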
Reply 99
Original post by EnglishMuon
Now for the Group Theory!


Okay, I've gotten that the identity element is \mathbf{X}_{0}. I'm unclear as to how you got each element to be its own inverse though; why exactly is \mathbf{X}_{i} * \mathbf{X}_{i} = \mathbf{X}_{0}? :redface:
