Zacken:
(Original post by EnglishMuon)
Similar techniques can be used to show the other common properties too.

This could make for a good STEP III question if adapted correctly, something like that question in III 1998 about the beta function in disguise!

(Original post by EnglishMuon)
If we have the vector equation  \lambda \mathbf{a}+ \mu \mathbf{b} = 0 (where  \lambda, \mu are scalars), we can only say this equation implies  \lambda = \mu = 0 iff a and b are linearly dependent, as otherwise a could be expressed in terms of b and vice versa. This can be used in proofs by contradiction, e.g. showing some expression = 0 iff a and b satisfy these properties.

Did you mean independent for the bit I bolded? Do you have an example of this? I think I kind of get the gist, but I'm not entirely sure.
EnglishMuon (Thread Starter):
    (Original post by Zacken)
    This could make for a good STEP III question if adapted correctly, something like that question in III 1998 about the beta function in disguise!





Did you mean independent for the bit I bolded? Do you have an example of this? I think I kind of get the gist, but I'm not entirely sure.
lol yep, sorry about the typo, I was in a rush for food. I'll add an example after dinner!
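A minimal sketch of the sort of example meant (my illustration): suppose  \mathbf{a}, \mathbf{b} are linearly independent and  \lambda \mathbf{a}+ \mu \mathbf{b} = 0 . If  \lambda \not= 0 , then  \mathbf{a} = - \frac{ \mu}{ \lambda} \mathbf{b} , expressing a in terms of b and contradicting independence; hence  \lambda = 0 , and similarly  \mu = 0 .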
EnglishMuon (Thread Starter):
    Day 16 Summary
One important thing for all exams is coping with stress, both during revision and in the exam itself. I think I need to remember the situation is not as bad as it may first seem- for example today, I went to pieces during a STEP III paper. It all started when I went blank and missed the extremely basic idea of rewriting complex conjugates:
Let  \alpha= e^{ \frac{2 \pi i}{7}} . Then the roots of the quadratic equation  z^{2}+z+2=0 are  \alpha + \alpha^{2} +\alpha^{4} and the complex conjugate of this root.
But suppose we wanted to write this other conjugate root in terms of alphas; then we do the obvious and reflect in the real axis via  ( \alpha + \alpha^{2} +\alpha^{4} )^{*} = \alpha^{*} + ( \alpha^{2})^{*} + ( \alpha^{4})^{*} . E.g.  \alpha^{*}= e^{ \frac{-2 \pi i}{7}} = e^{ \frac{-2 \pi i}{7}+2 \pi i} = e^{ \frac{12 \pi i}{7}} = \alpha^{6} . So we did something elementary to something obvious and ended up with something easy, and yet somehow it brought me to the verge of tears.
Sure this could well be the worst and wimpiest story of all time, but I thought it was a good reminder for myself in future. I think I still scraped an S on that paper, so it goes to show I should stop being a moany old floppy lemon and just get on with it!
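A quick numerical sanity check of the above (my sketch in Python, not part of the original post):

import cmath

alpha = cmath.exp(2j * cmath.pi / 7)
root = alpha + alpha**2 + alpha**4

print(abs(root**2 + root + 2))            # ~0: root solves z^2 + z + 2 = 0
print(abs(alpha.conjugate() - alpha**6))  # ~0: alpha* = alpha^6
print(abs(root.conjugate() - (alpha**3 + alpha**5 + alpha**6)))  # ~0: the conjugate root in terms of alphas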
EnglishMuon (Thread Starter):
    Day 17 Summary
A technique that is 'not specifically required' for STEP, but which is extremely common and pops up everywhere, is the triangle inequality:  |a+b| \leq |a|+|b| for any real a, b.
It originates from looking at two vectors and their resultant combined vector- this can be seen by drawing a parallelogram and noting the diagonal ( |\mathbf{a}+\mathbf{b}| ) is always less than or equal to the sum of the two side lengths (and the equality case only occurs when the parallelogram has 0 area, i.e. the vectors are non-negative scalar multiples of each other).

One thing I didn't think about for a while, though, is a concrete proof of this, so here is my attempt:

By definition,  |x| = max(x,-x) , so  |x| \geq x and  |x| \geq -x ; in particular  |a||b| = |ab| \geq ab .
Hence
 a^{2}+b^{2}+2|a||b| \geq a^{2}+b^{2} +2ab 

\Rightarrow (|a|+|b|)^{2} \geq (a+b)^{2} = |a+b|^{2}

\Rightarrow  |a+b| \leq |a|+|b| (taking non-negative square roots of both sides).
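As a quick check of the equality case (my example, not part of the original post):  a=2, b=3 gives  |2+3| = 5 = |2|+|3| , whereas  a=2, b=-3 gives  |2+(-3)| = 1 < 5 = |2|+|-3| ; equality holds precisely when a and b have the same sign (or one of them is 0).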
EnglishMuon (Thread Starter):
    Day 18 Summary
A lovely proof that reminds me of the proper structure proofs should have is that of L'Hôpital's rule. This is one of the first (and few) formal limit-based questions I managed to (mostly) solve myself, so hopefully it makes sense!

    Mean Value Theorem

Suppose that f and g are continuous functions on the interval  [a,b] , differentiable on  (a,b) , and  g'(x) \not= 0 \ \forall x \in (a,b) (note this forces  g(b) \not= g(a) , since otherwise Rolle's theorem would give a point where  g' vanishes).
    Let's choose a constant h so that  F=f+hg satisfies  F(a)=F(b) (which will then allow us to apply Rolle's theorem).
    So
     f(a)+hg(a)=f(b)+hg(b) \Rightarrow h= - \dfrac{f(b)-f(a)}{g(b)-g(a)}
    Then by Rolle's theorem,  \exists \xi \in (a,b) such that  0=F'( \xi)= f'( \xi)+ hg'( \xi) \Rightarrow h= - \dfrac{f'( \xi)}{ g'( \xi)} .

i.e.  \dfrac{f'( \xi)}{ g'( \xi)} = \dfrac{f(b)-f(a)}{g(b)-g(a)} for some  \xi \in (a,b) .

    L'Hopitals Rule

    Now suppose  f(a)=g(a)=0 .
    Then
     \displaystyle\lim_{x\to a+} \dfrac{f(x)}{g(x)} = \displaystyle\lim_{x\to a+} \dfrac{f(x)-0}{g(x)-0} = \displaystyle\lim_{x\to a+} \dfrac{f(x)-f(a)}{g(x)-g(a)}
Applying the result above with  b=x , this equals  \dfrac{f'( \xi)}{g'( \xi)} for some  \xi \in (a,x) ; and  \xi \rightarrow a as  x \rightarrow a+ , hence (provided the limit on the right exists)
     \displaystyle\lim_{x\to a+} \dfrac{f(x)}{g(x)} = \displaystyle\lim_{x\to a+} \dfrac{f'(x)}{g'(x)}

A nice example of where the application may not be obvious is in evaluating  \displaystyle\lim_{x\to + \infty} \left( x- \sqrt{1+x^{2}} \right)
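One route to the answer (my sketch, not the original poster's; it turns the limit into a quotient so the rule applies): substitute  t = 1/x , so  x \to + \infty becomes  t \to 0+ and

 x- \sqrt{1+x^{2}} = \dfrac{1- \sqrt{1+t^{2}}}{t} 

which is a 0/0 form; L'Hôpital's rule then gives

 \displaystyle\lim_{t\to 0+} \dfrac{1- \sqrt{1+t^{2}}}{t} = \displaystyle\lim_{t\to 0+} \left( - \dfrac{t}{ \sqrt{1+t^{2}}} \right) = 0 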
Zacken:
    (Original post by EnglishMuon)
    ...
    Whilst what you've written is true in the first section, it's a consequence of the mean value theorem and not actually the mean value theorem itself, isn't it? (I might be being stupid here).

The way I think of the mean value theorem is that "okay, given an interval there's one point on the curve that 'gets lucky' and has a gradient that's exactly the gradient of the secant line drawn across the interval". So, the mean value theorem states that

    \displaystyle f'(c) = \frac{f(b) - f(a)}{b-a} where f is some continuous function in the interval [a,b], differentiable in (a,b) and a < c < b.

So whilst this certainly does imply that \frac{f'(x)}{g'(x)} = \cdots, it's not actually the theorem itself.
EnglishMuon (Thread Starter):
    (Original post by Zacken)
    Whilst what you've written is true in the first section, it's a consequence of the mean value theorem and not actually the mean value theorem itself, isn't it? (I might be being stupid here).

The way I think of the mean value theorem is that "okay, given an interval there's one point on the curve that 'gets lucky' and has a gradient that's exactly the gradient of the secant line drawn across the interval". So, the mean value theorem states that

    \displaystyle f'(c) = \frac{f(b) - f(a)}{b-a} where f is some continuous function in the interval [a,b], differentiable in (a,b) and a < c < b.

So whilst this certainly does imply that \frac{f'(x)}{g'(x)} = \cdots, it's not actually the theorem itself.
Yep, I just think it's nice to see where the mean value theorem comes from itself. I mean, I could carry on working backwards and derive Rolle's theorem etc. and explain what a limit actually is, but it seems the mean value theorem is effectively a direct step in the proof. I suppose it's just that the mean value theorem has many other uses, so it's labelled separately.
Zacken:
    (Original post by EnglishMuon)
Yep, I just think it's nice to see where the mean value theorem comes from itself. I mean, I could carry on working backwards and derive Rolle's theorem etc. and explain what a limit actually is, but it seems the mean value theorem is effectively a direct step in the proof. I suppose it's just that the mean value theorem has many other uses, so it's labelled separately.
    Yeah, okay, fair enough. I was just a little confused because you used it under the title of "mean value theorem" so just wanted to check. :yep:
EnglishMuon (Thread Starter):
    (Original post by Zacken)
    Yeah, okay, fair enough. I was just a little confused because you used it under the title of "mean value theorem" so just wanted to check. :yep:
Ah no problem, I sometimes merge them together in my head, so it was worth checking.

    You seem to know a decent amount of analysis for someone who doesn't know any analysis.
EnglishMuon (Thread Starter):
    Day 19 Summary
A standard technique, but one which people often seem to forget about, is partial fractions with repeated roots, and some of its applications- I am even guilty of messing this up once or twice myself, so hopefully I won't again after writing this!
Firstly, remember the straightforward rule for repeated roots:
 \dfrac{f(x)}{g(x)(x-a)^{n}} \equiv \dfrac{A}{x-a} + \dfrac{B}{(x-a)^{2}} + ... + \dfrac{K}{(x-a)^{n}} + the partial fractions arising from the factors of g(x).
Also, remember the fraction associated with an irreducible factor  (x^{n}+a) has a numerator of degree one lower, i.e. is of the form  \dfrac{K_{1}+K_{2}x+...+K_{n}x^{n-1}}{x^{n}+a} (in practice usually n = 2, giving  \dfrac{K_{1}+K_{2}x}{x^{2}+a} ).

In the first example, if the degree of f(x) is at least that of  g(x)(x-a)^{n} , use algebraic long division first!

Partial fractions are used extremely often for infinite series. For example, consider the abomination that is
     f(x)= \dfrac{x^{5}-2x^{4}+x^{3}+x+5}{x^{3}-2x^{2}+x-2} . By dividing out and applying the partial fraction techniques above, we see that
     f(x)= x^{2}+ \dfrac{3}{x-2} - \dfrac{x+1}{x^2+1}
We can use this form to our advantage, e.g. to work out the coefficient of  x^{n} in a series expansion.
The working may then start with
 3(x-2)^{-1}= - \frac{3}{2}(1-0.5x)^{-1} so the general term of this factor is...
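For reference, a completion of that first step (my working, not the original post's): for  |x|<2 ,

 - \frac{3}{2}(1-0.5x)^{-1} = - \frac{3}{2} \displaystyle\sum_{n=0}^{ \infty} \left( \frac{x}{2} \right)^{n} 

so this factor contributes  - \dfrac{3}{2^{n+1}} to the coefficient of  x^{n} ; the  - \dfrac{x+1}{x^{2}+1} term can be handled the same way, via a geometric series in  -x^{2} .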
EnglishMuon (Thread Starter):
    Day 20 Summary
Even though more advanced matrix techniques are not on the A-level/STEP syllabus, knowledge of some of them can prove extremely useful!
Here are a few nice facts for finding the determinant of any  n \times n matrix:
Let  \mathbf{A} be a square  n \times n matrix. Let  \mathbf{B} be the matrix formed by swapping any two rows or any two columns of  \mathbf{A} (only 1 pair).
Then  det( \mathbf{B})= -det( \mathbf{A})
Let  \mathbf{C} be formed by multiplying a row by the scalar  \alpha and adding it to another row, or multiplying a column and adding to another column.
Then  det( \mathbf{C}) = det( \mathbf{A})
And also the "sensible yet could save a lot of time" theorem:
If  \mathbf{A} has two equal rows or two equal columns (i.e. the elements are the same), then  det( \mathbf{A})=0

(Please excuse my italicised 'dets', I have broken a finger so I'm reluctant to type any further! )
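A quick numerical illustration of all three facts (my sketch using NumPy; the matrix and operations are arbitrary choices):

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 4.0],
              [0.0, 5.0, 6.0]])

B = A[[1, 0, 2], :]   # swap rows 0 and 1
C = A.copy()
C[2] += 2.5 * C[0]    # add a multiple of row 0 to row 2
D = A.copy()
D[1] = D[0]           # force two equal rows

print(np.linalg.det(B), -np.linalg.det(A))  # equal: a swap negates the determinant
print(np.linalg.det(C), np.linalg.det(A))   # equal: the row operation preserves it
print(np.linalg.det(D))                     # ~0: equal rows give determinant 0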
EnglishMuon (Thread Starter):
    Day 21-23 Summary
After having a little bit of a break for the past few days, I thought I would explore some of the more interesting STEP questions out there! I also believe that reviewing my work, and having other people comment on it with their differing thought processes, helps me improve further!
So I thought I would write up my solution to the beastly question that is Q3, STEP III 1989, and share some other thoughts on it, especially since I have not seen a solution online.

    The matrix  \mathbf{M} is given by
     \mathbf{M} = \begin{pmatrix} \mathrm{cos} \dfrac{2 \pi}{m} & - \mathrm{sin} \dfrac{2 \pi}{m} \\  \mathrm{sin} \dfrac{2 \pi}{m} & \mathrm{cos} \dfrac{2 \pi}{m} \end{pmatrix} . Prove that  \mathbf{M}^{m-1} + \mathbf{M}^{m-2} +...+\mathbf{M}^{2} + \mathbf{M} + \mathbf{I} = \mathbf{O}
A few ideas came to mind when first seeing this question. Firstly, the form above is very reminiscent of a geometric progression; however, it would be illogical to apply the summation formula blindly here, as I have no reason for why it should hold for matrices (especially as I would have to replace 1 with  \mathbf{I} to get the required answer). Also, it's clear  \mathbf{M} is a rotation matrix- this therefore reminded me of the mth roots of unity.
For example, if  z^m=1 then  z= e^{ \frac{2 \pi ki}{m}} . By then writing  z^m-1=0 as a product of factors, we can see the coefficient of  z^{m-1} is (minus) the sum of the roots, and it must equal 0; i.e. letting the first root be  \alpha , we have  \alpha^{m-1} + \alpha^{m-2} + ...+ \alpha + 1 =0 , which is very similar to the expression in the question. The idea that the product of roots of unity can represent a rotation makes me think there is a deeper relevance to complex numbers occurring here.
Anyway, there were two ways I thought worked here. Firstly, writing \mathbf{M}^{m-1} + \mathbf{M}^{m-2} +...+\mathbf{M}^{2} + \mathbf{M} + \mathbf{I} = \begin{pmatrix} \displaystyle \sum_{i=0}^{m-1} \mathrm{cos} \dfrac{2 \pi i}{m} & - \displaystyle \sum_{i=0}^{m-1} \mathrm{sin} \dfrac{2 \pi i}{m} \\ \displaystyle \sum_{i=0}^{m-1} \mathrm{sin} \dfrac{2 \pi i}{m} & \displaystyle \sum_{i=0}^{m-1} \mathrm{cos} \dfrac{2 \pi i}{m} \end{pmatrix}
seems to get the answer almost straight away by arguing that  cos( \pi - \theta) = -cos \theta, cos(2 \pi - \theta) = cos \theta \Rightarrow \displaystyle \sum_{i=0}^{m-1} \mathrm{cos} \dfrac{2 \pi i}{m}=0 and  sin(2 \pi - \theta)= -sin \theta \Rightarrow \displaystyle \sum_{i=0}^{m-1} \mathrm{sin} \dfrac{2 \pi i}{m}=0 as the later terms will cancel with the earlier terms in each summation (hence we get the zero matrix).
I am unsure, though, whether this would be enough working to convince an examiner, even though I'm sure it works.
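One way to tighten the cancellation argument (my suggestion, reusing the roots-of-unity idea from above): with  \alpha = e^{ \frac{2 \pi i}{m}} \not= 1 we have

 \displaystyle \sum_{k=0}^{m-1} \alpha^{k} = \dfrac{ \alpha^{m}-1}{ \alpha-1} = 0 

and taking real and imaginary parts gives  \displaystyle \sum_{k=0}^{m-1} \mathrm{cos} \dfrac{2 \pi k}{m}=0 and  \displaystyle \sum_{k=0}^{m-1} \mathrm{sin} \dfrac{2 \pi k}{m}=0 in one go.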
We could however proceed by the laborious process of induction to prove that  \mathbf{M}^{k} = \begin{pmatrix} \mathrm{cos} \dfrac{2k \pi}{m} & - \mathrm{sin} \dfrac{2k \pi}{m} \\ \mathrm{sin} \dfrac{2k \pi}{m} & \mathrm{cos} \dfrac{2k \pi}{m} \end{pmatrix} (which, again, could be deduced from noting it's a rotation matrix, but I'm not sure that is 'concrete' enough). I think we could then say  \mathbf{M}^{m} - \mathbf{I}= \mathbf{O} \Rightarrow (\mathbf{M}- \mathbf{I})( \mathbf{M}^{m-1} + \mathbf{M}^{m-2} +...+\mathbf{M}^{2} + \mathbf{M} + \mathbf{I}) = \mathbf{O} . As  \mathbf{M}- \mathbf{I} is invertible (a nontrivial rotation does not have 1 as an eigenvalue, so  det( \mathbf{M}- \mathbf{I}) \not= 0 ), the second bracket must equal  \mathbf{O} .
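As a quick numerical check of the identity, say for m = 7 (my sketch, not part of the solution):

import numpy as np

m = 7
t = 2 * np.pi / m
M = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

# I + M + M^2 + ... + M^(m-1); matrix_power(M, 0) gives the identity
S = sum(np.linalg.matrix_power(M, k) for k in range(m))
print(np.round(S, 12))  # the 2x2 zero matrix, up to rounding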

     \mathbf{X}_{k+1} = \mathbf{PX_{k}} + \mathbf{Q} .
    Find and prove an expression for  \mathbf{X}_{k} in terms of  \mathbf{X}_{0}, \mathbf{P}, \mathbf{Q} .

After looking at a couple of terms it's easy to see and prove that  \mathbf{X}_{k} = \mathbf{P}^{k} \mathbf{X}_{0} + \left( \displaystyle\sum_{i=0}^{k-1} \mathbf{P}^{i} \right) \mathbf{Q} - note the sum multiplies  \mathbf{Q} on the left, since e.g.  \mathbf{X}_{2} = \mathbf{P}^{2} \mathbf{X}_{0} + \mathbf{P} \mathbf{Q} + \mathbf{Q} (hopefully that's right! )
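A one-step inductive check of that formula (my verification, not in the original post): assuming it holds for k,

 \mathbf{X}_{k+1} = \mathbf{P} \left( \mathbf{P}^{k} \mathbf{X}_{0} + \left( \displaystyle\sum_{i=0}^{k-1} \mathbf{P}^{i} \right) \mathbf{Q} \right) + \mathbf{Q} = \mathbf{P}^{k+1} \mathbf{X}_{0} + \left( \displaystyle\sum_{i=0}^{k} \mathbf{P}^{i} \right) \mathbf{Q} 

as required.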

    Now for the Group Theory!
NOTE: It turns out I misread the question- I thought we were substituting  \mathbf{X}_{0} for  \mathbf{X}_{i} and not the other way round; however, I still believe it works (and the actual workings would be very similar).
The binary operation * is defined as follows:  \mathbf{X}_{i} * \mathbf{X}_{j} is the result of substituting  \mathbf{X}_{j} for  \mathbf{X}_{0} in  \mathbf{X}_{i} . Show that if  \mathbf{P} = \mathbf{M} , the set  \{ \mathbf{X}_{1}, \mathbf{X}_{2}, ... \} = \chi forms a finite group under *.

It will be useful to write  \mathbf{X}_{i} in terms of  \mathbf{X}_{j} : by using the recurrence relation and 'working backwards' from i to j, we can see  \mathbf{X}_{i} = \mathbf{P}^{i-j} \mathbf{X}_{j} + \left( \displaystyle\sum_{r=0}^{i-j-1} \mathbf{P}^{r} \right) \mathbf{Q} 
    Hence
 \mathbf{X}_{i} * \mathbf{X}_{j} = \left( \mathbf{P}^{i-j} \mathbf{X}_{j} + \left( \displaystyle\sum_{r=0}^{i-j-1} \mathbf{P}^{r} \right) \mathbf{Q} \right) * \mathbf{X}_{j} = \mathbf{P}^{i-j} \mathbf{X}_{0} + \left( \displaystyle\sum_{r=0}^{i-j-1} \mathbf{P}^{r} \right) \mathbf{Q} 

= \mathbf{X}_{i-j} (if  i \geq j ; if not, the resulting element is still  \mathbf{X}_{i} .)
Note how then  \mathbf{X}_{i} * \mathbf{X}_{0} = \mathbf{X}_{i} , so an identity element exists.
Also,  \mathbf{X}_{i} * \mathbf{X}_{i} = \mathbf{X}_{0} , which is our identity element, so each element has an inverse (itself).
 \mathbf{X}_{i} * \mathbf{X}_{j} = \mathbf{X}_{i-j} or  \mathbf{X}_{i} , both of which are in  \chi , so the set is closed under the operation. Since  i-j is always less than or equal to i, the set is finite so long as we have a fixed maximum i to start with.
The only downfall of this misreading is that associativity does not hold:
 \mathbf{X}_{i} * ( \mathbf{X}_{j} * \mathbf{X}_{k})= \mathbf{X}_{i} * \mathbf{X}_{j-k} = \mathbf{X}_{i-j+k} \not= ( \mathbf{X}_{i}* \mathbf{X}_{j}) * \mathbf{X}_{k} .
Repeating similar arguments with  \mathbf{X}_{0} and  \mathbf{X}_{i} the other way round should show  \chi is a finite group under *!
Zacken - good luck reading this in one go! Any advice is appreciated!
Zacken:
    (Original post by EnglishMuon)
    ...
    Currently doing this question and I've just finished the first "prove that" bit, so I thought I'd come here and read your bit on it. I had something slightly similar:

    1. It's obviously a rotation so let's see if I can work with that. Hmmm, well, rotating m-1 anticlockwise is the same as rotating m clockwise (you know what I mean), could I pair these terms up pairwise? Is there some condition on m being even/odd only? *check* nope, damn. I'll have to split casewise... this is getting too long, let me give up this approach.

    2. Ah! They want induction, obvioooously. *starts writing out induction* lol nopes this isn't gonna work.

    3. Okay, so... what series starts out with a number and then keeps that number constant whilst increasing the power. Is this the exponential series? Naaah, you cray m8, no factorials. AHHH!!!! It's a geometric series you retard.

    Okay, so: \displaystyle

\begin{equation*} \mathbf{I} + \mathbf{M} + \mathbf{M}^2 + \cdots + \mathbf{M}^{m-1} = \left(\mathbf{I} - \mathbf{M}^m\right)(\mathbf{I}-\mathbf{M})^{-1} \end{equation*}.

Since \mathbf{M}^m = \mathbf{I} (m rotations by \frac{2\pi}{m} make a full turn), \mathbf{I} - \mathbf{M}^m = \mathbf{O}; and since \mathbf{M} \neq \mathbf{I}, the inverse (\mathbf{I}-\mathbf{M})^{-1} exists, so the sum is \mathbf{O} as required.
EnglishMuon (Thread Starter):
    (Original post by Zacken)
    Currently doing this question and I've just finished the first "prove that" bit, so I thought I'd come here and read your bit on it. I had something slightly similar:

    1. It's obviously a rotation so let's see if I can work with that. Hmmm, well, rotating m-1 anticlockwise is the same as rotating m clockwise (you know what I mean), could I pair these terms up pairwise? Is there some condition on m being even/odd only? *check* nope, damn. I'll have to split casewise... this is getting too long, let me give up this approach.
    EDIT: It works now
    2. Ah! They want induction, obvioooously. *starts writing out induction* lol nopes this isn't gonna work.

    3. Okay, so... what series starts out with a number and then keeps that number constant whilst increasing the power. Is this the exponential series? Naaah, you cray m8, no factorials. AHHH!!!! It's a geometric series you retard.

    Okay, so: \displaystyle

\begin{equation*} \mathbf{I} + \mathbf{M} + \mathbf{M}^2 + \cdots + \mathbf{M}^{m-1} = \left(\mathbf{I} - \mathbf{M}^m\right)(\mathbf{I}-\mathbf{M})^{-1} \end{equation*}.

Since \mathbf{M}^m = \mathbf{I} (m rotations by \frac{2\pi}{m} make a full turn), \mathbf{I} - \mathbf{M}^m = \mathbf{O}; and since \mathbf{M} \neq \mathbf{I}, the inverse (\mathbf{I}-\mathbf{M})^{-1} exists, so the sum is \mathbf{O} as required.
XD yea I did change my mind a fair few times when doing this! Apparently your latex is 'dangerous' (which is understandable when looking at the question that it's based upon ). So did you just apply the geometric series straight up, or argue that since it's a rotation matrix you can apply it, first?
Zacken:
    (Original post by EnglishMuon)
    XD yea I did change my mind a fair few times when doing this! Apparently your latex is 'dangerous' (which is understandable when looking at the question that it's based upon . So did you just apply geometric series straight up or argue that since its a rotation matrix you can apply it, first?
It doesn't need to be a rotation matrix for it to be a geometric series!
EnglishMuon (Thread Starter):
    (Original post by Zacken)
It doesn't need to be a rotation matrix for it to be a geometric series!
Hmm yep ok, that makes sense! I thought it did have to be a type of matrix like the rotation matrix, where successive powers produce 'linear effects', but I get your point that if we apply a slight variation on the normal geometric series proof, we can get the same result independent of the type of matrix we are talking about. I guess you were thinking along the lines of   \mathbf{S}_{n}= \mathbf{I} + \mathbf{M} + \mathbf{M}^2 + \cdots + \mathbf{M}^{m-1}

\Rightarrow \mathbf{S}_{n}- \mathbf{M} \mathbf{S}_{n} = \mathbf{I} + \mathbf{M} + \mathbf{M}^2 + \cdots + \mathbf{M}^{m-1} - (\mathbf{M} + \mathbf{M}^2 + \cdots + \mathbf{M}^{m}) =  \mathbf{I} - \mathbf{M}^{m} 

\Rightarrow \mathbf{S}_{n} = ( \mathbf{I} - \mathbf{M})^{-1}( \mathbf{I} - \mathbf{M}^{m}) .
Also, just to check: in your earlier post you said  \mathbf{S}_{n} = ( \mathbf{I} - \mathbf{M}^{m})( \mathbf{I} - \mathbf{M})^{-1} . I'm probably wrong, but does that form only hold if  \mathbf{M} \mathbf{S}_{n} = \mathbf{S}_{n} \mathbf{M} ? As in the proof above, we have  \mathbf{S}_{n}- \mathbf{M} \mathbf{S}_{n}=... , not  \mathbf{S}_{n}- \mathbf{S}_{n} \mathbf{M}=... , so the factorisation is different?
Zacken:
    (Original post by EnglishMuon)
Hmm yep ok, that makes sense! I thought it did have to be a type of matrix like the rotation matrix, where successive powers produce 'linear effects', but I get your point that if we apply a slight variation on the normal geometric series proof, we can get the same result independent of the type of matrix we are talking about. I guess you were thinking along the lines of   \mathbf{S}_{n}= \mathbf{I} + \mathbf{M} + \mathbf{M}^2 + \cdots + \mathbf{M}^{m-1}

\Rightarrow \mathbf{S}_{n}- \mathbf{M} \mathbf{S}_{n} = \mathbf{I} + \mathbf{M} + \mathbf{M}^2 + \cdots + \mathbf{M}^{m-1} - (\mathbf{M} + \mathbf{M}^2 + \cdots + \mathbf{M}^{m}) =  \mathbf{I} - \mathbf{M}^{m} 

\Rightarrow \mathbf{S}_{n} = ( \mathbf{I} - \mathbf{M})^{-1}( \mathbf{I} - \mathbf{M}^{m}) .
Also, just to check: in your earlier post you said  \mathbf{S}_{n} = ( \mathbf{I} - \mathbf{M}^{m})( \mathbf{I} - \mathbf{M})^{-1} . I'm probably wrong, but does that form only hold if  \mathbf{M} \mathbf{S}_{n} = \mathbf{S}_{n} \mathbf{M} ? As in the proof above, we have  \mathbf{S}_{n}- \mathbf{M} \mathbf{S}_{n}=... , not  \mathbf{S}_{n}- \mathbf{S}_{n} \mathbf{M}=... , so the factorisation is different?
    I meant to do your form but I flipped it around because I was rushing out and was typing that as I headed out of the door. Yours is the correct one. Although, it doesn't really matter in this problem because one of the matrices is the zero matrix which is trivially commutative with any other matrix.

    I haven't done the rest of the question 'cause I had to go out, but I'm looking forward to doing the next part in like an hour or so.

    You been doing any other old interesting STEP q's?
EnglishMuon (Thread Starter):
    (Original post by Zacken)
    I meant to do your form but I flipped it around because I was rushing out and was typing that as I headed out of the door. Yours is the correct one. Although, it doesn't really matter in this problem because one of the matrices is the zero matrix which is trivially commutative with any other matrix.

    I haven't done the rest of the question 'cause I had to go out, but I'm looking forward to doing the next part in like an hour or so.

    You been doing any other old interesting STEP q's?
Oh yep, thanks for the help! I've had a look at some of the other questions on that paper too- question 5 is quite nice, although the ideas behind it seem fairly straightforward.
EnglishMuon (Thread Starter):
    Day 24 Summary
After completing STEP II 2007 today, it turns out I could have narrowly missed out on an S solely through stupid mistakes! (Question specifics may be shown below.)
    Consider the question
    By considering the derivatives of  ln(x+ \sqrt{3+x^2} ) and  x \sqrt{3+x^2} , find  \displaystyle\int \sqrt{3+x^2} \ dx
It's straightforward to show the log differentiates to  \dfrac{x+ \sqrt{3+x^2}}{3+x^{2}+x \sqrt{3+x^{2}}} and the other expression differentiates to  \dfrac{3+2x^{2}}{ \sqrt{3+x^{2}}} .
I immediately knew that I could rewrite  \displaystyle\int \sqrt{3+x^2} \ dx as  \displaystyle\int \dfrac{3+2x^{2}}{ \sqrt{3+x^{2}}} - \dfrac{x^2}{ \sqrt{3+x^2}} \ dx - the first term of which can be easily integrated using the second derivative above, but the second was not so obvious. That led me to think "maybe my first derivative fraction can simplify to give me the one in the integral?". For some reason, I missed the obvious fact that  \dfrac{x+ \sqrt{3+x^2}}{3+x^{2}+x \sqrt{3+x^{2}}} = \dfrac{x+ \sqrt{3+x^2}}{(x+ \sqrt{3+x^2})( \sqrt{3+x^2})} = \dfrac{1}{ \sqrt{3+x^2}} even though I knew this was exactly what I wanted to get! It seems like such a dumb mistake, and it could have cost me many marks, as I ended up using a substitution instead to find the integral, which isn't what they asked for. Effectively, if I had followed through with my intuition I would definitely have an S in this paper, so I will stick with gut feeling in future!!!
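For completeness, here is how the two given derivatives combine (my working, not in the original post): since

 \dfrac{d}{dx} \left( x \sqrt{3+x^2} \right) = \dfrac{3+2x^{2}}{ \sqrt{3+x^{2}}} , \quad \dfrac{d}{dx} \ln (x+ \sqrt{3+x^2}) = \dfrac{1}{ \sqrt{3+x^2}} 

and  \sqrt{3+x^2} = \dfrac{1}{2} \cdot \dfrac{3+2x^{2}}{ \sqrt{3+x^{2}}} + \dfrac{3}{2} \cdot \dfrac{1}{ \sqrt{3+x^2}} , we get

 \displaystyle\int \sqrt{3+x^2} \ dx = \dfrac{1}{2} x \sqrt{3+x^2} + \dfrac{3}{2} \ln (x+ \sqrt{3+x^2}) + c 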
Zacken:
    (Original post by EnglishMuon)
    Now for the Group Theory!
Okay, I've gotten that the identity element is \mathbf{X}_0. I'm unclear as to how you got each element to be its own inverse, though; why exactly is \mathbf{X}_i \star \mathbf{X}_i = \mathbf{X}_0?
 
 
 