The purpose of this thread is to put some of the more useful tips and tricks people have found doing STEP / AEA questions (or elsewhere) into one place.

If enough gets posted to make it worthwhile, I'll try to make this post into some kind of "index", similarly to how others have done the solutions threads.

Trig: If you only learn one identity for STEP, make it this one:

$2 \cos \theta (\cos \theta + i \sin \theta) = 1 + \cos 2\theta + i \sin 2\theta$

In particular, note that taking the modulus gives $2 \cos \theta = \sqrt{(1+\cos 2\theta)^2+ \sin^2 2\theta}$ (for $\cos\theta \ge 0$; in general the left-hand side is $2|\cos\theta|$).

Equivalently, $2 \cos \frac{\theta}{2} = \sqrt{(1+\cos \theta)^2+ \sin^2 \theta}$ for $-\pi \le \theta \le \pi$, which comes up a lot when calculating the distance between two points on the circumference of a circle.
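Not part of the original post, but if you want to convince yourself, here's a quick numerical sanity check of both forms in Python (the test values of $\theta$ are arbitrary, chosen so that $\cos\frac{\theta}{2} \ge 0$):

```python
import math

# Check 2cos(t)(cos t + i sin t) = 1 + cos 2t + i sin 2t,
# and the modulus form 2cos(t/2) = sqrt((1+cos t)^2 + sin^2 t).
for t in [0.3, 1.1, 2.5, -0.7]:
    lhs = 2 * math.cos(t) * complex(math.cos(t), math.sin(t))
    rhs = complex(1 + math.cos(2 * t), math.sin(2 * t))
    assert abs(lhs - rhs) < 1e-12

    # modulus form (valid here since cos(t/2) >= 0 for t in [-pi, pi])
    mod = math.sqrt((1 + math.cos(t)) ** 2 + math.sin(t) ** 2)
    assert abs(mod - 2 * math.cos(t / 2)) < 1e-12
```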

Great idea for a thread David.

This trick is particularly useful in summing trigonometric series or any other situations which involve extracting real or imaginary parts out of a complex expression.

$\displaystyle \Re\left[(1+e^{2i\theta})^m\right] = \Re\left[(e^{i\theta})^{m}(e^{-i\theta}+e^{i\theta})^m\right] = \Re\left[e^{mi\theta}\,2^{m}\cos^m\theta\right] = 2^{m}\cos^m\theta\,\cos m\theta$

For the case of $\Re{(e^{i\theta}+e^{2i\theta}+\dots+e^{ni\theta})}$ you can use a similar factorisation, which simplifies the algebra considerably.

$\displaystyle \Re\frac{e^{ni\theta}-1}{e^{i\theta}-1} = \Re\frac{e^{ni\theta/2}\left(e^{ni\theta/2}-e^{-ni\theta/2}\right)}{e^{i\theta/2}\left(e^{i\theta/2}-e^{-i\theta/2}\right)} = \Re\left[e^{\frac{(n-1)i\theta}{2}}\cdot\frac{2i\sin\frac{n\theta}{2}}{2i\sin\frac{\theta}{2}}\right] = \frac{\cos\left(\frac{n-1}{2}\theta\right)\sin\left(\frac{n\theta}{2}\right)}{\sin\left(\frac{\theta}{2}\right)}$

This makes the useful summation $\displaystyle \sum_{k=1}^{n}{\cos{k\theta}}$ easy to memorise, or to derive on the spot if you need it (the derivation above gives $\sum_{k=0}^{n-1}\cos k\theta$; adjust the limits as required). This trick is used quite often in the solutions provided by Siklos.
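Again, not from the post itself, but both results above are easy to spot-check numerically (the values of $\theta$, $m$, $n$ below are arbitrary):

```python
import cmath
import math

theta, m, n = 0.7, 5, 8

# Re[(1+e^{2i*theta})^m] = 2^m cos^m(theta) cos(m*theta)
lhs = ((1 + cmath.exp(2j * theta)) ** m).real
rhs = 2 ** m * math.cos(theta) ** m * math.cos(m * theta)
assert abs(lhs - rhs) < 1e-9

# Re[(e^{ni*theta}-1)/(e^{i*theta}-1)] = cos((n-1)theta/2) sin(n*theta/2) / sin(theta/2),
# which is sum_{k=0}^{n-1} cos(k*theta)
geo = sum(cmath.exp(1j * k * theta) for k in range(n))
closed = math.cos((n - 1) * theta / 2) * math.sin(n * theta / 2) / math.sin(theta / 2)
assert abs(geo.real - closed) < 1e-9
```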

I am not sure whether these are what David is looking for, but they may be of some help, so here are the notes I have taken from the mark schemes and my own practice:

General Advice

- The examination requires intensity and persistence.

- Questions require a systematic approach.

- Checking will improve the work of many candidates.

- The fluent, confident and correct handling of mathematical symbols is necessary and expected.

- Set out a well-structured answer.

- Sometimes a fresh start to a question is needed.

- Sound technique is necessary, and checking required.

- Working to be legible.

- Aim for thoughtful and well set-out work.

- Arithmetic and algebraic accuracy would most improve marks.

- It is not a good idea to plunge into the algebra without thinking about alternative methods.

Specific Advice

- Using abbreviations can save a great deal of writing.

- The parts of a question are often linked together, but sometimes with slight modifications.

- To show a statement is true, give a formal proof; to show one is false, give a (if possible, simple) counterexample.

- It doesn't matter if you start from the given answer and work backwards - it is still a mathematical proof and any proof will get the marks.

- A geometric understanding of modulus questions can help when examining the different cases.

Something different that I've seen on the old CCE, though I'm not sure whether it has come up in STEP:

$\displaystyle \sum_{k=0}^r \binom{n}{k} \binom{m}{r-k} = \binom{n+m}{r}$

Proof: consider the coefficient of $x^r$ in the identity $(1+x)^n(1+x)^m = (1+x)^{n+m}$

The most famous case is when n=m=r: $\displaystyle \sum_{k=0}^n \binom{n}{k}^2 = \binom{2n}{n}$
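A quick numerical check of both the identity and the special case, if anyone wants one (my own addition; the values of $n$, $m$, $r$ are arbitrary — note `math.comb(n, k)` conveniently returns 0 when $k > n$):

```python
from math import comb

# Vandermonde-style identity: sum_{k=0}^{r} C(n,k) C(m,r-k) == C(n+m, r)
n, m, r = 7, 5, 6
assert sum(comb(n, k) * comb(m, r - k) for k in range(r + 1)) == comb(n + m, r)

# Special case n = m = r: sum of C(n,k)^2 == C(2n, n)
assert sum(comb(n, k) ** 2 for k in range(n + 1)) == comb(2 * n, n)
```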

Quick and dirty introduction to solving simple recurrence relations:

To solve a recurrence relation of the form $au_{n+2}+bu_{n+1}+cu_n = 0$, assume a solution of form $u_n = \lambda^n$ and deduce $a\lambda^2+b\lambda+c = 0 \quad(\S)$.

You end up with a general solution $u_n = A \lambda_1^n + B \lambda_2^n$ where $\lambda_1, \lambda_2$ are the roots of the quadratic $(\S)$. (If the roots are repeated, the general solution takes the form $u_n = (A+Bn)\lambda_1^n$).

To solve $au_{n+2}+bu_{n+1}+cu_n = f(n)$, first look for a particular solution, typically of the form $u_n = Cf(n)$, and then make it general by adding solutions to $au_{n+2}+bu_{n+1}+cu_n = 0$. (All analogous to what you'd do for a linear diff equation).
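The recipe above is easy to check against direct iteration; here's a sketch (the coefficients and initial values below are made up for illustration):

```python
# u_{n+2} - 5u_{n+1} + 6u_n = 0 has auxiliary equation x^2 - 5x + 6 = (x-2)(x-3),
# so the general solution is u_n = A*2^n + B*3^n.
# With u_0 = 0, u_1 = 1:  A + B = 0 and 2A + 3B = 1, giving A = -1, B = 1.
def closed(n):
    return -(2 ** n) + 3 ** n

# Compare the closed form against direct iteration of the recurrence.
u = [0, 1]
for _ in range(20):
    u.append(5 * u[-1] - 6 * u[-2])

assert all(u[n] == closed(n) for n in range(len(u)))
```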

DFranklin

Quick and dirty introduction to solving simple recurrence relations:

To solve a recurrence relation of the form $au_{n+2}+bu_{n+1}+cu_n = 0$, assume a solution of form $u_n = \lambda^n$ and deduce $a\lambda^2+b\lambda+c = 0 \quad(\S)$.

You end up with a general solution $u_n = A \lambda_1^n + B \lambda_2^n$ where $\lambda_1, \lambda_2$ are the roots of the quadratic $(\S)$. (If the roots are repeated, the general solution takes the form $u_n = (A+Bn)\lambda_1^n$).

To solve $au_{n+2}+bu_{n+1}+cu_n = f(n)$, first look for a particular solution, typically of the form $u_n = Cf(n)$, and then make it general by adding solutions to $au_{n+2}+bu_{n+1}+cu_n = 0$. (All analogous to what you'd do for a linear diff equation).

Interesting, I did mean to look this up some time. Anyone know where I can find any practice questions on this? Also, can we legitimately assume there's a solution in the form u_n = (lambda)^n in STEP?

Statistics: Indicator functions and Expectation

All an indicator function is is a function that is 1 when an event happens and 0 when it doesn't. What's neat about indicator functions is that you can often use them to break a more complicated function down into something almost trivial.

E.g. Suppose $X \sim B(n, p)$ and we want to prove the formulas for the mean and variance of X.

Instead of doing calculations based on finding $\sum_0^n k\binom{n}{k} p^k(1-p)^{n-k}$ etc, define the indicator function $I_k$ to be 1 if the k'th trial is a success, 0 otherwise.

Then $E(I_k) = p, Var(I_k)=E(I_k^2)-E(I_k)^2 = p-p^2=p(1-p)$.

But $X = \sum_1^n I_k$, so by linearity of expectation $E(X) = \sum_1^n E(I_k) = np$. (Note that we can get this far without requiring the $I_k$ to be independent, which is often very useful.)

Since the $I_k$ are independent, we also have $Var(\sum I_k) = \sum Var(I_k)$, so $Var(X) = \sum_1^n Var(I_k) = np(1-p)$.
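The indicator decomposition is easy to see in simulation; a quick Monte Carlo sketch (my addition — the parameters and sample size are arbitrary, and the tolerances are generous for the fixed seed):

```python
import random
random.seed(0)

n, p, trials = 10, 0.3, 100_000

# X = sum of n independent Bernoulli(p) indicators.
samples = [sum(random.random() < p for _ in range(n)) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((x - mean) ** 2 for x in samples) / trials

assert abs(mean - n * p) < 0.05           # E(X) = np = 3
assert abs(var - n * p * (1 - p)) < 0.05  # Var(X) = np(1-p) = 2.1
```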

generalebriety

Interesting, I did mean to look this up some time. Anyone know where I can find any practice questions on this?

Also, can we legitimately assume there's a solution in the form u_n = (lambda)^n in STEP?

The more difficult problem is proving that not only have you found a valid solution, but that your solution is in fact the only possible one. Under exam conditions, I wouldn't bother about this (unless you are in the situation where you can't find anything better to do or the question seems particularly strict in its language).

But if you do have to prove it, suppose you have two solutions $u_n, v_n$ with $u_0 = v_0, u_1 = v_1$. Put $w = u - v$; then $w$ also satisfies the difference equation, but $w_0 = w_1 = 0$. But then $w_2 = -(bw_1+cw_0)/a = 0$, and then $w_3 = -(bw_2+cw_1)/a = 0$, etc. (use induction if you want to be really formal).
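The uniqueness argument is literally just iteration; a tiny sketch (coefficients arbitrary, $a \ne 0$):

```python
# If w satisfies a*w_{n+2} + b*w_{n+1} + c*w_n = 0 with w_0 = w_1 = 0,
# then w_{n+2} = -(b*w_{n+1} + c*w_n)/a forces every later term to be 0 too.
a, b, c = 1.0, -5.0, 6.0  # arbitrary coefficients with a != 0

w = [0.0, 0.0]
for _ in range(20):
    w.append(-(b * w[-1] + c * w[-2]) / a)

assert all(t == 0.0 for t in w)
```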

DFranklin

I suppose my question is more along the lines of: why does there always exist such a solution? Sorry, I don't mean to hijack this thread or anything, it's just something I've wondered for a while. What if there isn't such a solution? Then you'd still get an answer out of it which would satisfy the first couple of terms in the recurrence relation but would inevitably fail elsewhere.

generalebriety

I suppose my question is more along the lines of: why does there always exist such a solution? Sorry, I don't mean to hijack this thread or anything, it's just something I've wondered for a while. What if there isn't such a solution? Then you'd still get an answer out of it which would satisfy the first couple of terms in the recurrence relation but would inevitably fail elsewhere.

It does get a lot more dodgy when you have a repeated root and want to claim there's a solution of the form $(A+Bn)\lambda^n$. This is tricky to justify (more tricky than I think is worth discussing in detail here - I can't see it being expected for STEP!). But you can do it by basically "factoring" the difference equation into linear terms and solving each term in sequence, with the solution to each term becoming the inhomogeneous part on the right-hand side of the next.

Edit: decided I might as well do an example for the repeated root case; I did get asked it at interview (for a differential equation, not a difference equation), so it is the kind of question that could happen (though I'm certain they'd give some hints. I certainly got stuck on it!).

Suppose we want the general solution of $u_{n+2} -4u_{n+1}+4u_n = 0$. Factor the aux eqn as $x^2-4x+4 = (x-2)(x-2)$. Consider $v_n = u_{n+1}-2u_n$ (corresponding to one factor of $(x-2)$). Then $v_{n+1}-2v_n = 0$, from which we deduce $v_n = A 2^n$. Then $u_{n+1}-2u_n = A 2^n$ and we deduce $u_n = B\,2^n + \frac{A}{2}n2^n$ (where there's still a certain amount of "pulling a rabbit out of a hat" in guessing the form of the solution, but it is much easier to justify in this case, where $u_{n+1}$ depends only on $u_n$ and $n$).
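For what it's worth, the repeated-root solution is easy to verify mechanically (the constants below are arbitrary):

```python
# General solution of u_{n+2} - 4u_{n+1} + 4u_n = 0 (repeated root x = 2)
# is u_n = (B + (A/2) n) * 2^n; check that it satisfies the recurrence.
A, B = 3.0, -1.5  # arbitrary constants

def u(n):
    return (B + (A / 2) * n) * 2 ** n

for n in range(15):
    assert abs(u(n + 2) - 4 * u(n + 1) + 4 * u(n)) < 1e-6
```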

Ah, don't see why I didn't think of it like that. Alright, thanks. I'll stop hijacking your thread now, and maybe come back with some advice of my own later.

Proof of Cauchy-Schwarz Inequality

There are various forms of this, I'll just do one:

Prove that $\left(\int f(x)g(x)dx\right)^2 \le \int f^2(x) dx \int g^2(x) dx$.

Consider $I(\lambda) = \int (f(x)+\lambda g(x))^2 dx$. Then $I(\lambda) \ge 0$ for all $\lambda$.

Expand out and we get

$I(\lambda) = \lambda^2 \underbrace{\int g^2(x) dx}_A + 2\lambda \underbrace{\int f(x)g(x) dx}_B + \underbrace{\int f^2(x) dx}_C$

$I(\lambda) = A\lambda^2 + 2B \lambda + C$. But then since $I(\lambda)\ge 0$ for all $\lambda$, we must have $(2B)^2 \le 4AC$, so $B^2 \le AC$. Hence result.

For interest, there's equality iff $I(\lambda) = 0$ for some value of $\lambda$. In other words, g is just a multiple of f (or f = 0 and g is anything you like).

Exactly the same proof shows $\left(\sum a_n b_n\right)^2 \le \sum a_n^2 \sum b_n^2$ etc.
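A numerical illustration of the discrete form, including the equality case (my addition; the random vectors and the multiple 3 are arbitrary):

```python
import random
random.seed(1)

# Cauchy-Schwarz: (sum a_k b_k)^2 <= (sum a_k^2)(sum b_k^2)
a = [random.uniform(-1, 1) for _ in range(50)]
b = [random.uniform(-1, 1) for _ in range(50)]

lhs = sum(x * y for x, y in zip(a, b)) ** 2
rhs = sum(x * x for x in a) * sum(y * y for y in b)
assert lhs <= rhs

# Equality when one sequence is a multiple of the other:
b2 = [3 * x for x in a]
lhs2 = sum(x * y for x, y in zip(a, b2)) ** 2
rhs2 = sum(x * x for x in a) * sum(y * y for y in b2)
assert abs(lhs2 - rhs2) < 1e-9 * rhs2
```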

DFranklin

Something different, that I've seen on the old CCE but not sure if I have in STEP:

$\displaystyle \sum_{k=0}^r \binom{n}{k} \binom{m}{r-k} = \binom{n+m}{r}$

Proof: consider the coefficient of $x^r$ in the identity $(1+x)^n(1+x)^m = (1+x)^{n+m}$

The most famous case is when n=m=r: $\displaystyle \sum_{k=0}^n \binom{n}{k}^2 = \binom{2n}{n}$

Quite a nice way of thinking about these questions is by noting that the binomial coefficient "n choose k" is exactly that - the number of ways of choosing k objects from n objects.

So we can prove the first identity above by noting that the RHS is the number of ways of picking r players from 2 teams (one of size m, the other of size n).

But we can do this by picking 0 players from the n-team, and r from the m-team,

or by picking 1 player from the n-team and the other r-1 from the m-team,

...

or by picking r players from the n team and none from the m-team.

This is exactly the LHS. In fact, this kind of thinking can be helpful in other questions, particularly in the probability section.

zooop

I am sitting the AEA this time round, and I don't seem to have studied half the things mentioned above! It makes me a little worried. Will I need to know half of what has been said? Thanks
