The Student Room Group

STEP Prep Thread 2020

Original post by casperyc
@Drogo Baggins Just to add another obvious typo,

Capture.PNG


This one's pretty funny 🤣
Can anyone tell me what this actually means? (Screenshot from spec attached)

I remember calculating the mean and variance of a related variable, knowing those of X, for discrete and continuous distributions, but I don't ever recall using a cdf to work out the pdf of a related variable. How is this even done?
Original post by Spookyayu
Can anyone tell me what this actually means? (Screenshot from spec attached)

I remember calculating the mean and variance of a related variable, knowing those of X, for discrete and continuous distributions, but I don't ever recall using a cdf to work out the pdf of a related variable. How is this even done?


A cumulative distribution function is something like F_X(x) = P(X ≤ x). If you know this, what can you say about, for example, G(y) = P(X² ≤ y)?
Original post by Spookyayu
Can anyone tell me what this actually means? (Screenshot from spec attached)

I remember calculating the mean and variance of a related variable, knowing those of X, for discrete and continuous distributions, but I don't ever recall using a cdf to work out the pdf of a related variable. How is this even done?


Suppose X is a random variable with probability density function f(x) and cumulative distribution function F(x), such that dF(x)/dx = f(x).

Introduce a random variable Y = γ(X), where γ is some nice function with the properties required to justify what I am about to write (inverse exists, monotonicity, etc.); it relates the random variable Y to X through the function γ. The cumulative distribution function of Y is

G(y) = P[Y ≤ y] = P[γ(X) ≤ y] = P[X ≤ γ⁻¹(y)] = F[γ⁻¹(y)]

so we have related G to F, and to obtain the density function you just differentiate with respect to y:

g(y) = (d[γ⁻¹(y)]/dy) · f[γ⁻¹(y)]
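The formula above can be sanity-checked numerically. The example below is my own, not from the thread: X uniform on (0, 1) with γ(x) = x², so γ⁻¹(y) = √y and g(y) = 1/(2√y) on (0, 1).

```python
# Sanity check of g(y) = (d[γ⁻¹(y)]/dy) · f[γ⁻¹(y)] for a hypothetical example:
# X ~ Uniform(0, 1) and γ(x) = x², so γ⁻¹(y) = √y and g(y) = 1/(2√y) on (0, 1).
import math

def g(y):
    # d(√y)/dy · f(√y) = 1/(2√y) · 1
    return 1.0 / (2.0 * math.sqrt(y))

# The transformed density should still integrate to 1 (midpoint rule).
n = 100_000
total = sum(g((k + 0.5) / n) / n for k in range(n))
print(total)  # close to 1
```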
Original post by RDKGames
Suppose X is a random variable with probability density function f(x) and cumulative distribution function F(x), such that dF(x)/dx = f(x).

Introduce a random variable Y = γ(X), where γ is some nice function with the properties required to justify what I am about to write (inverse exists, monotonicity, etc.); it relates the random variable Y to X through the function γ. The cumulative distribution function of Y is

G(y) = P[Y ≤ y] = P[γ(X) ≤ y] = P[X ≤ γ⁻¹(y)] = F[γ⁻¹(y)]

so we have related G to F, and to obtain the density function you just differentiate with respect to y:

g(y) = (d[γ⁻¹(y)]/dy) · f[γ⁻¹(y)]


Correct, but be aware that some of the simple examples that might be asked do not meet your definition of a "nice" function!
Original post by Gregorius
Correct, but be aware that some of the simple examples that might be asked do not meet your definition of a "nice" function!


Yes, I'd imagine the asker can see that for a complicated function they would most likely need to consider the different cases that arise, then apply this theory to each one.
Original post by deathbySTEP
This one's pretty funny 🤣

Why is it funny?
Original post by casperyc
Why is it funny?


Of all the possible maths typos ...

1=0

xD
Original post by deathbySTEP
Of all the possible maths typos ...

1=0

xD

LOL - maybe Dr Siklos would refuse to admit it's a typo
Original post by RDKGames
Suppose X is a random variable with probability density function f(x) and cumulative distribution function F(x), such that dF(x)/dx = f(x).

Introduce a random variable Y = γ(X), where γ is some nice function with the properties required to justify what I am about to write (inverse exists, monotonicity, etc.); it relates the random variable Y to X through the function γ. The cumulative distribution function of Y is

G(y) = P[Y ≤ y] = P[γ(X) ≤ y] = P[X ≤ γ⁻¹(y)] = F[γ⁻¹(y)]

so we have related G to F, and to obtain the density function you just differentiate with respect to y:

g(y) = (d[γ⁻¹(y)]/dy) · f[γ⁻¹(y)]

I'm trying this with a given pdf for X and trying to work out the pdf for X². In the end I'm left with a function in terms of √y. At this point, do I just substitute in x = √y ?

Picture attached
Original post by Spookyayu
I'm trying this with a given pdf for X and trying to work out the pdf for X². In the end I'm left with a function in terms of √y. At this point, do I just substitute in x = √y ?

Picture attached

Yep :smile: y is just a dummy variable
Maybe I chose an unfortunate example, because the pdf in the case I chose ends up being sin x/(4x), which can't be integrated back in terms of x without using the Si(x) function, which I'm not familiar with and shouldn't need to know for STEP. So I am unable to determine whether the area under this function is still 1 when evaluated between 0 and π, or whether there are new bounds it needs to be evaluated over such that the area under the described pdf is still 1.

(Using a computer approximation, it outputs 0.463 as the area between 0 and pi - have I done something wrong in carrying out the process?)
Original post by Spookyayu
Maybe I chose an unfortunate example, because the pdf in the case I chose ends up being sin x/(4x), which can't be integrated back in terms of x without using the Si(x) function, which I'm not familiar with and shouldn't need to know for STEP. So I am unable to determine whether the area under this function is still 1 when evaluated between 0 and π, or whether there are new bounds it needs to be evaluated over such that the area under the described pdf is still 1.

(Using a computer approximation, it outputs 0.463 as the area between 0 and pi - have I done something wrong in carrying out the process?)


You should integrate your pdf result with sqrt(y) in it between 0 and pi^2. See if you can spot your error.
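If the original pdf was f(x) = (1/2)·sin x on (0, π), which is my guess from the sin x/(4x) expression above, then Y = X² has g(y) = sin(√y)/(4√y), and a quick numerical check confirms it integrates to 1 over (0, π²):

```python
# Numerical check that g(y) = sin(√y)/(4√y) integrates to 1 on (0, π²).
# This assumes the original pdf was f(x) = (1/2)·sin(x) on (0, π), an
# inference from the sin(x)/(4x) expression in the thread, not a given.
import math

def g(y):
    return math.sin(math.sqrt(y)) / (4.0 * math.sqrt(y))

n = 100_000
width = math.pi ** 2 / n           # midpoint rule over (0, π²)
total = sum(g((k + 0.5) * width) * width for k in range(n))
print(total)  # ≈ 1
```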
Original post by casperyc
@Drogo Baggins Just to add another obvious typo,

Capture.PNG


Thanks! What I don't understand is that 1 and 0 are not that close on a keyboard?
Original post by RDKGames
You should integrate your pdf result with sqrt(y) in it between 0 and pi^2. See if you can spot your error.

Seems like I get 1 with those bounds:
IMG_20200129_144632.jpg
Maybe this means that the bounds of Y are those of X squared (would make sense I guess).

I feel like I've made some sort of mistake in neglecting the fact that the square root could also be negative, so that P(X²≤y) is the same as P(-√y≤X≤√y), but I don't know to what extent this has affected my answer (if it has made any difference at all)
Original post by Spookyayu
Can anyone tell me what this actually means? (Screenshot from spec attached)

I remember calculating the mean and variance of a related variable, knowing those of X, for discrete and continuous distributions, but I don't ever recall using a cdf to work out the pdf of a related variable. How is this even done?


There is an example in the topic notes for the STEP 3 stats and probability module. These can be found here: https://maths.org/step/step-3-statistics-updated
Original post by Spookyayu
Seems like I get 1 with those bounds:

Maybe this means that the bounds of Y are those of X squared (would make sense I guess).

I feel like I've made some sort of mistake in neglecting the fact that the square root could also be negative, so that P(X²≤y) is the same as P(-√y≤X≤√y), but I don't know to what extent this has affected my answer (if it has made any difference at all)


Look at the sub you did... If you want to integrate with respect to x (which is what you were doing before) then you need to relate x with y somehow ... But we already have the relation!

x^2 = y

So your error in the integration that led to Si(x) was that you were neglecting a factor of dy/dx in the integrand, which comes from using the substitution.

Anyway, you are allowed to take the positive root when square rooting because you know that x is positive.

If it was negative, you would need the negative sign on the root.

If X takes a mixture of positive and negative values, then it gets a bit more complicated and you need to consider both cases.
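For the two-sided case, here is a sketch with a standard normal X (my example, not one from the thread): P(X² ≤ y) = P(-√y ≤ X ≤ √y) = F(√y) - F(-√y), which for a standard normal equals erf(√(y/2)); a Monte Carlo estimate agrees with this CDF.

```python
# Sketch of the two-sided case: X standard normal, Y = X².
# P(Y ≤ y) = P(-√y ≤ X ≤ √y) = F(√y) - F(-√y); for a standard normal this
# equals erf(√(y/2)) (differentiating gives the chi-squared(1) density).
import math
import random

random.seed(0)

def cdf_Y(y):
    return math.erf(math.sqrt(y / 2.0))

samples = [random.gauss(0.0, 1.0) ** 2 for _ in range(200_000)]
empirical = sum(s <= 1.0 for s in samples) / len(samples)
print(empirical, cdf_Y(1.0))  # both ≈ 0.683
```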
Original post by Drogo Baggins
Thanks! What I don't understand is that 1 and 0 are not that close on a keyboard?

You've got a covert logician in your ranks...
Original post by RDKGames
Look at the sub you did... If you want to integrate with respect to x (which is what you were doing before) then you need to relate x with y somehow ... But we already have the relation!

x^2 = y

So your error in the integration that led to Si(x) was that you were neglecting a factor of dy/dx in the integrand, which comes from using the substitution.

Anyway, you are allowed to take the positive root when square rooting because you know that x is positive.

If it was negative, you would need the negative sign on the root.

If X takes a mixture of positive and negative values, then it gets a bit more complicated and you need to consider both cases.

Oh I see, that does make a lot of sense. I also now understand that if Y = m(X), where m is some increasing function, and the bounds of X are a < x < b, then the bounds of Y are m(a) < y < m(b). What I'm unsure about is whether I'm allowed to leave the final pdf answer in terms of √y in the example I tried, or whether I still have to put it back in terms of x.
Original post by Drogo Baggins
Thanks! What I don't understand is that 1 and 0 are not that close on a keyboard?


Look at the small keypad?
