The Student Room Group

Edexcel S3 - Wednesday 25th May AM 2016


Original post by L'Evil Wolf
How did you get the confidence interval stuff? I'm still having trouble.

Do you think the Solomon papers are easier than the Edexcel ones?


To be honest, aside from some combinations-of-variables trouble I had yesterday (squaring vs not squaring, etc.), I got all of S3 on my first run-through in like 5 hours. Confidence intervals just sort of make sense if you think about the concept.

The error we can assume is normally distributed. Its value depends on how big a sample we take, so a bigger sample means a smaller error. The interval depends on what percentage we don't want to concern ourselves with on either side of the true mean (for 90% this is 5% either side), so we can use the normal tables to find the corresponding z value (as the error is normally distributed).

That's probably a patchy explanation in terms of the actual statistics but it's how I reasoned with the idea.
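As a rough sketch of that idea (the sample data and sigma here are made up purely for illustration; the z value 1.6449 for 5% in each tail is read from the standard normal tables):

```python
import math

# Hypothetical sample data, for illustration only
sample = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 9.7, 10.4, 10.2]
n = len(sample)
xbar = sum(sample) / n

# Assume the population standard deviation is known, sigma = 0.25
# (S3 CI questions usually give sigma, or a large-sample estimate of it)
sigma = 0.25

# 90% CI leaves 5% in each tail, so z is the 95th percentile of N(0,1);
# from the normal tables, z = 1.6449
z = 1.6449

# Bigger n => smaller standard error sigma/sqrt(n) => narrower interval
half_width = z * sigma / math.sqrt(n)
lower, upper = xbar - half_width, xbar + half_width
print(f"90% CI for the mean: ({lower:.4f}, {upper:.4f})")
```

Doubling the sample size shrinks the half-width by a factor of sqrt(2), which is the "bigger sample means smaller error" point above.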
Original post by Euclidean
To be honest, aside from some combinations-of-variables trouble I had yesterday (squaring vs not squaring, etc.), I got all of S3 on my first run-through in like 5 hours. Confidence intervals just sort of make sense if you think about the concept.

The error we can assume is normally distributed. Its value depends on how big a sample we take, so a bigger sample means a smaller error. The interval depends on what percentage we don't want to concern ourselves with on either side of the true mean (for 90% this is 5% either side), so we can use the normal tables to find the corresponding z value (as the error is normally distributed).

That's probably a patchy explanation in terms of the actual statistics but it's how I reasoned with the idea.


Interesting thanks. Someone explained this to me very well:

The way I learnt it was using bottles of wine. Say you have one massive bottle of wine (worth 12 small bottles in volume), and 12 small, individual bottles of wine. You'd expect the small bottles to vary naturally: some are above the mean, some below, so the differences partly cancel out and the total has a smaller variance. Using the formula, it would be 12Var(X). With the large bottle, however, the variance is amplified because you take one bottle's volume and multiply it by 12, so using the formula it will be 144Var(X). Hope you can understand it.
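The wine-bottle intuition is easy to check by simulation. A quick sketch (the bottle volumes are hypothetical, taken as N(750, 10²) ml, so Var(X) = 100):

```python
import random
import statistics

random.seed(0)
N = 100_000  # number of simulated cases

# Volume of one small bottle: hypothetical N(750, 10^2) ml
def small():
    return random.gauss(750, 10)

# Case 1: total of 12 independent small bottles
# Var(X1 + ... + X12) = 12 * Var(X) = 1200
totals = [sum(small() for _ in range(12)) for _ in range(N)]

# Case 2: one big bottle measured as 12 times one small bottle
# Var(12X) = 144 * Var(X) = 14400
big = [12 * small() for _ in range(N)]

print(statistics.variance(totals))  # close to 1200
print(statistics.variance(big))     # close to 14400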
Original post by L'Evil Wolf
It's not even that hard though, and it doesn't even carry many marks in the exam.

But it belongs in Science and Statistics


That's why science and number-crunching statistics is ****. I don't have time to learn words.
Geometry will rule the world.


Posted from TSR Mobile
Original post by physicsmaths
That's why science and number-crunching statistics is ****. I don't have time to learn words.
Geometry will rule the world.


Posted from TSR Mobile


Euclidean approves
Original post by physicsmaths
That's why science and number-crunching statistics is ****. I don't have time to learn words.
Geometry will rule the world.


Posted from TSR Mobile


Yeah I agree, after a while statistics becomes the same regurgitated content. Wilcoxon would have been fun to do instead of sampling aha :l
Is simple random sampling (i.e. random number sampling/lottery sampling) unsuitable when the POPULATION or the SAMPLE is large? The book says population, an old mark scheme said sample, and Baxter's notes say sample - which one is it?!
Original post by Nikhilm
Is simple random sampling (i.e. random number sampling/lottery sampling) unsuitable when the POPULATION or the SAMPLE is large? The book says population, an old mark scheme said sample, and Baxter's notes say sample - which one is it?!


Both, but the sample is more important.

Imagine trying to assign 10,000 people a number with no overlaps. But then imagine trying to sample 1,000 people from that population with random numbers. Pain in the a** if you ask me :lol:
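For what it's worth, the lottery method above takes only a couple of lines once the numbering is done (the 10,000 and 1,000 are the figures from the post; the tedium is in numbering the people, not in the draw itself):

```python
import random

random.seed(42)

# Assign each of the 10,000 population members a number 1..10000
population = range(1, 10_001)

# Lottery / random-number sampling: draw 1,000 distinct numbers,
# each member equally likely to be chosen
chosen = random.sample(population, 1_000)

print(len(chosen), len(set(chosen)))  # 1000 members, no repeats
```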
Can someone explain degrees of freedom? I have no clue what the book is going on about. Ch1-3 and Ch5 are easy; Ch4 is a nightmare.


Posted from TSR Mobile
Original post by physicsmaths
Can someone explain degrees of freedom? I have no clue what the book is going on about. Ch1-3 and Ch5 are easy; Ch4 is a nightmare.


Posted from TSR Mobile


Basically, it's just a measure of how much information you need to know before you can work the rest out.

With a goodness-of-fit test (checking if a distribution is a good model), v (degrees of freedom) = n - 1, where n is the number of columns (cells) you have. But if you use the data to estimate an unknown parameter, like p in a binomial model, you have to take another one away, so it becomes v = n - 2 (or v = n - 3 for a normal test, where you need to estimate both mu and sigma from the data).

With contingency tables, the degrees of freedom is v = (r-1)(c-1), where r and c are the numbers of rows and columns respectively.

Degrees of freedom, alongside the significance level, determine the critical value of the chi-squared statistic which would lead to rejection of the null hypothesis. Have a look at the formula booklet if you haven't already; you'll see what I mean.
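A hand-worked contingency-table example might make the v = (r-1)(c-1) rule concrete. A sketch (the observed counts are invented; the 5% critical value 5.991 for v = 2 is read from the chi-squared tables):

```python
# Hypothetical 2x3 contingency table of observed counts
observed = [[30, 20, 10],
            [20, 30, 10]]

rows = len(observed)
cols = len(observed[0])
row_totals = [sum(r) for r in observed]
col_totals = [sum(observed[i][j] for i in range(rows)) for j in range(cols)]
grand = sum(row_totals)

# Expected count under H0 (independence): row total * column total / grand total
chi_sq = 0.0
for i in range(rows):
    for j in range(cols):
        e = row_totals[i] * col_totals[j] / grand
        chi_sq += (observed[i][j] - e) ** 2 / e

v = (rows - 1) * (cols - 1)  # degrees of freedom: (2-1)(3-1) = 2

# 5% critical value for chi-squared with v = 2 is 5.991 (from the tables)
print(v, round(chi_sq, 3), chi_sq > 5.991)
```

Here the statistic comes out at 4.0, below the critical value 5.991, so H0 (independence) would not be rejected at the 5% level.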
Anyone know where the past papers start from for our spec? I've printed all of them from 2007; there weren't any January papers, were there?
Also, the book literally says that stratified sampling is used when the sample is large, but then in Ex 2B Q4c it's given as a disadvantage of stratified sampling if the sample size is large. Lol, so which is it?
Original post by paradoxequation
Also, the book literally says that stratified sampling is used when the sample is large, but then in Ex 2B Q4c it's given as a disadvantage of stratified sampling if the sample size is large. Lol, so which is it?


The first, definitely. The second point doesn't make sense.
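For reference, proportional stratified sampling is straightforward to sketch (the strata here are hypothetical, e.g. year groups in a sixth form):

```python
import random

random.seed(1)

# Hypothetical strata sizes
strata = {"Year 12": 300, "Year 13": 200}
total = sum(strata.values())
sample_size = 50

# Proportional allocation: each stratum contributes in proportion to its
# size, then a simple random sample is drawn within each stratum
allocation = {}
for name, size in strata.items():
    k = round(sample_size * size / total)  # 30 from Year 12, 20 from Year 13
    allocation[name] = random.sample(range(1, size + 1), k)

print({name: len(members) for name, members in allocation.items()})
```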
Will we be expected to apply continuity corrections in S3 questions?
Any hard non-edexcel papers? @Zacken
(edited 7 years ago)
Original post by L'Evil Wolf
Will we be expected to apply continuity corrections in S3 questions?


Not for anything concerning samples because, if n is large, by the CLT you just assume it's normally distributed, so no correction is required. Obvs it would be required for an S2-type question, but that's highly unlikely to come up in S3.
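A quick simulation illustrates the point: by the CLT, the mean of a large sample is approximately normal even when the parent distribution isn't, and the mean is a continuous quantity, so there is nothing discrete to correct for. (The exponential parent here is just an illustrative choice.)

```python
import random
import statistics

random.seed(0)

# Parent distribution: Exp(0.5), mean 2, sd 2 -- clearly not normal
n = 100          # large sample size, so the CLT applies
trials = 10_000

# Simulate the distribution of the sample mean
means = [statistics.fmean(random.expovariate(0.5) for _ in range(n))
         for _ in range(trials)]

# CLT: Xbar ~ N(mu, sigma^2/n) approximately,
# so the sd of the mean should be about 2/sqrt(100) = 0.2.
# No continuity correction involved: the sample mean is continuous.
print(round(statistics.fmean(means), 3), round(statistics.stdev(means), 3))
```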
Original post by L'Evil Wolf
Will we be expected to apply continuity corrections in S3 questions?


If we combine random variables from different distributions

Posted from TSR Mobile
Original post by Euclidean
If we combine random variables from different distributions

Posted from TSR Mobile


Original post by Nikhilm
Not for anything concerning samples because, if n is large, by the CLT you just assume it's normally distributed, so no correction is required. Obvs it would be required for an S2-type question, but that's highly unlikely to come up in S3.



Thanks, yes I see. Have you done an Edexcel S3 paper where they required this? I did a similar one on Madas.

Also, to how many d.p./s.f. do we give the confidence interval limits - 2 d.p., 3 s.f., etc.?
Original post by Euclidean
If we combine random variables from different distributions

Posted from TSR Mobile


Also, is Ex 17 in Chapter 3 wrong?

It should be 0.0334 in each tail...
Original post by L'Evil Wolf
Thanks, yes I see. Have you done an Edexcel S3 paper where they required this? I did a similar one on Madas.

Also, to how many d.p./s.f. do we give the confidence interval limits - 2 d.p., 3 s.f., etc.?


I do 4 significant figures normally, but I'm honestly not too sure
Original post by Euclidean
I do 4 significant figures normally, but I'm honestly not too sure


Ok thank you anyway :smile:

I do this in chapter 5 stuff :smile:
