# S3 help!! Tired and confused :(


Hi guys, I've been trying to do some questions and I have a few queries about what I've been looking at.

https://postimg.cc/image/524jn22nv/

I don't understand 18b at all; the answers are exactly the same as 18a except it has pi multiplied by each interval.

For 18c I got it right, but I feel guilty because it was a guess. It's asking for the probability that ALL the intervals contain mu. My confusion is this: if there's a 98% probability that the true mean lies within an interval, that's not a 100 per cent guarantee, so even though it's very, very likely the true mean is inside the interval, isn't there still a small chance of it being outside? So I'm wondering why cubing gives the answer, and how we can be sure that all three samples have the true mean in their intervals. And if one of the samples didn't contain the true mean, why would that be? Would it depend on the confidence level? For instance, would this occur if one interval were at 90 per cent and the other two were each at 99 per cent?

For 19b I have worked it out exactly how the mark scheme shows, but then the mark scheme says my method only gets 2 marks?? Bit confused here.

Thanks for the help. I really just want to understand this stuff 👍🏽



#2 (the bear)

you are not alone in your confusion

https://en.wikipedia.org/wiki/Confidence_interval

From the "Misunderstandings" section:

"Confidence intervals are frequently misunderstood, and published studies have shown that even professional scientists often misinterpret them. A 95% confidence interval does not mean that for a given realized interval there is a 95% probability that the population parameter lies within the interval (i.e., a 95% probability that the interval covers the population parameter). Once an experiment is done and an interval calculated, this interval either covers the parameter value or it does not; it is no longer a matter of probability. The 95% probability relates to the reliability of the estimation procedure, not to a specific calculated interval. Neyman himself (the original proponent of confidence intervals) made this point in his original paper."


#6 (old_engineer)

(Original post by **the bear**)
you are not alone in your confusion […]

Just to offer an alternative view, the Edexcel M3 textbook (2008 syllabus) states "What a 95% confidence interval tells you is that the probability that the interval contains mu is 0.95". I would recommend anyone taking the Edexcel exam to go with this interpretation, even though there may be philosophical hairs to be split elsewhere.

For part (b) of the question, circumference = 2(pi)r, so the confidence interval for the circumference is just 2(pi) times that for the radius. (This is just a one-point question.)

For part (c), if we accept that the three 98% confidence intervals are calculated from independent random samples, then the probability that any one of them, looked at on its own, contains mu is 0.98, and the probability that all three of them contain mu is 0.98^3.
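Both calculations can be checked numerically. Here is a minimal Python sketch; the radius interval is made up purely for illustration (it is not from the question), only the 2(pi) scaling and the 0.98^3 product come from the post above.

```python
import math

# Part (b): a confidence interval transforms linearly. If (lo, hi) is a
# 98% CI for the radius r, then because C = 2*pi*r, the interval
# (2*pi*lo, 2*pi*hi) is a 98% CI for the circumference C.
radius_ci = (4.0, 5.0)               # hypothetical interval, for illustration only
circumference_ci = tuple(2 * math.pi * x for x in radius_ci)
print(circumference_ci)              # roughly (25.13, 31.42)

# Part (c): intervals from three independent samples each cover mu with
# probability 0.98, so all three cover mu with probability 0.98**3.
p_all_three = 0.98 ** 3
print(round(p_all_three, 6))         # 0.941192
```

The key point in part (c) is independence: the probabilities only multiply because the three samples are drawn independently.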


#7 (Gregorius)

(Original post by **the bear**)
you are not alone in your confusion […]

I must admit, I don't think that Wikipedia article is particularly helpful at this point, as it doesn't go on to explain what it's just said. What it is alluding to concerns problems at the very foundation of what we mean by "probability".

At a UK school level, probability is interpreted from the "repeated experiment" point of view. I take a coin, I toss it 10,000 times and it comes up heads 5778 times – I'll then model a single coin toss as a "Bernoulli experiment" with p(head) = 0.5778, and I'll define a random variable X to be one if we get a head and zero if we get a tail.

Let’s introduce a couple of complications.

The first is the difference between a random variable and a “realization” of a random variable. Before we toss that coin we can quite happily say that it has a probability 0.5778 of coming up heads. After we toss that coin, it is either a head or a tail – there’s no probability involved any more – our random variable has been “realized” and its value is either one or zero.

Take this over to the case of confidence intervals. If I were to repeatedly draw samples from a population and calculate a 95% confidence interval for the mean from each sample, then I would expect 95% of those confidence intervals to enclose the true population mean. But if I actually calculate a confidence interval, it either encloses the population mean or it does not – there is no probability involved any more. When statisticians/probabilists refer to a confidence interval having a 95% probability of enclosing the true population value (and this is one of the things the Wikipedia article misses), they are implicitly referring to the confidence interval as a (two-valued) random variable, and not to the realization of the confidence interval – which either does or does not enclose the true population value. Perhaps we could be more rigorous by saying before the sample is drawn “if you were to draw (future tense) a random sample and calculate a 95% confidence interval, then it would have (future tense) a 95% probability of enclosing the true population mean; after you have calculated it, it either does or does not enclose the true population mean”, but journal editors would go crazy at us for wasting words!
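The "repeated sampling" picture in the paragraph above is easy to simulate. This is an illustrative sketch, not part of the original question: the population parameters mu and sigma and the sample size n are invented, and a z-interval is used because sigma is treated as known.

```python
import random
import statistics

# Draw many samples from a population with a known mean, build a 95%
# z-interval from each, and count how often the realized interval
# encloses the true mean. MU, SIGMA, N are invented for illustration.
random.seed(1)
MU, SIGMA, N = 50.0, 10.0, 25
Z = 1.96                              # two-sided 95% point of N(0, 1)
half_width = Z * SIGMA / N ** 0.5     # sigma treated as known

TRIALS = 10_000
covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    xbar = statistics.mean(sample)
    if xbar - half_width <= MU <= xbar + half_width:
        covered += 1

# About 95% of the procedure's intervals cover mu; any single realized
# interval simply does or does not.
print(covered / TRIALS)               # close to 0.95
```

The 95% describes the long-run behaviour of the procedure across the 10,000 repetitions, which is exactly the distinction the paragraph above draws.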

I said there were a couple of complications; here's the second. Perhaps there's a difference between the cases where we know, and where we don't know, the true value of something. For example, suppose we have an unbiased coin, and a friend says to us "I've written either H or T on a piece of paper and sealed it in an envelope; if you toss that coin and it comes up with the value I wrote down, you win £5; if not, I win £5". Before I toss the coin, I can quite happily say that my chances of winning are 0.5. I now toss the coin and it comes down as a head. So my coin toss either equals what's in the envelope, or it does not. But I don't know which, as I have not yet opened the envelope! Can I still say, before opening the envelope, that my chances of winning are 0.5?
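The envelope game can also be simulated. This sketch is purely illustrative (no names or numbers here come from the question): even restricting attention to games where the toss has already happened, the win rate stays at one half, because our uncertainty about the sealed envelope is unchanged.

```python
import random

# The envelope game: a friend commits to H or T in advance; we then toss
# a fair coin. After the toss the outcome is fixed, but until the
# envelope is opened our uncertainty about winning is still one half.
random.seed(2)
TRIALS = 100_000
wins = 0
for _ in range(TRIALS):
    envelope = random.choice("HT")    # sealed before the toss
    toss = random.choice("HT")        # the realized toss is now fixed...
    if toss == envelope:              # ...but we can't see the envelope yet
        wins += 1

print(wins / TRIALS)                  # close to 0.5
```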

This latter complication introduces the idea that probability represents our uncertainty about some state of affairs (and is related to how things develop in the Bayesian formulation of probability). If we swing this over to the example of the confidence interval, recognizing that we do not know the true value of the population mean, then perhaps you’ll see that it might still be reasonable to say that a realized confidence interval has a certain probability of enclosing the true population mean! Going back to the Wikipedia article, one of the things that it doesn’t say is that at the time Neyman was developing the idea of confidence intervals, there was very much a focus towards the “repeated sampling” approach to probability and less towards the “uncertainty” point of view – so it’s not surprising he said what he said!


#8 (Angels1234)

(Original post by **old_engineer**)
Just to offer an alternative view, the Edexcel M3 textbook (2008 syllabus) states "What a 95% confidence interval tells you is that the probability that the interval contains mu is 0.95". […]

Thank you for a great response! I do understand part (c) now. For part (b) they didn't have a 2 in the answer, which is why I was confused. This was the answer to part (b): https://postimg.cc/image/5pqvsomjv/

Also for part (b), why would the answer even be the same as part (a)? Aren't the two questions asking two different things? Isn't part (a) just a standard confidence interval, while part (b) doesn't ask for the mean or anything – so no confidence interval is required? Isn't this question just asking me to find an interval of values which covers 98 per cent of the data?

I am going to read Gregorius's answer later, so hopefully that will help me to understand more. My exam is in under 4 weeks and I'm petrified, to say the least. I'm finding this stuff quite confusing to understand.


#10

(Original post by **Gregorius**)
I must admit, I don't think that Wikipedia article is particularly helpful at this point, as it doesn't go on to explain what it's just said. […]


#11

(Original post by **Angels1234**)
Thank you for a great response! I do understand part (c) now. […] I'm finding this stuff quite confusing to understand.


(Original post by **Gregorius**)
I must admit, I don't think that Wikipedia article is particularly helpful at this point, as it doesn't go on to explain what it's just said. […]

Thanks again


#13 (Gregorius)

(Original post by **Angels1234**)
I've only had a chance to read this right now and wanted to say your explanation is fantastic. It's certainly cleared up a lot of confusion and I really appreciate it. I feel like this is what they should explain in the book, otherwise students will be confused. I have one question I'd like to ask. When you said "a realized confidence interval has a certain probability of enclosing the true population mean", I thought that a realised confidence interval, which is found after we do the calculations, not before, either contains or doesn't contain mu. So if something is realised (after calculation), isn't the probability either one or zero? And isn't there only a probability before any calculations? For example, before tossing, the probability of a tail using an unbiased coin is 0.5, and after tossing it's zero or one, like you mentioned above?

Thanks again

[…] our **knowledge** of certain facts is incomplete, and we use probability to express the degree of lack of knowledge.

As far as you're concerned, at the level you're studying, you should keep to the explanations I gave before that!
