Angels1234
Badges: 14
Rep:
?
#1
Report Thread starter 3 years ago
#1
Hi guys, I've been trying to do some questions and have a few queries about what I've been looking at.

https://postimg.cc/image/524jn22nv/

I don't understand 18b at all; the answers are exactly the same as 18a except with each interval endpoint multiplied by pi.

For 18c I got it right, though I feel guilty because it was a guess. It asks for the probability that ALL the intervals contain mu. My confusion is this: if there's a 98% probability that the true mean lies within an interval, that's not a 100% guarantee, so even though it's very likely, isn't there still a small chance that the true mean is outside the interval? So why does cubing give the answer, and how can we be sure that all three samples have the true mean in their intervals? And if one of the samples didn't contain the true mean, why would that be? Would it depend on the confidence level? For instance, could it happen if one interval were a 90% interval and the other two were each 99% intervals?

For 19b I have worked it out exactly how the mark scheme shows, but then the mark scheme says my method only gets 2 marks?? Bit confused here.


Thanks for the help. I really just want to understand this stuff 👍🏽
the bear
Report 3 years ago
#2
you are not alone in your confusion

https://en.wikipedia.org/wiki/Confidence_interval


Misunderstandings

See also: § Counter-examples; Misunderstandings of p-values
Confidence intervals are frequently misunderstood, and published studies have shown that even professional scientists often misinterpret them.[7][8][9][10]
  • A 95% confidence interval does not mean that for a given realized interval there is a 95% probability that the population parameter lies within the interval (i.e., a 95% probability that the interval covers the population parameter).[11] Once an experiment is done and an interval calculated, this interval either covers the parameter value or it does not; it is no longer a matter of probability. The 95% probability relates to the reliability of the estimation procedure, not to a specific calculated interval.[12] Neyman himself (the original proponent of confidence intervals) made this point in his original paper:[3]
Angels1234
Report Thread starter 3 years ago
#3
(Original post by the bear)
you are not alone in your confusion

https://en.wikipedia.org/wiki/Confidence_interval [...]
So a 95 percent confidence interval does NOT mean there's a 95% chance that the true population mean lies within the interval? My life's been a lie :O What is it then? I don't get it! In questions it says 95/98/99 confidence interval. If the probability relates to the reliability of an estimation procedure, and not to the chance that the interval contains the true mean, then what exactly has this got to do with the questions, and why is there a relation between confidence intervals and probability?
Angels1234
Report Thread starter 3 years ago
#4
Gregorius

Please help ? 😭😭
Angels1234
Report Thread starter 3 years ago
#5
RDKGames

If you have time to reply , I would also really appreciate your help
old_engineer
Report 3 years ago
#6
(Original post by the bear)
you are not alone in your confusion

https://en.wikipedia.org/wiki/Confidence_interval [...]
Just to offer an alternative view, the Edexcel M3 textbook (2008 syllabus) states "What a 95% confidence interval tells you is that the probability that the interval contains mu is 0.95". I would recommend anyone taking the Edexcel exam to go with this interpretation, even though there may be philosophical hairs to be split elsewhere.

For part (b) of the question, circumference = 2(pi)r, so the confidence interval for the circumference is just 2(pi) times that for the radius. (This is just a one-point question.)

For part (c), if we accept that the three 98% confidence intervals are calculated from independent random samples, then the probability that any one of them looked at on its own contains mu is 0.98, and the probability that all three of them contain mu is 0.98^3.
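If you want a quick numerical check of that independence argument, here's a small Python sketch (mine, not part of the mark scheme): it computes 0.98^3 directly and then confirms it by simulating three independent coverage events.

```python
import random

# Probability that any one 98% CI, from an independent random sample,
# covers the true mean mu.
p_single = 0.98

# If the three coverage events are independent, multiply:
p_all = p_single ** 3
print(round(p_all, 4))  # 0.9412

# Monte Carlo sanity check: simulate three independent events,
# each succeeding with probability 0.98.
random.seed(1)
trials = 200_000
hits = sum(
    all(random.random() < p_single for _ in range(3))
    for _ in range(trials)
)
print(round(hits / trials, 3))  # close to 0.941
```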
Gregorius
Report 3 years ago
#7
(Original post by the bear)
you are not alone in your confusion

https://en.wikipedia.org/wiki/Confidence_interval [...]
I must admit, I don’t think that Wikipedia article is particularly helpful at this point, as it doesn’t go on to explain what it’s just said. What it is alluding to concerns problems at the very foundation of what we mean by “probability”.

At a UK school level, probability is interpreted from the “repeated experiment” point of view. I take a coin, I toss it 10,000 times and it comes up heads 5778 times – I’ll then model a single coin toss as a “Bernoulli experiment” with p(head) = 0.5778, and I’ll define a random variable X to be one if we get a head and zero if we get a tail.

Let’s introduce a couple of complications.

The first is the difference between a random variable and a “realization” of a random variable. Before we toss that coin we can quite happily say that it has a probability 0.5778 of coming up heads. After we toss that coin, it is either a head or a tail – there’s no probability involved any more – our random variable has been “realized” and its value is either one or zero.

Take this over to the case of confidence intervals. If I were to repeatedly draw samples from a population and calculate a 95% confidence interval for the mean from each sample, then I would expect 95% of those confidence intervals to enclose the true population mean. But if I actually calculate a confidence interval, it either encloses the population mean or it does not – there is no probability involved any more. When statisticians/probabilists refer to a confidence interval having a 95% probability of enclosing the true population value (and this is one of the things the Wikipedia article misses), they are implicitly referring to the confidence interval as a (two-valued) random variable, and not to the realization of the confidence interval – which either does or does not enclose the true population value. Perhaps we could be more rigorous by saying before the sample is drawn “if you were to draw (future tense) a random sample and calculate a 95% confidence interval, then it would have (future tense) a 95% probability of enclosing the true population mean; after you have calculated it, it either does or does not enclose the true population mean”, but journal editors would go crazy at us for wasting words!

I said there were a couple of complications; here’s the second. Perhaps there’s a difference between the cases where we know, and where we don’t know, the true value of something. For example, suppose we have an unbiased coin, and a friend says to us “I’ve written either H or T on a piece of paper and sealed it in an envelope; if you toss that coin and it comes up with the value I wrote down, you win £5, if not, I win £5”. Before I toss the coin, I can quite happily say that my chances of winning are 0.5. I now toss the coin and it comes down as a head. So my coin toss either equals what’s in the envelope, or it does not. But I don’t know which, as I have not yet opened the envelope! Can I still say, before opening the envelope, that my chances of winning are 0.5?

This latter complication introduces the idea that probability represents our uncertainty about some state of affairs (and is related to how things develop in the Bayesian formulation of probability). If we swing this over to the example of the confidence interval, recognizing that we do not know the true value of the population mean, then perhaps you’ll see that it might still be reasonable to say that a realized confidence interval has a certain probability of enclosing the true population mean! Going back to the Wikipedia article, one of the things that it doesn’t say is that at the time Neyman was developing the idea of confidence intervals, there was very much a focus towards the “repeated sampling” approach to probability and less towards the “uncertainty” point of view – so it’s not surprising he said what he said!
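If you'd like to see the repeated-sampling picture concretely, here is a quick Python simulation (my own sketch; the population values mu = 10, sigma = 2 and sample size n = 50 are made up for illustration): draw many samples, build a 95% interval for the mean each time, and count how often the interval encloses the true mean.

```python
import random
import statistics

random.seed(0)
mu, sigma, n = 10.0, 2.0, 50   # hypothetical known population
z = 1.96                       # two-sided 95% normal critical value
reps = 5000
covered = 0
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = statistics.mean(sample)
    half = z * sigma / n ** 0.5  # known-sigma interval, as in school questions
    if xbar - half <= mu <= xbar + half:
        covered += 1
print(covered / reps)  # roughly 0.95
```

About 95% of the realized intervals cover mu, even though each individual interval, once computed, either does or does not.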
Gregorius
Report 3 years ago
#8
(Original post by Angels1234)
Gregorius

Please help ? 😭😭
I hope the reply I made to the bear explains the interpretation side of things - and old_engineer has nailed the actual question answers!
Angels1234
Report Thread starter 3 years ago
#9
(Original post by old_engineer)
Just to offer an alternative view, the Edexcel M3 textbook (2008 syllabus) states "What a 95% confidence interval tells you is that the probability that the interval contains mu is 0.95". [...]
Thank you for a great response! I do understand part C now. For part B they didn't have a 2 in the answer, which is why I was confused.

This was the answer to part B https://postimg.cc/image/5pqvsomjv/
Also for part B, why would the answer even be the same as part A? Aren't the two questions asking two different things? Isn't part A just a standard confidence interval, while part B doesn't ask for the mean or anything - so no confidence interval is required? Isn't this question just asking me to find an interval of values which covers 98 percent of the data?

I am going to read Gregorius's answer later, so hopefully that will help me to understand more. My exam is in under 4 weeks and I'm petrified to say the least. I'm finding this stuff quite confusing to understand.
the bear
Report 3 years ago
#10
(Original post by Gregorius)
I must admit, I don’t think that Wikipedia article is particularly helpful at this point, as it doesn’t go on to explain what it’s just said. [...]
thank you for that detailed exposition... i am sure it has cleared up many uncertainties surrounding this intriguing topic !
old_engineer
Report 3 years ago
#11
(Original post by Angels1234)
Thank you for a great response! I do understand part C now. [...]
Part (b) is trivial. It’s just a multiple of part (a) with no other working required. The mysterious factor of 2 is due to the original variable being diameter rather than radius. Sorry, should have spotted that. Circumference = (pi)D.
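To see the scaling concretely, here's a tiny Python sketch (the diameter interval endpoints below are made-up illustrative values, not the ones from the paper): multiplying an interval by a positive constant just multiplies both endpoints.

```python
import math

# Hypothetical 98% CI for the mean diameter (illustrative numbers only).
d_low, d_high = 4.20, 4.60

# Circumference = pi * D, so the CI for the mean circumference is the
# diameter CI with both endpoints scaled by pi.
c_low, c_high = math.pi * d_low, math.pi * d_high
print(round(c_low, 2), round(c_high, 2))  # 13.19 14.45
```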
Angels1234
Report Thread starter 3 years ago
#12
(Original post by Gregorius)
I must admit, I don’t think that Wikipedia article is particularly helpful at this point, as it doesn’t go on to explain what it’s just said. [...]
I've only had a chance to read this just now and wanted to say your explanation is fantastic. It's certainly cleared up a lot of confusion and I really appreciate it. I feel like this is what they should explain in the book, otherwise students will be confused. I have one question I'd like to ask. When you said "a realized confidence interval has a certain probability of enclosing the true population mean", I thought that a realised confidence interval, which is found after we do the calculations, not before, either contains or doesn't contain mu. So if something is realised (after calculation), isn't the probability either one or zero? And isn't there only a probability before any calculations? For example, before tossing, the probability of a tail with an unbiased coin is 0.5, and after tossing it's zero or one, like you mentioned above.

Thanks again
Gregorius
Report 3 years ago
#13
(Original post by Angels1234)
I've only had a chance to read this just now and wanted to say your explanation is fantastic. [...]
I wrote that when I was writing about other ways of thinking about probability - in particular, situations where certain facts are unknown to us and we use probability to express that lack of knowledge.

As far as you're concerned, at the level you're studying, you should keep to the explanations I gave before that!