# Normal Distribution question

#1
Why do we need to create a new distribution for Xbar? I tried doing it with just X and it gave the wrong answer. I thought the normal distribution is always the same, no?
Thanks
Last edited by mathshelppls123; 1 week ago
0
1 week ago
#2
because you need the sample mean
0
#3
(Original post by englishhopeful98)
because you need the sample mean
Thanks, but we need the critical region.

A normal distribution curve always looks the same, no? That's why I'm confused about why we need to work it out using the Xbar thing.
0
1 week ago
#4
(Original post by mathshelppls123)
Thanks, but we need the critical region.

A normal distribution curve always looks the same, no? That's why I'm confused about why we need to work it out using the Xbar thing.
because then it will be representative of the whole sample
0
#5
(Original post by englishhopeful98)
because then it will be representative of the whole sample
A normal distribution curve always looks the same, so surely the critical region for the whole population should be the same as for the smaller sample?
0
1 week ago
#6
(Original post by mathshelppls123)
A normal distribution curve always looks the same, so surely the critical region for the whole population should be the same as for the smaller sample?
The (normal) distribution of X (the original population) is different from the distribution associated with the sample mean, Xbar, where you take a random sample of size N from the population and estimate the population mean (you assume you know the population standard deviation). If you take different random samples of size N, each time you'll get a different estimate of the mean, hence there is a (normal) distribution associated with the mean estimate. The mean of the mean estimates is the mean of the original population, but the distributions are not the same because the standard deviations are different.
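As an aside, this is exactly why the critical region comes out differently. Here's a quick sketch with made-up numbers (mu = 30, sigma = 4, n = 16, alpha = 5% are illustrative, not from the question), using Python's built-in NormalDist:

```python
from statistics import NormalDist

# Hypothetical numbers (not from the original question):
# population X ~ N(mu, sigma^2), sample of size n, one-tailed 5% test.
mu, sigma, n = 30.0, 4.0, 16
alpha = 0.05

# Critical value computed (wrongly) from the population distribution of X:
crit_X = NormalDist(mu, sigma).inv_cdf(1 - alpha)

# Critical value from the sampling distribution Xbar ~ N(mu, (sigma/sqrt(n))^2):
crit_Xbar = NormalDist(mu, sigma / n ** 0.5).inv_cdf(1 - alpha)

print(f"critical value using X:    {crit_X:.2f}")
print(f"critical value using Xbar: {crit_Xbar:.2f}")
```

Because Xbar has the smaller standard deviation (sigma/sqrt(n) instead of sigma), its critical value sits much closer to mu, which is why using X gives the wrong answer.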

As a simple thought experiment, if you took a single sample (N=1) and used that as the mean estimate, then the sample mean distribution and the population distribution would indeed be the same.

If you took a random sample of size N=1000000...., the estimate of the population mean should be excellent (reliable, little spread) and the associated distribution would be a near zero width "spike" (a very thin but very high normal distribution) centered on the population mean, again as should be clear from the sample mean standard deviation which is the population standard deviation divided by sqrt(N).

For values of N in between these two extreme cases, you'd expect the uncertainty (standard deviation) in the sample mean estimate to decrease as you increase N, because you're using more data to estimate the mean. This is what the division by sqrt(N) represents in the sample mean standard deviation formula.
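You can check that 1/sqrt(N) shrinkage with a quick simulation (an illustrative sketch; the population values mu = 50, sigma = 10 are made up, not from the question):

```python
import random
import statistics

# Draw many samples of size n from a normal population and compare the
# observed spread of the sample means with the predicted sigma / sqrt(n).
random.seed(0)
mu, sigma = 50.0, 10.0

for n in (1, 25, 400):
    # 2000 independent sample means, each from a fresh sample of size n
    means = [
        statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
        for _ in range(2000)
    ]
    observed = statistics.stdev(means)
    predicted = sigma / n ** 0.5
    print(f"n={n:4d}  observed sd of Xbar = {observed:.2f}  "
          f"predicted sigma/sqrt(n) = {predicted:.2f}")
```

The observed standard deviation of the sample means tracks sigma/sqrt(n) closely: roughly 10 at n=1, 2 at n=25, and 0.5 at n=400.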

So the sample mean and the original population do indeed have different distributions. There's a bit more detail in
https://stats.libretexts.org/Bookshe...0sample%20size.
Last edited by mqb2766; 1 week ago
1
#7
(Original post by mqb2766)
The (normal) distribution of X (the original population) is different from the distribution associated with the sample mean, Xbar, where you take a random sample of size N from the population and estimate the population mean (you assume you know the population standard deviation). If you take different random samples of size N, each time you'll get a different estimate of the mean, hence there is a (normal) distribution associated with the mean estimate. The mean of the mean estimates is the mean of the original population, but the distributions are not the same because the standard deviations are different.

As a simple thought experiment, if you took a single sample (N=1) and used that as the mean estimate, then the sample mean distribution and the population distribution would indeed be the same.

If you took a random sample of size N=1000000...., the estimate of the population mean should be excellent (reliable, little spread) and the associated distribution would be a near zero width "spike" (a very thin but very high normal distribution) centered on the population mean, again as should be clear from the sample mean standard deviation which is the population standard deviation divided by sqrt(N).

For values of N in between these two extreme cases, you'd expect the uncertainty (standard deviation) in the sample mean estimate to decrease as you increase N, because you're using more data to estimate the mean. This is what the division by sqrt(N) represents in the sample mean standard deviation formula.

So the sample mean and the original population do indeed have different distributions. There's a bit more detail in
https://stats.libretexts.org/Bookshe...0sample%20size.
Thanks a lot
0