Method of Moments Estimation

Chittesh14 (thread starter)
#1 · 1 month ago
Let X \sim \text{Bin}(n, \pi), where n is known. Find the method of moments estimator (MME) of \pi.

So, the answer just goes from:

Therefore, E(X) = n\pi, and hence \hat{\pi} = X/n.

I thought it'd be E(X) = n\pi; then you find the first sample moment, i.e. the sample mean, and equate the two.

So \widehat{E(X)} = \bar{X}, giving \hat{\pi}n = \bar{X} and hence \hat{\pi} = \frac{\bar{X}}{n}??

Also, is it right to say that the estimator of E(X) equals n times the estimator of \pi, because n is known and so doesn't need to be estimated?
Last edited by Chittesh14; 1 month ago
Gregorius
#2 · 4 weeks ago
(Original post by Chittesh14)
Let X \sim \text{Bin}(n, \pi), where n is known. Find the method of moments estimator (MME) of \pi.

So, the answer just goes from:

Therefore, E(X) = n\pi, and hence \hat{\pi} = X/n.
Notice that "n" here is a (known) parameter of the distribution - it would be so easy to mistake it for a sample size; but if you do want to think about a sample, you'll have to use another letter, m, say. Notice, however, that there is no mention of a sample!

The point is this: in the population, E[X] = np. So for a method of moments estimator, you would set the first sample moment equal to np and solve for p. So something like:

 \displaystyle np = (1/m) \sum_{i = 1}^{m} x_{i}

which you can then solve. Back to your problem: what happens if the sample size is m = 1? Notice that you get exactly the estimator that you have given!
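To make this concrete, here is a quick sketch in Python (not from the thread; the values of n, p, and m are made up for illustration): simulate m draws from Bin(n, p), then apply the method of moments by equating np with the sample mean.

```python
import random

random.seed(0)

n = 10        # known number of trials per Binomial observation
true_p = 0.3  # the parameter we pretend not to know
m = 1000      # sample size: m independent draws from Bin(n, p)

# Simulate each Bin(n, p) observation as a sum of n Bernoulli(p) trials.
sample = [sum(random.random() < true_p for _ in range(n)) for _ in range(m)]

# Method of moments: set n*p equal to the first sample moment and solve for p.
x_bar = sum(sample) / m
p_hat = x_bar / n

print(p_hat)
```

With m = 1 the sample mean is just the single observation X, which recovers \hat{\pi} = X/n as in the quoted answer.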


I thought it'd be E(X) = n\pi; then you find the first sample moment, i.e. the sample mean, and equate the two.
That's exactly what has been done; only the sample size is 1.

Also, is it right to say that the estimator of E(X) equals n times the estimator of \pi, because n is known and so doesn't need to be estimated?
n is known, so it doesn't need to be estimated. However, I'm not sure what the rest of what you've asked means.
Chittesh14 (thread starter)
#3 · 4 weeks ago
(Original post by Gregorius)
Notice that "n" here is a (known) parameter of the distribution - it would be so easy to mistake it for a sample size [...]
Thank you, I understand it better now. It seems we just use 'X' instead of 'X_1' because it is more generic, right? I also see now that n is the number of trials but not necessarily the sample size: you can take a sample of, say, m objects and run n trials of the experiment on each, so the two are distinct.
Gregorius
#4 · 4 weeks ago
(Original post by Chittesh14)
Thank you, I understand it better now. It seems we just use 'X' instead of 'X_1' because it is more generic, right?
I guess so! If that's out of your lecture notes, I think the explanation is a bit obscure.

I also see now that n is the number of trials but not necessarily the sample size: you can take a sample of, say, m objects and run n trials of the experiment on each, so the two are distinct.
Careful! There's plenty of opportunity here to get confused, arising from the fact that the Binomial Bin(n, p) distribution can be obtained as the distribution of a sum of n Bernoulli distributed random variables with fixed p.

So if you are working with Bernoulli distributed variables, n trials will give you n random variables, the sum of which is distributed as Bin(n, p).

But if you are working with Bin(n, p) distributed random variables to begin with, then m trials will give you a sample of size m (and their sum will be distributed as Bin(nm, p)).

In general, if you have a random variable X with distribution G, k "trials" corresponds to making k random draws from the distribution G.
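The distinction can be sketched in Python (illustrative values only, chosen for the example): summing n Bernoulli(p) indicators produces one Bin(n, p) draw, while a sample of m such Binomial draws sums to something distributed as Bin(nm, p).

```python
import random

random.seed(1)

p = 0.4  # fixed success probability (made up for the example)
n = 5    # Bernoulli trials per Binomial observation
m = 3    # number of Binomial observations in the sample

def bernoulli() -> bool:
    # One Bernoulli(p) trial.
    return random.random() < p

# n Bernoulli trials -> one Bin(n, p) random variable
one_binomial = sum(bernoulli() for _ in range(n))

# m draws of Bin(n, p) -> a sample of size m; their sum is Bin(n*m, p)
sample = [sum(bernoulli() for _ in range(n)) for _ in range(m)]
total = sum(sample)

print(one_binomial, sample, total)
```

Here `one_binomial` lies between 0 and n, while `total` lies between 0 and nm, matching the two cases described above.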
Chittesh14 (thread starter)
#5 · 4 weeks ago
(Original post by Gregorius)
I guess so! If that's out of your lecture notes, I think the explanation is a bit obscure. [...]
No haha, the explanation came from my brain, maybe that's why it's obscure lol.
Oh yes, that makes sense - thank you!