Method of Moments Estimation

#1
Let X ~ Bin(n, p), where n is known. Find the method of moments estimator (MME) of p.

So, the answer just goes from:

Therefore, E[X] = np, and hence p̂ = X/n.

I thought it'd be E[X] = np; then you find the first sample moment, i.e. the sample mean, and then equate the two.

So X̄ = np̂, and so p̂ = X̄/n??

Also, is it right to say that the estimator of E(X) = (estimator of p) * n, because n is known and so doesn't require to be estimated?
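As a sanity check on the estimator p̂ = X/n (not from the thread; the values n = 20 and p = 0.25 below are made up for illustration), a minimal Python sketch that repeats the single-observation experiment many times and shows the estimates averaging out to the true p:

```python
import random

random.seed(2)
n, p = 20, 0.25  # hypothetical values: n known, p is the target of estimation
reps = 50000

# With a single observation X ~ Bin(n, p), the MME is p_hat = X / n.
# Repeating the one-draw experiment shows p_hat averages out to p.
estimates = []
for _ in range(reps):
    x = sum(random.random() < p for _ in range(n))  # one draw from Bin(n, p)
    estimates.append(x / n)                         # MME from that single X

avg = sum(estimates) / reps
print(round(avg, 3))
```

Any one estimate X/n is coarse (it can only take the values 0/n, 1/n, ..., n/n), but its expectation is p, which is what the average above illustrates.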
Last edited by Chittesh14; 1 month ago
4 weeks ago
#2
Notice that "n" here is a (known) parameter of the distribution - it would be so easy to mistake it for a sample size; but if you do want to think about a sample, you'll have to use another letter, m, say. Notice, however, that there is no mention of a sample!

The point is this: in the population, E[X] = np. So for a method of moments estimator, you would set the first sample moment equal to np and solve for p. So something like: (X_1 + ... + X_m)/m = np̂, which you can then solve. Back to your problem: what happens if the sample size m = 1? Notice that you get the estimator that you have given!
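The procedure above can be sketched numerically (the values n = 10, p = 0.3, m = 1000 are made up for illustration): draw a sample of size m from Bin(n, p), equate the sample mean with np̂, and solve for p̂:

```python
import random

random.seed(0)
n, p, m = 10, 0.3, 1000  # hypothetical values: n known, p unknown, sample size m

# Simulate X_1, ..., X_m from Bin(n, p), each as a sum of n Bernoulli(p) trials.
sample = [sum(random.random() < p for _ in range(n)) for _ in range(m)]

sample_mean = sum(sample) / m  # first sample moment
p_hat = sample_mean / n        # set sample mean = n * p_hat and solve for p_hat
print(round(p_hat, 3))
```

With m = 1 the "sample mean" is just the single observation X, recovering p̂ = X/n as in the original question.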

I thought it'd be E[X] = np; then you find the first sample moment, i.e. the sample mean, and then equate the two.
That's exactly what has been done; only the sample size is 1.

Also, is it right to say that the estimator of E(X) = (estimator of p) * n, because n is known and so doesn't require to be estimated?
n is known, so it doesn't need to be estimated. However, I'm not sure what the rest of what you've asked means.
#3
(Original post by Gregorius)
Notice that "n" here is a (known) parameter of the distribution - it would be so easy to mistake it for a sample size; but if you do want to think about a sample, you'll have to use another letter, m, say. Notice, however, that there is no mention of a sample!

The point is this: in the population, E[X] = np. So for a method of moments estimator, you would set the first sample moment equal to np and solve for p. So something like: (X_1 + ... + X_m)/m = np̂, which you can then solve. Back to your problem: what happens if the sample size m = 1? Notice that you get the estimator that you have given!

That's exactly what has been done; only the sample size is 1.

n is known, so it doesn't need to be estimated. However, I'm not sure what the rest of what you've asked means.
Thank you, I seem to understand it better now. It seems like we just use 'X' instead of 'X_1' because it is more generic, right? n is the number of trials but not necessarily the sample size. You can take a sample of, say, m objects and have n trials of the experiment, so they are distinct.
4 weeks ago
#4
(Original post by Chittesh14)
Thank you, I seem to understand it better now. It seems like we just use 'X' instead of 'X_1' because it is more generic, right?
I guess so! If that's out of your lecture notes, I think the explanation is a bit obscure.

n is the number of trials but not necessarily the sample size. You can take a sample of, say, m objects and have n trials of the experiment, so they are distinct.
Careful! There's plenty of opportunity here to get confused, arising from the fact that the Binomial Bin(n, p) distribution can be obtained as the distribution of a sum of n Bernoulli distributed random variables with fixed p.

So if you are working with Bernoulli distributed variables, n trials will give you n random variables, the sum of which is distributed as Bin(n, p).

But if you are working with Bin(n, p) distributed random variables to begin with, then m trials will give you a sample of size m (and their sum will be distributed as Bin(nm, p)).
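That last claim (the sum of m independent Bin(n, p) variables is distributed as Bin(nm, p)) can be checked by simulation; the values n = 5, p = 0.4, m = 3 below are made up for illustration:

```python
import random

random.seed(1)
n, p, m = 5, 0.4, 3  # hypothetical values
trials = 20000

# Each observation is a sum of m independent Bin(n, p) draws;
# the claim is that this sum is distributed as Bin(n*m, p).
sums = []
for _ in range(trials):
    s = sum(sum(random.random() < p for _ in range(n)) for _ in range(m))
    sums.append(s)

mean = sum(sums) / trials
var = sum((x - mean) ** 2 for x in sums) / trials
# Bin(n*m, p) has mean n*m*p = 6.0 and variance n*m*p*(1-p) = 3.6
print(round(mean, 2), round(var, 2))
```

Matching the first two moments doesn't prove the distributions are equal, of course, but here the result also follows directly from the sum-of-Bernoullis construction: the grand total is a sum of n*m independent Bernoulli(p) trials.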

In general, if you have a random variable X with distribution G, k "trials" will correspond to making k random draws from the distribution G.
#5
(Original post by Gregorius)
I guess so! If that's out of your lecture notes, I think the explanation is a bit obscure.

Careful! There's plenty of opportunity here to get confused, arising from the fact that the Binomial Bin(n, p) distribution can be obtained as the distribution of a sum of n Bernoulli distributed random variables with fixed p.

So if you are working with Bernoulli distributed variables, n trials will give you n random variables, the sum of which is distributed as Bin(n, p).

But if you are working with Bin(n, p) distributed random variables to begin with, then m trials will give you a sample of size m (and their sum will be distributed as Bin(nm, p)).

In general, if you have a random variable X with distribution G, k "trials" will correspond to making k random draws from the distribution G.
No haha, the explanation came from my brain, maybe that's why it's obscure lol.
Oh yes, that makes sense - thank you!