# Methods of Moments Estimation

#1

The question is at the bottom of the image.
If I have a formula for my population parameter in terms of the population moments, can I just say that the estimate of that parameter is obtained by substituting the estimates of those population moments, i.e. the sample moments, into the formula?
#2
(Original post by Chittesh14)
I'm afraid I couldn't follow your written work. But it looks like you're making heavy weather of it! The point of the method of moments is this: In the population you have and . In the sample, you have first and second moments given by and , respectively. You get the method of moments estimators and by equating these to get a set of simultaneous equations.

The rest is algebra.
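As a minimal sketch of that recipe (the parameterisation by mean μ and variance σ² is assumed here; the function name and sample are illustrative, not from the thread):

```python
import random

def mom_estimates(sample):
    """Method of moments for (mu, sigma^2): equate the sample moments
    m1 = mu_hat and m2 = sigma2_hat + mu_hat^2, then solve."""
    n = len(sample)
    m1 = sum(sample) / n                  # first sample moment
    m2 = sum(x * x for x in sample) / n   # second sample moment
    mu_hat = m1
    sigma2_hat = m2 - m1 ** 2             # biased but consistent
    return mu_hat, sigma2_hat

random.seed(0)
data = [random.gauss(5.0, 2.0) for _ in range(10_000)]  # true mu=5, sigma^2=4
mu_hat, sigma2_hat = mom_estimates(data)
```

With a large sample the estimates land close to the true values, since the sample moments converge to the population moments.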
#3
(Original post by Gregorius)
I'm afraid I couldn't follow your written work. But it looks like you're making heavy weather of it! The point of the method of moments is this: in the population you have E[X] = μ and E[X²] = σ² + μ². In the sample, you have first and second moments given by m₁ = (1/n)∑xᵢ and m₂ = (1/n)∑xᵢ², respectively. You get the method of moments estimators μ̂ and σ̂² by equating these to get a set of simultaneous equations.

The rest is algebra.
OK, that makes much more sense, the way you've written it. I was always confused about which to estimate and all that stuff.
So, can I say: E[X²] = σ² + μ², and hence the estimator of E[X²] is given by the sum of the estimators of σ² and μ²?
So, m₂ = σ̂² + μ̂²?
#4
(Original post by Chittesh14)
OK, that makes much more sense - the way you've written it. I was always confused which to estimate and all that stuff.
This is one of those instances where it's easy to get lost in notation (I can remember being confused by this forty years ago!) and it's best, perhaps, to put it in words. In the population, the moments are functions of the parameters that specify the population distribution. You have to find these functions. Then turn to the sample: the sample moments are set equal to these functions applied to the estimators that you're trying to find. Now solve these equations for the estimators.

So, can I say: E[X²] = σ² + μ², and hence the estimator of E[X²] is given by the sum of the estimators of σ² and μ²?
So, m₂ = σ̂² + μ̂²?
I think you're getting muddled - but why would you want to do any more theory? You have two equations in the two unknown estimators μ̂ and σ̂²; solve them.
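Writing m₁ and m₂ for the first and second sample moments, the two equations and their solution can be sketched as (assuming the population is parameterised by mean μ and variance σ²):

```latex
\begin{aligned}
m_1 &= \hat\mu \\
m_2 &= \hat\sigma^2 + \hat\mu^2
\end{aligned}
\quad\Longrightarrow\quad
\hat\mu = m_1, \qquad \hat\sigma^2 = m_2 - m_1^2 .
```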
#5
(Original post by Gregorius)
This is one of those instances where it's easy to get lost in notation (I can remember being confused by this forty years ago!) and it's best, perhaps, to put it in words. In the population, the moments are functions of the parameters that specify the population distribution. You have to find these functions. Then turn to the sample: the sample moments are set equal to these functions applied to the estimators that you're trying to find. Now solve these equations for the estimators.
Sorry, I got carried away with sports a bit.
Thank you - yes, the notation is awful and tough to deal with until you really understand it properly.
OK, that makes sense.

I think you're getting muddled - but why would you want to do any more theory? You have two equations in the two unknown estimators μ̂ and σ̂²; solve them.
Regarding this, I was just saying it in general. Of course, having read your method, I would follow it.
But in general, I was just saying that I know the variance is equal to E[X²] − (E[X])². So, suppose I hadn't read your method and am simply wondering:
I already know that the variance is a function of the population moments, i.e. (2nd population moment) − (1st population moment)². So, can I say that if I put an estimator on each of the variables, the equation is still valid?
I.e. an estimator for Var(X) is equal to (estimator for the 2nd population moment) − (estimator for the 1st population moment)², i.e. σ̂² = m₂ − m₁²?

OR: Is this not the general case, and do you recommend following the normal method of finding the population moments and sample moments, equating them, and then rearranging for the parameter estimators?
#6
(Original post by Chittesh14)
I already know that the variance is a function of the population moments, i.e. (2nd population moment) − (1st population moment)². So, can I say that if I put an estimator on each of the variables, the equation is still valid?
I.e. an estimator for Var(X) is equal to (estimator for the 2nd population moment) − (estimator for the 1st population moment)², i.e. σ̂² = m₂ − m₁²?
This gets things back-to-front! The point here is that you're trying to find estimators for the population parameters; in general, things won't come out anything like as easily as they do here. The estimator for variance that you're using in the sample hasn't popped out of thin air, and in fact here it's a biased (although consistent) estimator.

Perhaps the source of your confusion is that in this example, the population parameters for which you're trying to find estimators have a particularly simple relationship to the population moments. In particular, here we have the parameter σ² itself equal to the population variance, E[X²] − (E[X])².

I suggest that you try the exercise of finding the method of moments estimators for the parameters a and b of the uniform distribution on the closed interval [a, b], using the first two population moments. Then mess around with the estimates for a and b using some possible samples; you'll get a better idea of how this all works, and how it sometimes doesn't work.
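A hedged sketch of that exercise (the solution uses E[X] = (a+b)/2 and Var(X) = (b−a)²/12 for Uniform[a, b]; the function name and sample are illustrative):

```python
import math
import random

def mom_uniform(sample):
    """Method of moments for Uniform[a, b]: solve
    m1 = (a + b) / 2  and  m2 - m1^2 = (b - a)^2 / 12  for a and b."""
    n = len(sample)
    m1 = sum(sample) / n                  # first sample moment
    m2 = sum(x * x for x in sample) / n   # second sample moment
    half_width = math.sqrt(3 * (m2 - m1 ** 2))  # (b - a) / 2 estimate
    return m1 - half_width, m1 + half_width

random.seed(1)
data = [random.uniform(2.0, 8.0) for _ in range(10_000)]  # true a=2, b=8
a_hat, b_hat = mom_uniform(data)
```

One thing to notice when you "mess around" with samples: nothing forces â below the sample minimum or b̂ above the sample maximum, so the fitted interval can fail to contain some observations, one way this method "sometimes doesn't work".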
#7
(Original post by Gregorius)
This gets things back-to-front! The point here is that you're trying to find estimators for the population parameters; in general, things won't come out anything like as easily as they do here. The estimator for variance that you're using in the sample hasn't popped out of thin air, and in fact here it's a biased (although consistent) estimator.

Perhaps the source of your confusion is that in this example, the population parameters for which you're trying to find estimators have a particularly simple relationship to the population moments. In particular, here we have the parameter σ² itself equal to the population variance, E[X²] − (E[X])².

I suggest that you try the exercise of finding the method of moments estimators for the parameters a and b of the uniform distribution on the closed interval [a, b], using the first two population moments. Then mess around with the estimates for a and b using some possible samples; you'll get a better idea of how this all works, and how it sometimes doesn't work.
Yes, thank you! This is what I wanted to know, because it is done in the way I described in my notes, which caused my confusion.
I just wanted to know whether it would work like this every time, and I'm now satisfied because I know it won't, so I will stick to the original method you described.

Finally got it: in cases where the relationship is not so obvious, this shortcut would not work. Thank you!
I will try the exercise you suggested.