Wednesday 30 April 2008

Disproportionately useful theories #6: Generalised Method of Moments

It's been a while since my last "disproportionately useful theories" entry (and that was wrongly called number four when it was five, but that's all sorted now). The series returns with one for the econometricians, focussing on the Generalised Method of Moments, or GMM as it is known.

The predecessor to GMM was the method of moments. It works like this: if you have a random numerical outcome which depends on some underlying quantity, and you can work out the expected value of some feature of the outcome in terms of that quantity, then you can equate the expected value with the observed value and solve for the quantity. Roughly speaking, the "moment" is the expected value expressed in terms of that something else.

An example might make the statement clearer. If a die is biased - say its six side is bigger than the others - then the chance of rolling a six (this is the random numerical outcome) depends on the ratio of that side's area to the total area of all the sides (this is the something else); perhaps the chance is simply equal to that ratio. From this we can calculate the expected average score from ten rolls. Then if the die has been rolled ten times and we have the average score (this is the observed value), we can estimate the ratio of the areas by equating the expected and observed values (this is a moment equation). Admittedly we could just have measured the die's sides, but in economic measurements we usually cannot measure the "something else" directly, only the outcomes.
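The dice example above can be sketched in a few lines of Python. Everything here is a hypothetical setup for illustration: I assume the six face has probability p and the other five faces share the rest equally, so the expected score is 3(1 - p) + 6p = 3 + 3p, and the moment equation gives p = (mean - 3)/3. The function name and the chosen value of p are my own inventions.

```python
import random

def estimate_six_probability(rolls):
    """Method-of-moments estimate of the probability p of rolling a six,
    assuming the other five faces share the remaining probability equally.
    Then E[score] = 3(1 - p) + 6p = 3 + 3p, so p = (mean - 3) / 3."""
    mean = sum(rolls) / len(rolls)
    return (mean - 3) / 3

# Simulate a biased die with a hypothetical true p of 0.3
random.seed(42)
p_true = 0.3
weights = [(1 - p_true) / 5] * 5 + [p_true]  # faces 1..5, then 6
rolls = random.choices(range(1, 7), weights=weights, k=100_000)
print(estimate_six_probability(rolls))  # close to 0.3
```

With only ten rolls, as in the post's example, the estimate would of course be much noisier; the large sample here is just to show the estimator homing in on the truth.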

The outcomes might also carry other information which depends on the size of the six side - perhaps the number of ones compared with the number of sixes in ten rolls. If this too has an expected value that can be compared with an observed value, it yields a second estimate of the size of the six side. The two estimates might not agree, and that's where GMM comes in. Under GMM, you choose the estimated size which minimises
(expected - observed value for the first moment)^2
+ b*(expected - observed value for the second moment)^2
where b is a constant reflecting the relative importance of the two moments in getting a final estimate.
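That weighted minimisation can be sketched concretely. As an assumption of mine (not the post's exact setup), I take the first moment to be the average score, with expectation 3 + 3p as before, and the second to be the fraction of sixes, with expectation p. The objective is quadratic in p, so the minimiser has a closed form; the function name and the value b = 1 are illustrative choices.

```python
import random

def gmm_estimate(rolls, b=1.0):
    """GMM-style estimate of p from two moment conditions (a sketch,
    under the assumptions stated in the text above):
      moment 1: sample mean of the scores should equal 3 + 3p
      moment 2: observed fraction of sixes should equal p
    Minimises (m1 - (3 + 3p))^2 + b*(f6 - p)^2 over p."""
    n = len(rolls)
    m1 = sum(rolls) / n                        # observed average score
    f6 = sum(1 for r in rolls if r == 6) / n   # observed fraction of sixes
    # The objective is quadratic in p; setting its derivative to zero
    # gives the closed-form minimiser:
    return (3 * m1 - 9 + b * f6) / (9 + b)

# Hypothetical biased die: six has probability 0.25, the rest share equally
random.seed(1)
p_true = 0.25
weights = [(1 - p_true) / 5] * 5 + [p_true]
rolls = random.choices(range(1, 7), weights=weights, k=50_000)
print(gmm_estimate(rolls, b=1.0))  # close to 0.25
```

Varying b shifts the answer between the two single-moment estimates: a large b trusts the fraction-of-sixes moment more, b near zero trusts the average score.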

This is a really rough description, and to stop the post feeling too much like a lecture, here's the point. GMM estimates were shown to be very good estimates, and many of their important statistical properties were worked out by a single author, Lars Peter Hansen, in a 1982 paper. He showed that if you choose the moment equations and the weighting correctly, you get exactly the same estimates and statistical properties as many other estimation methods, such as ordinary least squares. So the theories about GMM are widely useful.
