Monday 13 October 2008

Formal proof of the selection tendency in GMM difference

A few weeks ago, I gave an informal proof of the tendency for GMM difference estimation to select high a(i) parameters in the model y(i,t) = a(i).y(i,t-1) + b.x(i,t) + v(i,t), where i is a group indicator, t is a time indicator, and v is a zero-mean error, and where the model is estimated under the assumption that the a(i)s equal a constant a across groups. Here is a quick formal proof.
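To fix ideas, here is a minimal simulation of the data-generating process (all parameter values are assumed for illustration, and x and v are drawn as independent normals):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_panel(a, b, T, sigma=1.0):
    """Simulate y(i,t) = a(i)*y(i,t-1) + b*x(i,t) + v(i,t) for N groups.

    a is the vector of group-specific autoregressive parameters; x and v
    are independent standard normal draws (an assumption made here purely
    for illustration)."""
    a = np.asarray(a, dtype=float)
    N = len(a)
    x = rng.normal(size=(N, T))
    y = np.zeros((N, T))
    for t in range(1, T):
        y[:, t] = a * y[:, t - 1] + b * x[:, t] + rng.normal(0.0, sigma, size=N)
    return y, x

# Three groups with spread-out a(i); the pooled estimator treats them as equal.
y, x = simulate_panel(a=[0.3, 0.6, 0.9], b=0.5, T=200)
```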

The GMM difference estimator is

alpha = (Y(t-1)^ * M * Y(t)) / (Y(t-1)^ * M * Y(t-1))

where ^ denotes transposition, M is a weighting matrix (in the usual GMM-difference construction, built from the instruments), and Y(t) denotes the stacked vector, across groups and time periods, of the differences y(t) - y(t-1). Y(t) is connected to Y(t-1) by the relation
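In code, the estimator is a one-liner once the stacked difference vectors and M are in hand (a sketch; the construction of M from the instruments is outside the scope of this post):

```python
import numpy as np

def gmm_diff_alpha(dy_lag, dy, M):
    """Ratio-of-quadratic-forms form of the GMM difference estimator:
    (Y(t-1)^ M Y(t)) / (Y(t-1)^ M Y(t-1)), with dy_lag and dy the stacked
    difference vectors and M the weighting matrix."""
    return (dy_lag @ M @ dy) / (dy_lag @ M @ dy_lag)
```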

Y(t)^ = (alpha(1).Y(1,t-1)^, alpha(2).Y(2,t-1)^, ..., alpha(N).Y(N,t-1)^) + V(t)^

where Y(i,t-1) denotes the sub-vector of components of Y(t-1) specific to group i, alpha(i) is just a(i) from the model above, and V(t) is the correspondingly stacked vector of differenced errors v(i,t) - v(i,t-1) (the b.x term is dropped for simplicity).
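Equivalently, the stacked system can be written with a block-diagonal coefficient matrix; in LaTeX notation (a restatement of the relation above, with identity blocks sized to each group's number of stacked observations):

```latex
Y_t = D\, Y_{t-1} + V_t,
\qquad
D = \operatorname{diag}\!\big(\alpha_1 I,\ \alpha_2 I,\ \ldots,\ \alpha_N I\big).
```

Inserting this expression into the GMM estimator, we obtain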

alpha = alpha' + (Y(t-1)^ * M * V(t)) / (Y(t-1)^ * M * Y(t-1))

where alpha' is a weighted average of the set {alpha(i)}, with coefficients summing to unity (see the derivation below). As t --> infinity, the largest coefficients are expected to fall on the high-alpha(i) series, since those series have the largest y values and therefore dominate the quadratic forms. Thus alpha' tends to lie near the upper part of the alpha(i) range.
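To make the weights explicit, write P_i for the projector that keeps group i's block of the stacked vector and zeroes out the rest, so that the P_i sum to the identity and D = sum_i alpha(i).P_i. Then (a short derivation in LaTeX notation):

```latex
\alpha' = \frac{Y_{t-1}'\, M D\, Y_{t-1}}{Y_{t-1}'\, M\, Y_{t-1}}
        = \sum_i w_i\, \alpha_i,
\qquad
w_i = \frac{Y_{t-1}'\, M P_i\, Y_{t-1}}{Y_{t-1}'\, M\, Y_{t-1}},
\qquad
\sum_i w_i = 1,
```

where the weights sum to one because the P_i sum to the identity. Each w_i isolates group i's components of Y(t-1), which is exactly how the size of a group's y series translates into its weight.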

To complete the proof, note that the second term has a denominator converging in probability to a constant and a numerator converging in law to a normal distribution; by Slutsky's theorem the term therefore converges in law to the ratio of the two, and we can take expectations in the limit. The orthogonality condition E(M*V(t)) = 0 (the instruments embedded in M are orthogonal to the differenced errors) is the defining assumption of GMM difference, so the second term vanishes in expectation, the estimator is centred on alpha', and the proof follows.
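A rough Monte Carlo check of the conclusion (everything here is assumed for illustration: the a(i) values, the sample sizes, and the use of the Anderson-Hsiao IV estimator, the simplest member of the GMM-difference family, with y(i,t-2) instrumenting the lagged difference; one group is made mildly explosive so that the "largest y series" mechanism is visible at finite t):

```python
import numpy as np

rng = np.random.default_rng(1)

def anderson_hsiao(y):
    """Simplest GMM-difference estimator: IV using the lagged level
    y(i,t-2) as instrument for the lagged difference dy(i,t-1)."""
    z = y[:, :-2]              # instrument y(t-2), aligned with t = 2..T-1
    dy = np.diff(y, axis=1)    # first differences dy(t) = y(t) - y(t-1)
    dy_t, dy_lag = dy[:, 1:], dy[:, :-1]
    return np.sum(z * dy_t) / np.sum(z * dy_lag)

a = np.array([0.3, 0.5, 0.7, 1.03])   # assumed spread; mean is about 0.63
T, reps = 300, 200
estimates = []
for _ in range(reps):
    y = np.zeros((len(a), T))
    for t in range(1, T):
        y[:, t] = a * y[:, t - 1] + rng.normal(size=len(a))
    estimates.append(anderson_hsiao(y))
print(f"mean a(i): {a.mean():.3f}   median pooled estimate: {np.median(estimates):.3f}")
```

With these assumed values the pooled estimate settles near the top of the a(i) range rather than near their mean: the explosive group's y series dwarfs the others in the quadratic forms, which is the weighting effect the proof describes.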
