I have been running some Monte Carlo simulations to see the bias produced by estimating an AR(1) model, x(t) = a + p·x(t-1) + e(t) with e(t) ~ N(0,1) and a and p constants, using MLE. The MLE estimates coincide with the OLS estimates (conditional on x(0)), so the estimate of p inherits the small-sample bias of OLS with a lagged dependent variable, and the bias is downwards near p=1. The proof follows quickly from the OLS formula: the regressor x(t-1) is correlated with earlier error terms, so the usual unbiasedness argument fails in finite samples.
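As a minimal sketch of the setup, the following simulates one sample path of the AR(1) process above and computes the OLS (equivalently, conditional MLE) estimates of a and p. The starting value x(0)=0 and the function names are my own illustrative choices, not taken from the post.

```python
import random

def simulate_ar1(a, p, T, x0, rnd):
    """One sample path of x(t) = a + p*x(t-1) + e(t), e(t) ~ N(0,1)."""
    x = [x0]
    for _ in range(T):
        x.append(a + p * x[-1] + rnd.gauss(0.0, 1.0))
    return x

def ols_ar1(x):
    """Regress x(t) on a constant and x(t-1); OLS = MLE conditional on x(0)."""
    y, lag = x[1:], x[:-1]
    n = len(y)
    my, ml = sum(y) / n, sum(lag) / n
    cov = sum((l - my_l) * (v - my) for l, v, my_l in zip(lag, y, [ml] * n))
    var = sum((l - ml) ** 2 for l in lag)
    p_hat = sum((l - ml) * (v - my) for l, v in zip(lag, y)) / var
    a_hat = my - p_hat * ml
    return a_hat, p_hat

rnd = random.Random(8)
x = simulate_ar1(a=4.0, p=0.7, T=30, x0=0.0, rnd=rnd)
a_hat, p_hat = ols_ar1(x)
```

The same slope formula, cov(x(t-1), x(t)) / var(x(t-1)), is where the bias enters: the lagged regressor is not independent of the error sequence.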
In the simulations, there is clear downwards bias for short time periods near p=1, even with large numbers of simulations. For long time periods (around 1000 for quite small discrepancies), the theoretically predicted convergence was observed. "Short" time periods are not that short: even simulations with 50 time periods gave a clearly biased estimate of p. Everything was also very sensitive to the starting value x(0) and to the intercept a, with higher values tending to increase the estimate of p. With a hundred simulations, 30 periods, a=4, p=0.7, and a seed of 8, the mean MLE estimate across the simulations was 0.83, with a range of (0.42, 0.95). With the same settings except a=3, the mean was 0.62 with range (0.19, 0.93). These are the sort of parameter values which occur in economic growth models, so the biases are relevant for reported estimates.
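A Monte Carlo experiment of the kind described can be sketched as below. This is my own reconstruction, not the author's code: the starting value x(0)=0 is an assumption (the post does not state it), and since a different random number generator and seed semantics are involved, the numbers will not reproduce the reported 0.83 and (0.42, 0.95) exactly.

```python
import random
import statistics

def mc_p_estimates(a, p, T, x0, n_sims, seed):
    """Mean, min, and max of the OLS/MLE estimate of p over n_sims AR(1) paths.

    Assumptions: x(0) = x0 is fixed across replications (not stated in the
    post), and e(t) ~ N(0,1) as in the model definition.
    """
    rnd = random.Random(seed)
    ests = []
    for _ in range(n_sims):
        x = [x0]
        for _ in range(T):
            x.append(a + p * x[-1] + rnd.gauss(0.0, 1.0))
        y, lag = x[1:], x[:-1]
        my, ml = statistics.fmean(y), statistics.fmean(lag)
        cov = sum((l - ml) * (v - my) for l, v in zip(lag, y))
        var = sum((l - ml) ** 2 for l in lag)
        ests.append(cov / var)  # OLS slope = estimate of p
    return statistics.fmean(ests), min(ests), max(ests)

# Settings from the post: 100 simulations, 30 periods, a=4, p=0.7, seed 8.
mean_p, low_p, high_p = mc_p_estimates(a=4.0, p=0.7, T=30, x0=0.0,
                                       n_sims=100, seed=8)
```

Rerunning with a=3, or varying x0, lets one probe the sensitivity to starting values that the post reports.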