## Saturday, 29 November 2008

### Never enough microfoundations

My previous post suggests that declining returns to scale could be generated by a more fundamental productive process. Declining returns to scale would themselves be considered part of the microfoundations of many macromodels. The process of decomposing microfoundations into their own microfoundations could continue as far as you like, and the diversity of knowledge and its possible applied combinations would rise exponentially as a result. Economics, I think, should be treated as a science, with the same procedures of extensive and intensive analysis carried out to obtain the same benefits.

### Offsetting diminishing returns

My last post on the potentially higher-than-exponential product diversity generated by scientific innovation (which could equally apply to combinations of productive processes) suggests one possible route by which a productive factor might exhibit increasing returns to scale. Usually, factors are assumed to exhibit diminishing returns through exponential contraction, but if the usual decline is offset by increasing product diversity, then the greater-than-exponential growth may eventually dominate.

The argument often given for declining returns to scale is that the factor will have less of the other productive factors with which to work, so output will fall off. There is enough empirical evidence to support the claim, and it is intuitively plausible as well. But increasing product diversity could offset it, indicating that the rationale is not the most fundamental specification of productive interactions. A detailed microfounded production function embedding the two causes would help to clarify the situation, and would also suggest as yet unknown influences on returns to scale.

### Competing theories about technologically driven endogenous growth

I have indicated in the last few weeks the importance attached to technological innovation in economic growth. The ideas about how it occurs could also be used to describe the processes of academic research, so the theories should come quite naturally to a reflective analyst.

The most immediate theory about research views its characterising process as producing incrementally improved new products for a market with constant demand. Another theory emphasises the variety of capital goods produced by the market, while a related set of theories considers the characterising process to be the act of innovation lowering the future cost of learning. The last two theories stress that human knowledge is cumulative, in the same way that economic researchers attempt to learn from the work of other researchers, or find short cuts in their past studies to speed up future production. The theories tend to recognise the abstract scientific elements of innovation, but stress more the applied elements, where people have to invest a great deal of time in finding out how to work with the scientific processes.

The underlying nature of the science in scientific innovation is, I think, relatively downplayed. Science today tends to be intensive as much as extensive, trying to discover the underlying processes of known phenomena rather than looking for new ones. The result is that the number of possible scientific applications can be subject to exponential, or higher, increase. Suppose, for example, that we know about the existence of the atom, and are then informed that atoms consist of protons, electrons, and neutrons. We had just one object for use before the information; now we have three. If the subatomic particles are themselves split up into three components each, we have 3 x 3 = 9 components. The number of combinations of inputs in the first instance is two: an atom or not. In the second instance, it is 2^3: electron or not, proton or not, neutron or not. In the third instance, it is 2^(3^2). This greater-than-exponential growth occurs in chemistry and biology too, for example in the decomposition and rearrangement of DNA strands.
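The counting in this example can be sketched in a few lines of Python (the assumption that every component splits into exactly three parts per round is mine, generalising the atom example):

```python
def n_components(depth):
    # Each round of decomposition splits every component into 3 parts:
    # atom (depth 0) -> 3 subatomic particles (depth 1) -> 9 sub-components (depth 2)
    return 3 ** depth

def n_combinations(depth):
    # Each component can be included in an input bundle or left out,
    # so the number of possible input combinations is 2^(3^depth)
    return 2 ** n_components(depth)

print(n_combinations(0), n_combinations(1), n_combinations(2))  # 2 8 512
```

The jump from 8 to 512 combinations in a single round of decomposition is the greater-than-exponential growth described above.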

## Thursday, 27 November 2008

### Technology's relation to capital accumulation

There is a debate in economics about how much economic growth is caused by accumulation of capital (physical or educational), and how much by accumulation of technological knowledge, and whether one requires the other to have an effect.

Much of the debate takes place in theoretical domains, because the data, when analysed with conventional estimation techniques, can often be interpreted in more than one way. To give an example, national output might be increasing because of improvements in knowledge, with capital investment rising in proportion because the knowledge makes investment more profitable. Or output might be increasing because of capital accumulation, with skills building up as a result of the increased capital. Either way, we have data in which knowledge, capital, and output are all rising together, and the causes cannot be separated by the usual studies of common international datasets.

### TV program criticising the operation of aid

There was a UK television program this week criticising the operation of aid in Africa. You can find the web link here.

In my experience, many former aid workers criticise the operation of aid. Current aid workers tend to be less critical, oddly enough. The award for most critical analysis goes to former World Bank employees, who have often been very harsh in person and in print.

### Unbiased IV estimation combining an endogenous variable and its endogenous lag

Here is a method of instrumenting a regression equation if there are no fully exogenous variables, but it is possible to make certain assumptions about the relation between the available endogenous variables. These assumptions are more likely to be met by lagged variables.

The unbiased estimation of the parameter B in the matrix equation Y = X.B + e presents difficulties if the available instruments W are correlated with the error term, since under IV estimation we have B(est) = B + E((W'X)^-1.(W'e)), and the expectation will be non-zero since E(W'e) is non-zero.

We may be able to find two instruments V and W such that

Var((V'X)^-1.V'e) = a^2.Var((W'X)^-1.W'e) + independent error

and

E((V'X)^-1.V'e) = a.E((W'X)^-1.W'e) + independent error.

The conditions say that the two instrumental variables are related in their behaviour relative to the error term, and may plausibly apply when W is the original variable X and V is one of its lags, in which case a would probably be expected to be less than unity. We can regress Var(V'e) on Var(W'e) to get an unbiased estimate of a. The instrumental variable estimation using W on X then gives E(B(est, W)) = B + delta, where delta is the bias, and using V gives E(B(est, V)) = B + a.delta, so we can estimate the bias as (B(est, W) - B(est, V))/(1 - a(est)). This bias is asymptotically correct because of convergence in distribution of the numerator and in probability of the denominator. Thus, we can calculate the unbiased B as B(est, W) - bias(est).

Geometrically, the assumptions amount to allowing further projections of V on W beyond the usual IV ones. The assumptions no doubt could be weakened.

## Monday, 24 November 2008

### Technology's S-curve and parameter standard errors

Technology is often proposed to spread slowly at first, then become more widely accepted at a faster rate, then slow down in its spread as most people become familiar with it or its utility declines. Graphically, it follows an S-curve over time.

The S-curve may be modelled by fitting a time dependent function like the logistic function:

Technology use = a0/(1+exp{-(a1+a2*time)})

where the as are constants to be estimated. They can be estimated by non-linear least squares, which minimises the sum of (Observed values - predicted values)^2. The usual procedure is to approximate the predicted values by their Taylor series linearisation, or its numerical approximation, so we have to minimise the sum of

(Observed values - b0 - a0*f1(t) - a1*f2(t) - a2*f3(t))^2

where the fs are functions of t, time.

A complication arises in the estimation of standard errors. Because the regressors are functions of time, the elements of their cross-product matrix do not all converge at the usual least squares rate (proportional to the sample size), and so the usual estimates of the least squares standard errors will be divergent.

Accurate convergent standard errors could be calculated by working out the order of magnitude of the sums of the fs, and premultiplying the cross-product matrix by a suitable rebasing matrix. The precise forms of the sums may not be neat, but the orders of magnitude should be accessible without too much difficulty.
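A sketch of the fit itself, using scipy's generic non-linear least squares on illustrative noiseless data (the rebased standard errors discussed above would need the extra order-of-magnitude work; parameter values here are made up):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a0, a1, a2):
    # Technology use = a0 / (1 + exp(-(a1 + a2*t)))
    return a0 / (1.0 + np.exp(-(a1 + a2 * t)))

# Illustrative data following the S-curve exactly
t = np.arange(30, dtype=float)
y = logistic(t, 100.0, -5.0, 0.5)

# Non-linear least squares, starting from rough initial guesses
params, cov = curve_fit(logistic, t, y, p0=[90.0, -4.0, 0.4])
print(params)  # approximately [100, -5, 0.5]
```

With real data the residuals would not be zero, and the covariance matrix returned by curve_fit would face exactly the divergence problem described above.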

### Science news, globally and from Africa

Here is a website, Science Daily, with regularly updated news about all sorts of science. It is visually pleasing, and there is plenty of science news about the African continent. In addition to the usual suspects of HIV and malaria research, here are some other headlines:

Role Of Slave Trade In Evolution Of American Wild Rice Species

Sierra Leone: Collecting Health Data In Areas With No Power Supply

Unraveling Lion's Natural History Using Host And Virus Population Genomics

## Sunday, 23 November 2008

### Good features in a national production function

What would be good features to have in an analytical function which expresses the behaviour of national production? Here are a few quick suggestions:

1. The main accumulating determinants of growth are included

Accumulating determinants are things like physical capital and education.

2. It allows for different elasticities of substitution

The elasticity of substitution measures how much of one input is substituted for another if the first input becomes better value. It measures market flexibility.

3. It allows for different income distributions

4. It allows for different innovation potentials

The Cobb-Douglas production function is widely used, and meets criterion one. The CES production function generalises Cobb-Douglas and meets criterion two. Further generalisations might meet the other criteria.

Once a production function starts getting really complex, it is probably time to move into a full macroeconomic model, since the assumptions and interactions can be spelt out more exactly.
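The relationship between the two functions can be illustrated directly (a two-factor sketch; the parameter values are illustrative only):

```python
def cobb_douglas(A, K, L, alpha):
    # Y = A * K^alpha * L^(1-alpha): meets criterion 1, but fixes the
    # elasticity of substitution at one
    return A * K ** alpha * L ** (1 - alpha)

def ces(A, K, L, alpha, rho):
    # Y = A * (alpha*K^rho + (1-alpha)*L^rho)^(1/rho): elasticity of
    # substitution 1/(1-rho), meeting criterion 2
    return A * (alpha * K ** rho + (1 - alpha) * L ** rho) ** (1.0 / rho)

# As rho approaches 0, CES approaches Cobb-Douglas
print(cobb_douglas(1.0, 4.0, 9.0, 0.3))   # about 7.06
print(ces(1.0, 4.0, 9.0, 0.3, 1e-6))      # about 7.06
```

Varying rho away from zero changes how easily capital and labour substitute for one another, which is the extra flexibility criterion two asks for.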

### The AR(1) model in technology transfers

The AR(1) model - that is, y(t) = a*y(t-1) + other terms + an error term - occurs in models of technology transfer between and within countries. It is also widely used in other areas of economics such as growth theory and pensions.

In technology transfer, it is known as the Gompertz model. The coefficient a measures diffusion speed, since the equation can be rewritten

y(t)-y(t-1) = (a-1)*y(t-1) + other terms + error,

so that the increase in y will be larger when a is larger.
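A deterministic sketch of the diffusion equation, ignoring the error term and collapsing the other terms into a single drift (function and parameter names are mine):

```python
def simulate_diffusion(a, y0, drift, periods):
    # y(t) = a * y(t-1) + drift: a larger coefficient a gives a
    # faster increase in y, as the rewritten equation shows
    ys = [y0]
    for _ in range(periods):
        ys.append(a * ys[-1] + drift)
    return ys

print(simulate_diffusion(1.1, 10.0, 1.0, 3))  # roughly [10.0, 12.0, 14.2, 16.62]
```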

As with growth theory, using a lagged term presents some problems and it would be a good idea to work towards finding more precise determinants which reduce its significance.

Given the ubiquity of the AR(1), I think I will write up my earlier work on its estimation with GMM. Some of it has been put on this blog.

## Thursday, 20 November 2008

### Soviet economic performance and the production functions for growth

I read a paper this week on the economic performance of the Soviet Union during its history, entitled "Soviet Economic Decline: Historical and Republican data". It's available through Google Scholar, if you are interested. Some of my students from transition countries ask me about the USSR's performance relative to capitalism, so hopefully this will bolster my knowledge - I was a little ignorant as the USSR dissolved twenty years ago and many economists had dismissed communist economics long before and looked for more current challenges. Examining the USSR's performance also helps to answer questions about the origins of economic growth, and how much can be attributed to the different productive factors.

The authors use Western and Soviet data to find that Soviet growth declined from extremely high rates in the 1950s to low rates in the 1980s, despite high investment rates and education. The authors find that the physical capital stock grew steadily, but its return declined sharply, so that the Soviet physical capital to output ratio was very high by world standards in the 1980s. The estimates of productivity growth depend on the source of data, although it is at best low after the 1950s, assuming a Cobb-Douglas production function. Industrial productivity growth remained positive until the 1980s, and non-industrial productivity growth was negative, most of all in agriculture.

The authors perform non-linear least squares estimations to get the parameters of a CES production function, replacing the Cobb-Douglas function to see if they get other explanations than declining productivity growth. They find, with three of the four datasets, that the CES parameter is low, meaning that labour and capital do not substitute easily for each other if their marginal returns change. The authors give a possible interpretation where capital accumulation would normally be replacing labour and so increasing its own return, but this was not happening in the USSR, perhaps because the type of capital was not labour-replacing. Other explanations are possible; for example, if capital was so unproductive in a market economy, more labour would be hired in its place, but Soviet labour markets were not flexible.

The authors emphasise the explanations of the CES production function estimations in preference to the competing explanation of the Cobb-Douglas function, and are probably right to do so given that Cobb-Douglas is a limiting case of the CES function. More generally in growth theory, the Cobb-Douglas function is widely used, but its implication that capital and labour freely substitute for each other is a major one given the significance of market rigidities in constraining growth, and the CES function may lead to different interpretations of growth performance. The authors themselves point out the relevance of their observations for Asian countries with rapid growth through capital accumulation (although the causes of their growth are debated).

### The meaning of productivity

Productivity is a commonly used term among economic commentators, and even more frequently used in academic writings. Actually defining and interpreting it is difficult, however.

Here's a quick definition of labour productivity: it is national output divided by the size of the workforce. The definition is the obvious one, but does leave some questions open. If this quantity depends on labour itself, for instance if national output rises disproportionately quickly as the workforce gets larger, then to interpret the quantity, one may also want to specify the size of the workforce to express the full relation between workers and their production.

Then there's the question of interpretation. The phrase high labour productivity implies that workers are smarter or more diligent than those with lower productivity, and if they are, then productivity will usually be higher, other things being equal. However, economic output depends on many things besides the intrinsic qualities of workers, such as the amount of physical and financial capital in the economy. So a lazy worker in an advanced economy will generally have far higher productivity than a worker in a less advanced economy, because the advanced economy has greater non-labour productive assets.

To avoid some of the problems, economists have looked for a definition of productivity which does not depend on the common factors of production at all. If the economy is known to produce goods in such a way that output equals a constant times capital times labour force size, then we can divide output by capital times labour to get the constant, which we can label productivity. It measures the productivity of the productive factors, so that it could be considered to measure the productivity of the total economy.
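In the simple multiplicative case just described, the productivity constant falls straight out of the data (a sketch only; real studies would use a more general production function):

```python
def residual_productivity(Y, K, L):
    # If output is produced as Y = A * K * L, the measured
    # productivity is the residual A = Y / (K * L)
    return Y / (K * L)

print(residual_productivity(12.0, 3.0, 2.0))  # 2.0
```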

There are some difficulties thrown up here, too. Economists do not know exactly what production function occurs in the economy, and any function used is an approximation. It may not even have the correct functional form. The constant will often not really be constant but depend on the factors of production, so this definition encounters the same problems of factor dependency discussed with labour. Since we are admitting that we only have estimates of the production function, and our productivity measure is inexact, we could have many different measures of productivity.

If we had a perfect knowledge of the production function in terms of the specified factors, then productivity would be interpreted as the contribution to growth of all productive factors which are not explicitly stated. Thus, once we include more productive factors, the economy's estimated productivity would vary.

Let's summarize. In commentary, productivity tells as much about the productive factors to which it doesn't refer as the ones it does, and in economic analysis, it measures how much we do not know about production.

## Monday, 17 November 2008

### Is world aggregate demand sufficient for growth?

Is world aggregate demand sufficient to keep the world economy growing? As the world's industrial economies enter a downturn and people there have less money to spend, the demand for goods from developing countries may not be enough to purchase all the goods produced in the world. Those countries may then reduce production, leading to lower incomes there, further reductions in demand, and a worsening downturn.

The downward demand spiral can be analysed by looking at marginal propensities to consume. When developed country consumers were spending large proportions of their incomes, many goods produced around the world could be sold relatively easily. As those consumers reduce their expenditures, the goods become more difficult to sell, because purchasing power is held by people with lower propensities to consume out of their income, like the very rich in developed countries, or people in high-investment developed countries. If producers scale back production in response, then developed country consumers could have lower incomes from their own jobs, and the situation repeats with the same low aggregate demand propensities, except that the economy has shrunk by a certain percentage. The economy could keep on going like this, until there is no economy at all.
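The repeated shrinkage can be sketched as rounds of spending, where each round passes on only the marginal propensity to consume of the previous round's income (a stylised sketch, ignoring government intervention and the compositional effects discussed below):

```python
def cumulative_income_change(initial_change, mpc, rounds):
    # Each spending round generates mpc times the previous round's
    # income change; the sum approaches initial_change / (1 - mpc),
    # the standard multiplier result
    total = 0.0
    change = initial_change
    for _ in range(rounds):
        total += change
        change *= mpc
    return total

# A -100 demand shock with mpc = 0.5 eventually shrinks income by about 200
print(cumulative_income_change(-100.0, 0.5, 60))  # about -200
```

A lower marginal propensity to consume gives a smaller multiplier, so a shift of purchasing power towards low-consumption groups dampens each successive round rather than the initial shock itself.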

Of course, this shrinkage hasn't happened before, because of government intervention to increase marginal propensities to consume (which is what borrowing to spend does), and also because, as people get poorer, they tend to spend more of their incomes on essentials like food. This final catch-net for the economy does not redistribute wealth back to the developing country consumers, at least initially, but rather increases the consumption propensity among those who currently consume little.

Disaggregating aggregate demand helps to show whether the downward spiral could occur. Developing country exports to developed countries include articles like soft toys and other goods which would be considered luxuries in much of the world, so the risk of a gap between world demand and world supply of goods is increased.

Ignoring the exact composition of aggregate demand, the question can be restated more broadly and quantitatively as: what global marginal propensity to consume will support optimal growth? The requirement is that investment is as high as possible, consistent with maintaining its productivity and with all goods produced being sold. The first part of the requirement is that the supply side of the economy makes as much as possible, and the second part is that the demand side wants it. I think there are implicit assumptions about market operation and investment incentives built into the requirement, but I haven't stated them.

## Thursday, 13 November 2008

### In praise of econometrics

Econometrics is the science which analyses economic quantities. Economic models are often abstract, and econometrics is the means of testing whether they bear any relation to the real world. Sometimes one reads about a glut of economists in developing countries and a shortage of scientists, and economics itself occasionally presents results saying much the same thing. Econometrics, with its claim to be a science and its concrete applications, is a way of making economics more useful.

The basic methods of econometrics such as least squares analysis are used in disciplines such as biology and chemistry, where their use probably predates their use in economics. Although some methods of econometrics are specifically intended for economic applications, much new research in econometrics has potential spillover benefits for other sciences as well.
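As a minimal illustration of the least squares method mentioned above, here is a fit on made-up data (a sketch, not any particular econometric study):

```python
# Ordinary least squares fit of y on x with simulated data: the true
# relationship is y = 2 + 3x plus noise, and the estimated coefficients
# should recover something close to (2, 3).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 2.0 + 3.0 * x + rng.normal(0, 1, size=200)   # true intercept 2, slope 3

X = np.column_stack([np.ones_like(x), x])        # design matrix with constant
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # solves min ||y - X b||^2

print(beta)  # close to [2, 3]
```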

So if someone can tolerate the maths, which is not overly demanding and can be as simple or advanced as you like, and wants to pick up transferable scientific skills, and wants to respond to the "glut of economists" charge - econometrics may be for them.

### Sexual equality in Burundi and Rwanda

Among the more unusual features of Burundi and Rwanda is women's prominence in public life. The countries had women in prime ministerial positions in the early 1990s, making them pioneers of female political participation in Africa, and women comprise half of the current Rwandan parliament, a world record. Their participation does not seem to be associated uniquely with any political party or donor pressure.

Private life, at least in Rwanda's capital Kigali and southern town Butare, also seems to be characterised by relative freedom for women, who are visible in trade and employment at junior and managerial level. Local and foreign females can be seen travelling alone in public transport and on the streets without evident continuous harassment, although I may have missed it. The relative sexual freedom in the countries also seems to be enjoyed by gays, who are reportedly not criminalised in Burundi unlike much of Africa. I am unsure about their status in Rwanda.

I do not know why sexual rights are relatively advanced in the two countries. Catholicism is the main religion, but one can walk around the cities without seeing any religious symbols at all, so perhaps religious strictures on women do not apply as strongly in the countries as elsewhere. They were historically occupied by German and then Belgian colonists, whose legal enforcement of their moral codes may have been less thoroughgoing than in British controlled lands, or they may have been less preoccupied with sexual matters.

### Possible means of classifying estimation methods

My Monday post called for a classification of estimation and testing methods according to an exhaustive set of performance criteria. I thought a little about it in the last few days, and the following features of statistical analysis may help to make such a classification viable:

1. The sum of independent identically distributed variables tends to a normal distribution. This result is the Central Limit Theorem, and applies to more general sets of variables and random series.

2. Most estimation methods can be represented by the Generalized Method of Moments.

3. There are maybe half a dozen genuinely different ideas in mainstream econometrics, like minimisation of the expectation-observation gap, looking at patterns of residuals, and spectral analysis.

So the complete classification could be based on the limited number of combinations of these different features. The founding GMM proofs, which combine generalised estimation methods with asymptotic analysis of normally converged variables, are a step towards the goal - I have praised the GMM in past posts.
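Point 1 can be checked by simulation; the sketch below, with arbitrary sample sizes, standardises sums of uniform draws and confirms they behave like a standard normal variable:

```python
# Simulation check of the Central Limit Theorem: standardised sums of
# i.i.d. uniform draws are approximately N(0, 1).
import numpy as np

rng = np.random.default_rng(42)
n, reps = 50, 20000
draws = rng.uniform(0, 1, size=(reps, n))   # each draw: mean 1/2, variance 1/12

sums = draws.sum(axis=1)
z = (sums - n * 0.5) / np.sqrt(n / 12.0)    # standardise each sum

print(round(z.mean(), 2))                   # ≈ 0
print(round(z.std(), 2))                    # ≈ 1
print(round((np.abs(z) < 1.96).mean(), 2))  # ≈ 0.95, as for N(0, 1)
```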

### IMF projections on African growth in the global economic downturn

The IMF has published its projections for Africa during the global economic downturn. It anticipates 5.5 percent growth this year, and 5.1 percent growth next year.

The rates are still quite high, although lower than developing Asia's. The downgrade from the previous IMF forecasts is however larger for Africa than anywhere else, at 0.6 percentage points this year and 1.2 percentage points next year. That is to say, Africa is expected to be hit harder by the downturn than anywhere else in absolute percentage point terms.

## Monday, 10 November 2008

### New European diplomacy in the DRC and Rwanda

The French and UK governments are acting in concert over the recent fighting in the Democratic Republic of Congo, with their foreign ministers touring the region in an attempt to bring diplomatic pressure to bear on the belligerents and other concerned parties, including Rwanda, which has shared goals with one of the warring groups. The show of European unity is noticeable, as Rwanda has strongly aligned itself with the UK and against France, to the extent of recently changing its official national language. It receives large financial support from the UK.

The tone of presentation in the UK media has also shifted, with Rwanda subject to more negative reporting than previously for its role in the conflict. There has been a relative suspension of criticism of France's regional role, whose acts in the 1994 civil war and ethnic slaughter in Rwanda have often been excoriated by UK commentators. Incidentally, the 1999 UN report into the 1994 events, discussing public statements at the United Nations, makes for a much less comfortable comparison between the UK and France, or even the UK and US.

The attention to the present DRC conflict, rather than the conflict itself, is the change in recent weeks. Hopefully the diplomacy will bring some results.

### Criteria for judging estimation procedures

There are many criteria for judging how good an estimation procedure is: bias, consistency, speed of convergence, behaviour under misspecification, applicability, ease of use, and so on. It would be nice if someone pooled the criteria values for recently emerged procedures like system GMM and, even better, produced an overarching criterion which embedded the others, including ones not yet devised. A similar pooling for the associated test statistics would be good, too.

### If your industrial policy is not getting international technology, it isn't working

The more I read about international technology transfer, the more important it seems for economic development. My weekend reading was a review of policies towards technology transfer in developing countries. Procedures like encouraging licensing of technologies to local firms, permitting foreign ownership of firms up to fifty percent, and promoting joint ventures are plausibly good means of accelerating transfers to local firms subject to certain conditions, and have been widely practised in East Asian tiger economies.

The supporting empirical work, generally undertaken at company level, is still emerging and nuanced, but indicates that international involvement aimed at getting technology can increase productivity noticeably. The policy works best if local education is good, domestic companies can form to take advantage of spillovers, and intellectual property rights are strong enough that multinational companies are not frightened off.

Coupled with macroeconomic evidence, some posted here on Great Lakes Economics, that technology improvement affects growth to a similar degree as capital and educational accumulation, the message to developing countries is: if your industrial policy is not getting international technology, it isn't working.

### Testing for stability of the AR(1) autoregressive parameter across panel data groups

Previous posts have looked at the GMM estimation of the AR(1) process

y(i,t)=a(i)*y(i,t-1) + zero mean error

where i is a group indicator and t is time, and the estimation assumes that a(i) is constant across groups. I showed that GMM estimators tend to estimate a value for *a* near the top of the a(i) range.

Testing for equality of subgroup *a* parameters using the Chow test is therefore misleading, in that what is compared is two parameters near the top of each subgroup range. The Sargan and Hausman tests may identify the misspecification, as they examine residual patterns, but as all groups are pooled in their testing (from memory) they may not be very powerful, since some groups will exhibit positive serial autocorrelation and others negative serial autocorrelation in their residuals, as previously shown on this site.

Here is a test which should work. Perform the GMM estimation, then estimate an AR(1) on the residuals for each group by OLS. Under the null of a(i) constancy, the residual AR(1) parameters should be asymptotically zero mean and normally distributed with an unbiased estimated correlation. Normalise to N(0,1), then sum their squares to get a chi squared distribution on i degrees of freedom. Reject a(i) constancy if the test statistic is too large at a set level of chi squared significance.
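A rough simulation sketch of the test follows. Pooled OLS stands in for the GMM step, and the sqrt(T) normalisation assumes the per-group residual AR(1) estimates are approximately N(0, 1/T) under the null; both are simplifying assumptions rather than the exact procedure described above.

```python
# Sketch of the proposed residual-based test under the null that a(i) is
# constant across groups. Pooled OLS replaces GMM, and rho_i * sqrt(T) is
# treated as roughly N(0, 1) -- simplifying assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
N, T = 10, 200
a_i = np.full(N, 0.5)                       # null: a(i) constant across groups
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = a_i * y[:, t - 1] + rng.normal(size=N)

# Pooled estimate of a (one coefficient for all groups, no intercept)
y_lag, y_cur = y[:, :-1].ravel(), y[:, 1:].ravel()
a_hat = (y_lag @ y_cur) / (y_lag @ y_lag)

# Residual AR(1) coefficient per group, normalised and squared
stat = 0.0
for i in range(N):
    e = y[i, 1:] - a_hat * y[i, :-1]        # group i residuals
    rho = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])
    stat += (rho * np.sqrt(T - 1)) ** 2     # rough N(0, 1) normalisation

print(round(stat, 1))   # compare with 18.31, the 5% chi-squared critical
                        # value on 10 degrees of freedom
```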

## Thursday, 6 November 2008

## Monday, 3 November 2008

### Literature reviews - keeping them manageable

It can be overwhelming if, on preparing for a report or paper on a new topic, one looks up the available literature and finds several hundred or thousand papers on a similar theme. Here's what I do to keep the literature assessed relevant and manageable. It might have some merit.

Before I start I pose the basic research questions, and split the question several ways according to likely avenues of interest. Then I look at the probable structural contents of my research paper - recall that I previously mentioned that many papers take a rough form of

{Questions / importance / lit review / plan / theory / specification / empirical specification / inputs / outputs / interpretation / conclusion}

If the questions have been specified, then we may have a rough idea of the importance, theory, specification, empirical specification, and inputs, so these can be noted down in a sentence each.

Then comes the literature survey. Academics and students often have access to university libraries or online literature sources such as ScienceDirect or Google Scholar (the last being free), and these can be used to find relevant literature. Let's say that there are two hundred papers whose titles are a bit like the paper's title. We can immediately abandon those whose questions are entirely different from ours, and we can also abandon those which differ from ours in the content of most of the structural parts. So if our paper is empirical with a specification, empirical method, inputs, and so on, we might want to jettison papers which are exclusively theoretical.

Hopefully, this will bring the literature down to a manageable size, numbering in the dozens. The contents of each of the papers can then be compared against the structural elements, perhaps in a grid on paper or in one's mind. Many papers agree on all of the contents except for the data input used, for example, so these papers will show a high degree of similarity in the grid. The grid organisation observed after all the papers have been reviewed should lend itself to an overall literature review.

The above procedure is mechanical, and may replicate the more intuitive approach used when someone is completely familiar with the literature and so can draw up a literature review almost without thinking. The procedure also has an advantage that it encourages the author to learn new theoretical and analytical tools, if they repeatedly occur in the literature, which will be relatively few in number by construction.
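The filtering and grid steps above might be sketched as follows; the paper records and field names are entirely hypothetical:

```python
# Hypothetical sketch of the literature-filtering procedure: discard papers
# asking entirely different questions (and, for an empirical paper, purely
# theoretical ones), then lay out the survivors in a grid against one
# structural element.
papers = [
    {"title": "A", "question_related": True,  "empirical": True,  "data": "panel"},
    {"title": "B", "question_related": False, "empirical": True,  "data": "survey"},
    {"title": "C", "question_related": True,  "empirical": False, "data": None},
    {"title": "D", "question_related": True,  "empirical": True,  "data": "cross-section"},
]

# Step 1: abandon papers outside the research question or structure
shortlist = [p for p in papers if p["question_related"] and p["empirical"]]

# Step 2: a simple grid of the survivors against one structural element
for p in shortlist:
    print(p["title"], "|", p["data"])
# A | panel
# D | cross-section
```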

### Technology diffusion literature

There is a large literature on technology diffusion across countries, and what factors facilitate it. Much of the empirical literature concentrates on microeconomic factors, that is to say, why individual companies adopt foreign technology. So the variables studied relate to perceived ease of adoption, perceived utility in use, and so on. Estimations often rely on surveys of firms. There is an apparently smaller literature on macroeconomic influences, but the work is still important, as governments find it easier to implement changes in interest rates, for example, than to effect cultural shifts in attitudes to risky new technologies.

### Macroeconomic determinants of technology diffusion

I am looking at the macroeconomic determinants - things like education and international factors - that influence the spread of technology to a country. There are indications from other people's and my own work that technology diffusion is as important for growth in developing countries as capital or educational accumulation.

I set up a preliminary model by looking for available macroeconomic proxies for likely determinants of transfer to a country, whether microeconomic or macroeconomic in origin. Here are some candidates:

- Lags in technology per capita in a leading technology country

- Lags in technology per capita in the country itself

- Lags in telephones or other reference technology per capita

- Lags in saving per capita

- Lags in education per capita

- First lag in GDP per capita

- First lag in interest rates

- First lag in openness

- First lag in government size

- Population

The first five variables may be lagged many times, so as to capture the effect of experience of and exposure to the technology. Finding a decent panel data estimation method for multiple lags might be a problem, however. A multiply lagged equation can be brought into the form of a first order vector autoregression, and then vector versions of the main GMM estimators applied to it, but the conditions will be very stringent.
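The transformation mentioned above can be sketched via the companion form (the coefficient values below are arbitrary):

```python
# Companion-form trick: an AR(p) in a single variable,
# y_t = a1*y_{t-1} + ... + ap*y_{t-p} + e_t, stacks into a first-order
# vector autoregression z_t = A z_{t-1} + u_t, where
# z_t = (y_t, y_{t-1}, ..., y_{t-p+1}).
import numpy as np

def companion_matrix(coeffs):
    """Build the VAR(1) companion matrix from AR(p) coefficients a1..ap."""
    p = len(coeffs)
    A = np.zeros((p, p))
    A[0, :] = coeffs                 # the AR equation itself
    A[1:, :-1] = np.eye(p - 1)       # identities shifting the lags down
    return A

A = companion_matrix([0.5, 0.3, -0.1])
# Stability of the AR(3) is equivalent to all companion eigenvalues lying
# inside the unit circle.
print(np.all(np.abs(np.linalg.eigvals(A)) < 1))  # True
```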
