Tuesday 30 December 2008

Generalisation and linkages in econometrics

Growth econometrics might benefit from linking new research with past research more thoroughly than it does at present, and from establishing new abstract generalisations over existing research. I have written before on GLE about how there are a handful of overarching theories which together encompass most of econometrics.

I am thinking particularly about testing and estimation of the model Y = X.B + error, where Y is a vector of current growth rates by year, X is a transposed vector of lagged determinants, and B is a coefficient vector. X can include constants and lagged terms from Y. There have been recent developments in testing whether the B term is constant across countries, and in estimating and comparing country values when the terms are not constant.
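
As an illustration of the kind of test involved, here is a minimal sketch, on invented data, of comparing a pooled estimate of B against country-specific estimates with a Chow-style F statistic. It is a sketch of the general idea, not of any specific published test, and all names and parameter values are assumptions.

import numpy as np

rng = np.random.default_rng(0)
countries, years = 20, 30
x = rng.normal(size=(countries, years))
b_true = rng.normal(0.5, 0.1, size=countries)   # B varies slightly by country
y = b_true[:, None] * x + rng.normal(0, 1, size=(countries, years))
# Restricted model: one common coefficient B for all countries
b_pooled = (x * y).sum() / (x * x).sum()
rss_restricted = ((y - b_pooled * x) ** 2).sum()
# Unrestricted model: a separate coefficient for each country
b_i = (x * y).sum(axis=1) / (x * x).sum(axis=1)
rss_unrestricted = ((y - b_i[:, None] * x) ** 2).sum()
# Chow-style F statistic for constancy of B across countries
q = countries - 1
df = countries * (years - 1)
F = ((rss_restricted - rss_unrestricted) / q) / (rss_unrestricted / df)
print(F)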

Enough already. I shall return to this theme when the time is right.

Growth effects of technology specific to a capital-labour ratio

The paper Appropriate Technology and Growth by Basu and Weil presents a model whose outcomes seem to describe two of the main behaviours of country growth in the world economy. The first is that growth can be highly non-linear and subject to sudden accelerations and slowdowns. The second is that countries can form into different convergence clubs, meaning that incomes or growth rates tend to converge within separate groups of countries, and that the rates of separate groups may not coincide.

The paper assumes that technology is specific to a particular ratio of national capital to national labour, and that a country produces improved technology for a band around its ratio. So a late-coming country could save a great deal, reach the band of innovation of the world's technological and wealth leaders, and then surf along in their wake receiving all their technological benefits, even enjoying a higher level of consumption than they do.

The paper is clean in its equations and logic, visually appealing, and reflects observed behaviour. Since I like it, here are some unsolicited comments about its scope.

The capital to labour ratio could measure many things. In theoretical work, the assumption that the capital to labour ratio is the key determinant is tolerable; in empirical work it could become a real headache.

The paper is retrospective, and it is easier to produce perfect models of past events than future ones. The original work proposing the importance of capital to labour ratios for technology was prospective and earned major prizes, I believe.

I am sure a limiting distribution could be found when large numbers of countries are used in the model, and this distribution could be used as an approximation for smaller numbers. I haven't produced it.

The assumption of effortless technological catch-up for countries free-riding on an innovator seems too optimistic - which would be acknowledged, I think - but it could also give misleading theoretical predictions. Followers have to do work to adopt technology, which probably means a high level of research and development, and in some circumstances a follower will fall behind a leading innovator no matter what its own efforts at copying are. An adjustment for imperfect transfers could readily be included in the model.

Oil pricing and previous expectations

Earlier this year I said that the expected world oil price would be around US$140 a barrel. The price is now around $38 per barrel. Did I go wrong, and if so how?

The figure was an expectation, not a prediction of the actual price. It is an average over all possibilities for the price, so the fact that the price is $38 does not necessarily mean that the expectation was wrong. That said, the expectation would have looked more likely to be correct if the year end figure had been somewhere between $120 and $160 a barrel, since my expectation implies that $38 is an extreme value and quite unlikely.

The expectation was presented in the context of market predictions that the price would hit $200 or $300 in the near future. The theory I used indicated that a slowing of price growth was more likely than continued acceleration. Acceleration would have been the result of oligopolistic supply decisions rather than of changes in oil demand. My expectation appears more realistic than the market anticipation at the time.

The expectation was presented as being based on economic fundamentals, rather than oligopolistic pricing or market overreaction to price stimuli. Given the possibility that the market has presently overshot and is pricing too low relative to economic fundamentals, it is quite possible that a more objective pricing on fundamentals would give a higher oil price. However, even if the fundamentals-based price is, say, $70, the price is still unlikely to have occurred in a distribution with $140 as the expected value.

In retrospect, the very rapid growth in prices over the last decade looks like a bubble in which actual prices detached from economic fundamentals. I should have evaluated the fundamentals-based price as lower, by projecting the mid-1990s figure forward at a growth rate lower than the observed rate. The projection rate should still have been higher than previous trends, by virtue of the emergence of the large developing countries. I should also have assessed that the price bubble presented significant downside risk, both from market correction and from damage to aggregate demand. An expectation of $80 or $90 would have been better, and I hope I would have said this even if the price today were $200 or $140.

Odd bits and a clear-out

As the end of the year approaches, I thought I would clear out some ideas that have been hanging around for a while. So here goes...

Sunday 28 December 2008

Two interesting ideas from astrophysics

I came across two interesting ideas from astrophysics in recent weeks. The first is that pressure affects gravity. Apparently it is a well known result from a century ago, but I didn't know it.

The second idea is that it could be possible to view what things looked like before the Big Bang created the current universe. The idea was implemented in an experiment, and is appealing in its directness: measure the distribution of the universe's distant energy, and since that energy was generated shortly after the universe started, it may reflect the universe's condition at that time. The original paper is available here if you are interested.

Wednesday 24 December 2008

Diffusion graphs for internet technology

And here are the corresponding graphs for the number of internet users. There appears to be some slowdown in all countries, despite the still low incomes in many of them.

[Graphs: diffusion curves for internet users]

Diffusion curves of computer technology

Here are the corresponding graphs for computer technology. The diffusion curves are not obviously S-shaped, and lower income countries do not seem to be further down the curves this time.

[Graphs: diffusion curves for computer use]

Diffusion curves of telecommunications technology

It is sometimes proposed in technology diffusion research that a technology's spread follows an S-curve - slow at first, then quicker, then slow again. Here are the curves for telephone mainline and mobile phone subscriptions for three African states, one developed nation, and one rapidly developing Asian country. They are pretty much in line with predictions, with the less developed countries lower down the S-curves. The kink in the middle of several curves may arise because telephone mainline technology was becoming saturated by the 1980s, and mobile phone technology then started its own S-curve.

[Graphs: diffusion curves for telephone mainlines and mobile phone subscriptions]

Monday 22 December 2008

US limits military aid to countries with child soldiers

There is a report that the US government is to limit its military aid to governments whose armies include child soldiers. The bipartisan cosponsors of the bill have previously supported other measures aimed at reducing the worst excesses of conflict.

I am unsure how limited the limits are, but it could be a substantial measure, as much US aid is linked to military assistance.

How can theory and empirics best serve policy?

Economics is a subject which aims for real world application. Clarifying the relation between theoretical analysis, empirical analysis, and policy may help to produce more useful research. Of interest here will be how understanding improves control of, or response to, economic events, and what the relative roles and potentials of theory and empirics are.

We borrow the graphical representation from Classification and Regression Tree (CART) models to present a model of a real world relation between two economic quantities. Each quantity is represented by a node, and their possible causal interactions are represented by paths connecting the nodes. By an interaction we mean a mechanism of covariation that is readily conceptually distinguished from other mechanisms. For example, if the first node represents domestic technology usage and the second node represents foreign technology usage, then one path may represent change in technology due to foreign direct investment and a second path may represent change in technology due to trade exposure. The paths may be split into sub-paths by sub-nodes, which represent quantities that causally vary as one of the end node quantities varies and whose variation leads to variation in the other end node quantity. Thus, the sub-paths represent chains of causality connecting the two variables. In the technology spread example, the sub-nodes along the foreign direct investment path may be {transfer to other companies in the economy beyond the immediate recipient of technology}, then {skilled usage by all domestic companies}, then {profitability of the operation}, then {technology usage throughout the economy}. Further sub-nodes may be introduced if the mechanism of transfer is further deconstructed, such as introducing {the movement of people between foreign direct investing companies and the rest of the economy} as a sub-node between {the original transfer} and {the spread to other companies}.

The last three steps in the FDI path (skilled usage, profitability, and wide technology usage) are shared with the other path, capturing the effect of technology exposure through trade. Thus we have a graph showing the modelled interactions:

[Diagram: two nodes connected by an upper FDI path and a lower trade path, sharing their final sub-nodes]

The upper path represents the FDI route for technology transfer. The lower path represents the trade path, and is less fully modelled.
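
For concreteness, the example graph might be encoded as a simple adjacency structure; the node names below paraphrase the ones used above.

# A minimal sketch of the technology graph; node names are paraphrased
graph = {
    "foreign technology usage": ["FDI transfer", "trade exposure"],
    "FDI transfer": ["spread beyond the immediate recipient"],
    "spread beyond the immediate recipient": ["skilled usage by domestic companies"],
    "trade exposure": ["skilled usage by domestic companies"],  # shared sub-path
    "skilled usage by domestic companies": ["profitability of the operation"],
    "profitability of the operation": ["technology usage throughout the economy"],
}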

In analysing such modelled graphs, a theory proposes the existence of a new path between two nodes, or the introduction of new nodes in an existing path. In the proposed technology graph, suggesting that licensing is also a means of technology transfer is a theory. Another theory is that trade exposure's action on growth requires foreign importers to demand high production standards and a domestic knowledge base to adapt to the standards, introducing two new nodes in the lower path. The theories here would be more properly called hypotheses, since they do not have to be correct; if they are not, the nodes have no real world connection and the node quantity covariation is zero.

An empirical study tests the relation between quantities on two nodes, and may be informed by, and test for, the graph connections suggested by theory. It may test the relative strengths of two different paths between the nodes. If the empirical study tests the relation between quantities on two nodes separated by other sub-nodes, then controlling for the sub-node quantities removes the effect of the first node quantity acting along those sub-paths, so that any robust remaining effect indicates the presence of other sub-paths between the two nodes.

Policy usually looks to control the variation in one node’s quantity in response to changes in other node quantities. It will often be indifferent to the precise paths taken from the control quantities to the response quantities. In the technology graph, a policymaker may wish to maximise the use of a technology in their economy by transferring it from abroad, and will use any means necessary to obtain the maximum throughput in the graph. Theory sometimes returns at this stage to specify decision rules by which the policymaker, having precisely stated outcome preferences, optimises the changes in control quantities. The decision rule is required for optimisation if all paths cannot simultaneously operate with their maximum response. In the technology graph, for example, increases in intellectual property rights protection may raise the level of investment but lower the ability of local companies to copy the technology.

Under these definitions, we can propose some best possible outcomes for policy from theoretical and empirical analysis. Theory could find new paths for a modelled situation and emphasise the most important ones. Empirics could show overall node covariation along multiple paths, show covariation along specified paths, compare covariation along different paths, and demonstrate how path covariation changes in response to path choices in the whole graph. Theory could also connect the entire analysis with policy by presenting means of exploiting the paths and proposing decision rules for changing node quantities that reflect policymaker objectives. A decision rule may be specific to the objective and not readily adaptable to alternative objectives, which in a competitive setting – for example in competition for the receipt of a particular technology – may give the policymaker an advantage in achieving their objectives over rivals with alternative objectives, at least until they can devise their own decision rule and compete on the basis of economic fundamentals.

Thursday 18 December 2008

Macromodel displaying cyclical contraction - part 1

Here is a small macromodel showing sustained spirals in output in response to changes in expenditure preferences, capital accumulation, and different market stickinesses, with a competitive or monopolistically competitive labour market. It is in jpg form to allow display of detailed equations and graphs, and can be made larger by clicking on the image.

Macromodel displaying cyclical contraction - part 2

Macromodel displaying cyclical contraction - part 3

Macromodel displaying cyclical contraction - part 4

Monday 15 December 2008

Linearising macroeconomic models

Linearisation of macroeconomic models is common in theoretical and empirical work, so it is worth considering when it is applicable.

A function f of some economic quantity x can be written as

f(x+e) = f(x) + e*f’(x) + O(e^2)

where the dash denotes differentiation and the O term is of order e^2. Linearisation assumes that e is small, so that the O term is negligible, or that there is no O term at all, so we can write

f(x+e) = f(x) + e*f’(x)

The truncation masks misspecified models for small variations of the data, since almost any model has a truncated form like this one. For larger variations, the linearisation will not hold if the model is misspecified. It may also not be much use unless there is a stable steady state, in which small variations in x result in x returning to its initial value. Even if the model specification is correct, the linearisation will not hold non-locally unless it happens to coincide with the full expansion, with the O term identically zero.
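
A small numerical illustration of how the truncation degrades away from the expansion point; the function here is invented for the purpose.

import numpy as np

f = np.log                   # an arbitrary example function
fprime = lambda x: 1.0 / x   # its derivative
x0 = 1.0
for e in [0.01, 0.1, 0.5, 1.0]:
    exact = f(x0 + e)
    linear = f(x0) + e * fprime(x0)
    print(e, exact, linear, abs(exact - linear))   # the error grows like e^2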

Either the quantities of interest are in a stable steady state - often not true in developing countries, and frequently not true in developed countries either - so the steady state should be demonstrated, but frequently isn't. Or the linearisation coincides with the full expansion of f and the O term can be neglected, in which case evidence of goodness of fit should be given across the full range of inputs, but again frequently isn't.

How much economic activity happens? (Reprise)

The question came up once before on the blog. One way of answering it is to say that supply and demand will be equalised, so that the quantity of goods supplied is such that buyers and sellers agree on a common price. Equivalently, we could look at pricing in terms of marginal costs and marginal revenues. Or we could draw a supply and demand diagram, which would give more information because it shows the way in which adjustment to the equilibrium occurs.

None of the stated answers describes the full dynamics of the situation. A supply and demand diagram, for example, together with knowledge about the presence of a market mechanism, does not tell us how long the adjustment will take. The set of supply and demand equations would have to be supplemented by an equation giving the market adjustment process over time when out of equilibrium, and the evolution of the equilibrium over time.
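
As a minimal sketch of such a supplementary equation, here is the classic rule in which price moves in proportion to excess demand; the linear curves and the adjustment speed are invented for illustration.

# Price adjusts in proportion to excess demand: dp/dt = k*(D(p) - S(p))
demand = lambda p: 100.0 - 2.0 * p
supply = lambda p: 10.0 + 1.0 * p
k, p = 0.1, 5.0                        # adjustment speed, starting price
for t in range(30):
    p = p + k * (demand(p) - supply(p))
print(p)                               # approaches the equilibrium price of 30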

Compressing the set of equations into a compact form may yield analytical benefits. Representation in terms of maximisation or minimisation of an integral term seems promising, as it puts Lagrangian theory at our disposal.

Demonstrating the equivalence of the various forms of supply and demand interaction requires some algebraic microeconomic theory, but doesn't appear intractable. To show the equivalence of integral minimisation to supply and demand equalisation, it may also be possible to borrow a proof from physical science. The minimisation of the action integral in mechanics can be shown to be equivalent to the balance of forces at each point in time, so the proof should translate.

Thursday 11 December 2008

Exchange rates, tradables and non-tradables

I am reading a recent review of evidence on exchange rate fluctuations and their links to the prices of internationally tradable and non-tradable goods. The evidence shows that exchange rate changes are closely linked to variations in both tradable and non-tradable goods prices domestically. Earlier theorists proposed that only tradable goods prices would be closely linked with exchange rate fluctuations, since exchange rates are determined in part by purchases of currencies in order to buy tradable goods, and evidently not by purchases of non-tradable goods.

Not having reached the end of the review yet, I haven't read the author's proposed theoretical resolution of the difficulty, but a priori the proposition of no link between non-tradable prices and exchange rates seems clearly flawed theoretically. One would expect the prices of tradable and non-tradable goods to stay within reasonably similar bands relative to each other, even in entirely separate economies, if the production functions and inputs for the goods are similar. So in economies where exchange rates are determined by tradable prices, they will also be linked to non-tradable prices.

The review may propose this mechanism too. I'll find out today. In theoretical work, the effort is often in the exact mathematical modelling rather than the basic ideas.

Modelling transaction costs with geographic distance

I mentioned in a recent post that one can often improve models by increasing the detail in their microfoundations. The case of transaction costs in output models illustrates my point.

Suppose that annual output is modelled by annual output = A*capital^B, where A and B are constants. Then we could include transaction costs as annual output = A*capital^B + K, for K a constant reflecting the expense of transacting, presumably depending on the distance between buyers and sellers of the goods.

We could specify more detail: output per transaction = A2*capital^B2 + K2 for A2, B2, K2 constants, and then sum over the number n of transactions per year to get annual output = n*A2*capital^B2 + n*K2. So as the number of transactions increases, the advantage of local trading increases, other things being equal.

The observation isn't very smart, but illustrates a point.

Environmental damage as analogous to aggregate demand externalities

I am looking at whether environmental damage can be modelled macroeconomically in the same way as aggregate demand externalities. A quick search did not show anyone having modelled environmental damage in exactly the way I am thinking, but probably someone has done something similar.

Aggregate demand externalities arise when people's and companies' individual actions alter the total demand for all goods in the economy. These actions may be modelled by assuming that each person or company is affected by its own decision far less than the whole economy is. The collective effect of all actions on everyone in the economy can nevertheless be large.

The way a macromodel including environmental damage could be set up seems clear enough. People assume their individual actions do not have much effect, but collectively they do. So there is scope for governmental action, depending on the model parameters.

Guinea worm eradication

Cases of Guinea worm disease have reached a new low, and there are plans to eradicate it fully, according to reports.

Thanks to the government and private sector funders of the initiative.

Monday 8 December 2008

Applications for MA Economic and Governmental Reform at the University of Westminster

Here's a reminder about applying and getting funded for the MA Economic and Governmental Reform at the University of Westminster.

I teach economics on a Master's course at the University of Westminster in London. The course title is MA Economic and Governmental Reform, and it runs from September to September. We are presently recruiting for next year's course.

The course requirements are listed on its website (linked here), although there is some flexibility. Unavoidable ones are:

1. Reasonable English (or things won't make sense)
2. A first degree with some relevance to the topic, or a degree and relevant work experience
3. A job, or potential job, in government (people from NGOs have historically also performed well)
4. Willingness to work hard (or things will not be enjoyable)

African applicants are most welcome and have good performance records. Information on the course and on obtaining funding is on the website. The course, like most in the UK, is expensive (£10,000), so students usually apply for scholarships first. Early application is recommended.

Getting the most out of mining investment

Africa receives a small proportion of world investment, so it is important that the region derives the greatest possible advantage from its sizable mining investment. I present here some suggestions to help with the task, as it concerns the spread of technology, a likely major factor in promoting economic growth.

Technology expansion theory distinguishes between technological spillovers, which are associated with increased local innovation following exposure to foreign technologies, and technological spread, which is increased local adoption of existing foreign technology. The latter is more relevant in Sub-Saharan Africa because of its lower levels of research and development expenditure. Included among the factors identified in studies as determining increased technological spread are: the local population's exposure to foreign technology, its applicability to the rest of the economy, geographic or trading proximity of local firms to the technology operator, whether local firms can copy the technology without legal prosecution, whether local firms are competent enough to copy the technology, whether local firms have the managerial skills to implement the technology, and whether there is sufficient domestic pressure to encourage copying or adaptation.

Many characteristics of mining investment do not tend to support technological spread through these mechanisms. The mining industry, above all hydrocarbons, can have a small workforce with a high proportion of foreign managers, can have narrowly specific technology with a higher capital to labour ratio than the rest of the economy, may be geographically isolated or offshore, may be isolated by its security, may have few linkages with the rest of the economy, may have higher skill or experience requirements for operation than most local firms, and may operate in a local or international market with low competition.

These characteristics obstruct technology’s flow, but government decisions when negotiating contracts can help to remove the blockages. The following suggestions broadly correspond to each of the problems, although there is some overlap in their effects.

Local exposure to the mining industry's technologies and procedures could be increased by requiring a reasonable proportion of local employees at every level of the company, from junior to senior managerial. Local participation is required not in exact parity of remuneration, which will be determined by the operation of international and domestic markets separately, but in decision making and responsibility, so that local employees have exposure to best international practice. The international company should have freedom in choosing and training its staff, within the parameters of selecting locals, since it is its knowledge and demands which are important for transferring skills. In addition, local exposure may be increased by requiring inspection of company equipment by government, university, and local company scientists, under the remit of national training programmes.

Further exposure may be encouraged by restrictions on the form foreign company participation takes. Foreign direct investment may be restricted, so that international companies may have to work through and with local partners, chosen by the international companies. A 50 percent partnership requirement may be a good way of exposing local participants to international expertise, standards, and demands.

Loose patent protection for mining technologies would reduce the risk and expense of copying technologies for local firms. Ordinarily, loose protection can be double-edged, since companies may be reluctant to invest in an economy at all if their intellectual property is threatened, or they may take measures to reduce its local exposure. Given the measures suggested above that deliberately expose local workers to their technology, copying becomes even more probable. However, in the case of mining investment, the typically low level of competition and high returns may make patent protection a secondary consideration.

In response to the risk of intellectual property loss, international companies may prefer to transfer older technologies which are not their most advanced or are not patent protected, but the choice may be advantageous for the receiving economy, since the older technology may be more compatible with its overall development level. The adoption may be facilitated by local company investment, and by local research and development in the industry. Some studies indicate that much of the impact of local R&D is not in producing new goods – the finance for it is far below that in the rich economies – but in easing the transfer of existing knowledge.

Thursday 4 December 2008

Covering all routes to growth

Economists who study economic growth have not agreed on the exact causes of growth in developing countries. Some stress that it is factor accumulation which is important, so that countries grow rapidly because they invest and educate at high levels; others stress that it is knowledge transfer from developed countries which is important, where knowledge includes both theoretical studies and applied management skills; still other economists say that both contribute to growth. The last group seems to be the largest among leading professional economists, and examinations of the interactions between technology and accumulation lead to the most interesting conclusions.

The rapidly growing East Asian countries tended to accumulate at high levels, but also adopted investment, trade, and educational regimes which led to accelerated transfers of technical knowledge. So it did not matter which route to growth was correct; they had covered both.

Moment conditions for unbiased double-endogenous IV estimation

I mentioned last Thursday that it may be possible to find an unbiased estimate of a parameter using IV methods if two endogenous variables have a known relation in the bias they produce in individual IV estimates. Here are the moment conditions corresponding to the expectation and variance assumptions given last time, for an error u and endogenous instruments v and w:

E((v - a*w)*u) = 0
E(v^2*u^2) - E(v*u)^2 = a^2*(E(w^2*u^2) - E(w*u)^2)

where a is an unknown constant.

What is happening is that these two equations allow a reduction by one in the number of parameters of u to be estimated, since we have two new equations but have added only one extra parameter, a. The reduction offsets the contribution of the unknown bias parameter to the parameter count, and means that the system is as well identified as it would be without the bias.
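
For concreteness, here is a minimal sketch of the sample analogues of the two conditions, which a GMM routine would drive towards zero jointly with the model's other moments; the variable names are assumptions.

import numpy as np

def sample_moments(u, v, w, a):
    # Sample analogues of the two moment conditions above; inputs are 1-D arrays
    m1 = np.mean((v - a * w) * u)
    m2 = (np.mean(v**2 * u**2) - np.mean(v * u)**2
          - a**2 * (np.mean(w**2 * u**2) - np.mean(w * u)**2))
    return np.array([m1, m2])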

I think this representation makes the direction of further generalisations clear.

Estimating the CES production function

The CES production function assumes that economic output is given by

Output = alpha*(beta*K^gamma + (1-beta)*L^gamma)^(1/gamma)

where the Greek letters are constants, K is capital, and L is labour. It is defined for gamma<>0.

When we want to estimate the parameters, we can use non-linear least squares methods, or we can transform the equation a little to get

Output^gamma = alpha^gamma*beta*K^gamma + alpha^gamma*(1-beta)*L^gamma

and estimate the equation by constrained least squares. We may even introduce a country specific term, although it is added to output^gamma rather than to output, which would be more usual.
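
Before turning to results, here is a minimal sketch of the direct non-linear least squares fit on invented data; the parameter values and sample are assumptions for illustration.

import numpy as np
from scipy.optimize import curve_fit

def ces(X, alpha, beta, gamma):
    K, L = X
    return alpha * (beta * K**gamma + (1 - beta) * L**gamma)**(1 / gamma)

rng = np.random.default_rng(1)
K = rng.uniform(1, 100, 500)
L = rng.uniform(1, 100, 500)
y = ces((K, L), 1.2, 0.4, 0.3) + rng.normal(0, 0.5, 500)
params, cov = curve_fit(ces, (K, L), y, p0=[1.0, 0.5, 0.2])
print(params)   # estimates of alpha, beta, gamma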

Here are the results of a non-linear least squares estimation, using panel data for world countries over the Penn World Tables range, with five year groupings of data:

y = 34.7 [0.24] + (0.52 [0.01] * K ^ 0.08 [0.30] + (1-0.52) * L ^ 0.08)^(1/0.08)

p-values are in square brackets.

gamma is close to zero, which is the limiting value at which CES behaviour becomes identical to that of the Cobb-Douglas function. This very rough estimation indicates that the frequently used Cobb-Douglas form might not be too bad an approximation.
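
The limiting behaviour is easy to check numerically; the input values here are invented.

import numpy as np

K, L, alpha, beta = 4.0, 9.0, 1.0, 0.3
for gamma in [0.5, 0.1, 0.01, 0.001]:
    ces = alpha * (beta * K**gamma + (1 - beta) * L**gamma)**(1 / gamma)
    print(gamma, ces)                      # approaches the limit below
print(alpha * K**beta * L**(1 - beta))     # the Cobb-Douglas value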

To improve the estimation, more complicated estimation methods could be used, but the toolbox is smaller than with the Cobb-Douglas function because the CES form is harder to handle.

Tuesday 2 December 2008

Maximising the speed and breadth of a technology’s spread

The discussion of multinational enterprises in the last post suggests their role in another often analysed phenomenon, the global spread of technology. Often the focus in the literature is on maximising knowledge spillovers from a company to local producers, so that the latter can produce as efficiently as the former. This mechanism is probably important in the catch-up of incomes between countries. The underlying motivation is thus usually increasing economic growth, but there are others. For example, international bodies may want to spread a technology as widely and quickly as possible if it is found to reduce the degree of global warming without lowering economic productivity.

If the spread of a technology is of interest in itself, then two major questions arise: whether the underlying technology is successful as a marketplace proposition, and how to maximise the spread of the technology given its viability. Evidently, if a technology is a commercial disaster, it is not going to get very far. Many papers on technology spread assume that a technology is already commercially viable, or use measures of local viability such as market prices. However, the available price data is not always reliable, so the first position is the usual default. It is reasonable to assume that a technology which has already been successful in one market, particularly in an OECD technology leader where most formal R&D innovation occurs, is potentially commercially viable in other countries now or in their futures.

The second question has many aspects: would posting the design of the technology free of charge on the internet be most effective? Most companies could see it, but how would the innovation be encouraged in the first place? Perhaps innovators could be financially supported by their subsequent provision of the tacit knowledge required for the technology's successful operation. But then, if the body of tacit knowledge is very large, the technology's spread will be slow.

Alternatively, the technology could be the property of a single multinational, and it could spread easily and efficiently through its subsidiaries. But then how would it get beyond the company? Other companies would have to innovate to copy it, or the knowledge would have to leak from it. Both processes could be slow.

The most rapid spread may come from a large intergovernmental or philanthropically funded organisation creating a technology with little tacit knowledge, and presenting the technology free through publicly available or trade-based sources. Such efforts are probably infrequent in high-tech fields, although some parts of Internet and Web innovation may be partial examples. A possible commercial alternative might go something like this: a small high tech company establishes a niche in its domestic market, and licenses to foreign companies. Leakage of its ideas is very likely, as it will lack the financial resources to protect its intellectual property, but given its small size, licensing and export income may be sufficient to support domestic expansion or branding efforts. The business plan is risky, but in common with other start-ups, could lead to commensurately high returns.

The curious existence of multinational enterprises

I used to take the structure and presence of multinational enterprises for granted; recent readings have rid me of the prejudice. The literature on them points out that there is nothing inevitable about their existence. A company with a product to sell does not have to set up overseas, but could remain in its domestic market. Even given that profit maximisation might lead it to want to sell as widely as possible, it could export its products to the foreign market without physically being there, or it could sell licenses to a foreign company to manufacture its goods under its brands or using its technology.

The explanations for MNE existence turn on the high knowledge content of the goods they manufacture, where knowledge is understood either to be the explicit scientific, patentable content or the tacit content of a company’s productive organisation. The high knowledge content is a characterising feature of their production; some estimates have found that most of the world’s research and development is done by them.

Within the general recognition of the importance of MNEs' intellectual property, there are several competing explanations for their spread. One emphasises that companies wish to maintain tight control of their intellectual property, and local licensing would put it at risk. Another stresses that because of the high tacit content of goods, only the company itself can efficiently produce its own goods. A third underlines the role of trade barriers, which make local production preferable to exporting, and which exist partially because countries wish to ensure knowledge spillovers.

Use of supply-demand graphs in economics research

Here are a few points on supply-demand graphs; it is not always obvious why they are favoured over algebraic solutions, so the points may help clarify.

1. Simultaneous equations expressing market supply-demand behaviour are often not solved explicitly, but by plotting in graphs (so showing how a market mechanism would adjust to obtain the equilibrium)
2. The effect of parameter variation is shown by shifts in graphs and then stated verbally
3. The effect of assumption variation is shown by shifts in graphs and then stated verbally
4. The verbal descriptions of effects on graphs are explicit (“rotates curves”, “shifts curves out”). The curves and the quantities and behaviour they describe are treated as synonymous (“the money supply grows shifting the demand curve to the right, which in turn increases profits”)

Algorithm generating theoretical economics papers

Here is an algorithm for generating the main sections of theoretical economics papers. It is an application of the earlier proposal for research paper structuring (also posted here), and has been tested against four recent famous economics papers.

1. Present a set of equations describing a situation (whether a base situation, an additional feature, or an adjusting feature)
2. Solve them
3. Deduce a property and discuss its implications (the property should arise by virtue of the base situation or its additional feature, and should concern an important topic)
4. Give a numerical calculation or table based on varied parameter values tried in the property equations
5. Highlight key variables used in the property
6. Describe the property’s relation to other properties in the paper or elsewhere.
7. Repeat from step 1 or step 3

Saturday 29 November 2008

Never enough microfoundations

My previous post suggests that declining returns to scale could be generated by a more fundamental productive process. Declining returns to scale would be considered part of the microfoundations of many macromodels. The process of decomposing microfoundations into their own microfoundations could continue as far as you like, and the diversity of knowledge and its possible applied combinations would rise exponentially as a result. I think economics should be treated as a science, with the same procedures of extensive and intensive analysis carried out to obtain the same benefits.

Offsetting diminishing returns

My last post, on the potentially higher-than-exponential product diversity generated by scientific innovation (which could equally apply to combinations of productive processes), suggests one possible route by which a productive factor might exhibit increasing returns to scale. Usually it is assumed that factors exhibit diminishing returns through exponential contraction, but if the usual decline is offset by increasing product diversity then eventually the greater-than-exponential growth may dominate.

The argument often given for declining returns to scale is that the factor will have less of the other productive factors with which to work, so that output will fall off. There is enough empirical evidence to support the claim, and it is intuitively clear as well. But increasing product diversity could offset it, indicating that the rationale is not the most fundamental specification of productive interactions. A detailed microfounded production function embedding the two causes would help to clarify the situation, and would also suggest as yet unknown influences on returns to scale.

Competing theories about technologically driven endogenous growth

I have indicated in the last few weeks the importance attached to technological innovation in economic growth. The ideas about how it occurs could also be used to describe the processes of academic research, so the theories should come quite naturally to a reflective analyst.

The most immediate theory about research views its characterising process as the production of incrementally improved new products for a market with constant demand. Another theory emphasises the variety of capital goods produced by the market, and a related set of theories considers the characterising process to be the act of innovation lowering the future cost of learning. The last two sets of theories stress that human knowledge is cumulative, in the same way that economic researchers attempt to learn from the work of other researchers, or find short cuts in their past studies to speed up future production. The theories tend to recognise the abstract scientific elements of innovation, but stress more the applied elements, where people have to invest a great deal of time in finding out how to work with the scientific processes.

The underlying nature of the science in scientific innovation is, I think, relatively downplayed. Science today tends to be intensive as much as extensive, trying to discover the underlying processes of known phenomena rather than looking for new ones. The result is that the number of possible scientific applications can be subject to exponential, or faster, increase. Consider, for example, if we know about the existence of the atom, and then are informed that atoms consist of protons, electrons, and neutrons. We had just one object for use before the information; now we have three. If the subatomic particles are themselves split up into three components, we have 3 x 3 = 9 components. The number of combinations of inputs in the first instance is two, an atom or not; in the second instance it is 2^3: electron or not, proton or not, neutron or not; in the third instance it is 2^(3^2). This greater-than-exponential growth occurs in chemistry and biology too, for example in the decomposition and rearrangement of DNA strands.
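
The arithmetic of the example can be written out directly; k counts the rounds of decomposition.

# Combinations available after k rounds of splitting each component into three
for k in range(4):
    print(k, 3**k, 2**(3**k))   # number of components, and subsets of components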

Thursday 27 November 2008

Technology's relation to capital accumulation

There is a debate in economics about how much economic growth is caused by accumulation of capital (physical or educational), and how much by accumulation of technological knowledge, and whether one requires the other to have an effect.

Much of the debate takes place in theoretical domains, because the data, when analysed with conventional estimation techniques, can often be interpreted in more than one way. To give an example, national output might be increasing because of improvements in knowledge, with capital investment occurring in the same proportions because the knowledge makes investment more profitable. Or output might be increasing because of capital accumulation, with skills building up as a result of the increased capital. Either way we have data where knowledge, capital, and output are all rising, and the explanations can't be separated by the usual studies of common international datasets.

TV programme criticising the operation of aid

There was a UK television programme this week criticising the operation of aid in Africa. You can find the web link here.

In my experience, many former aid workers criticise the operation of aid. Current aid workers tend to be less critical, oddly enough. The award for most critical analysis goes to former World Bank employees, who have often been very harsh in person and in print.

Unbiased IV estimation combining an endogenous variable and its endogenous lag

Here is a method of instrumenting a regression equation if there are no fully exogenous variables, but it is possible to make certain assumptions about the relation between the available endogenous variables. These assumptions are more likely to be met by lagged variables.

The unbiased estimation of the parameter B in the matrix equation Y = X.B + e presents difficulties if the available instruments W are correlated with the error term, since under IV estimation we have E(B(est)) = B + E((W'X)^-1.(W'e)), and the expectation term will be non-zero since E(W'e)<>0.

We may be able to find two instruments V and W such that

Var((V'X)^-1.V'e) = a^2.Var((W'X)^-1.W'e) + independent error

and

E((V'X)^-1.V'e) = a.E((W'X)^-1.W'e) + independent error.

The conditions say that the two instrumental variables are related in their behaviour relative to the error term, and may plausibly apply when W is the original variable X and V is one of its lags, in which case a would probably be expected to be less than unity. We can regress Var(V'e) on Var(W'e) to get an estimate of a^2, and hence of a. Then the first instrumental variable estimation using W on X gives E(B(est,W)) = B + delta, where delta is the bias, and E(B(est,V)) = B + a.delta, so we can estimate the bias as (B(est,W) - B(est,V))/(1 - a(est)). The bias estimate is asymptotically correct because of convergence in distribution of the numerator and in probability of the denominator. Thus, we can calculate the unbiased B as B(est,W) - bias(est).
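
Here is a minimal sketch of the bias-correction arithmetic in a scalar setting, taking the estimate of a as given; the variable names are assumptions, and the estimation of a itself is not attempted.

def bias_corrected_iv(y, x, w, v, a_hat):
    # Inputs are 1-D numpy arrays; w is the endogenous instrument, v its lag
    b_w = (w @ y) / (w @ x)   # IV estimate with instrument w: E = B + delta
    b_v = (v @ y) / (v @ x)   # IV estimate with instrument v: E = B + a*delta
    delta_hat = (b_w - b_v) / (1.0 - a_hat)   # solve the two equations for delta
    return b_w - delta_hat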

Geometrically, the assumptions amount to allowing further projections of V on W beyond the usual IV ones. The assumptions could no doubt be weakened.

Monday 24 November 2008

Technology's S-curve and parameter standard errors

Technology is often proposed to spread slowly at first, then become more widely accepted at a faster rate, then slow down as most people become familiar with it or its utility declines. Graphically, its spread follows an S-curve over time, like this:

[Graph: an S-shaped technology diffusion curve over time]

The S-curve may be modelled by fitting a time dependent function like the logistic function:

Technology use = a0/(1+exp{-(a1+a2*time)})

where the a terms are constants to be estimated. They can be estimated by non-linear least squares, which minimises the sum of (Observed values - predicted values)^2. The usual procedure is to approximate the predicted values by their Taylor series linearisation, or its numerical approximation, so we have to minimise the sum of

(Observed values - (b0 + a0*f1(t) + a1*f2(t) + a2*f3(t)))^2

where the fs are functions of t, time.
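
A minimal sketch of the direct non-linear fit (rather than the linearised iteration just described), on invented data:

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a0, a1, a2):
    return a0 / (1.0 + np.exp(-(a1 + a2 * t)))

t = np.arange(40.0)
use = logistic(t, 80.0, -4.0, 0.25) + np.random.default_rng(2).normal(0, 1, 40)
params, cov = curve_fit(logistic, t, use, p0=[70.0, -3.0, 0.2])
print(params)   # estimates of a0, a1, a2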

A complication arises in the estimation of standard errors. Because the regressors are functions of time, the elements of their cross-product matrix do not all converge at the usual least squares rate (the sample size), and so the usual estimates of the least squares standard errors will be divergent.

Accurate convergent standard errors could be calculated by working out the order of magnitude of the sums of the fs, and premultiplying the cross-product matrix by a suitable rebasing matrix. The precise forms of the sums may not be neat, but the orders of magnitude should be obtainable without too much difficulty.

Science news, globally and from Africa

Here is a website, Science Daily, with regularly updated news about all sorts of science. It is visually pleasing, and there is much science about the African continent. In addition to the usual suspects of HIV and malaria research, here are some other headlines:

Role Of Slave Trade In Evolution Of American Wild Rice Species

Sierra Leone: Collecting Health Data In Areas With No Power Supply

Unraveling Lion's Natural History Using Host And Virus Population Genomics

Sunday 23 November 2008

Good features in a national production function

What would be good features to have in an analytical function which expresses the behaviour of national production? Here are a few quick suggestions:

1. The main accumulating determinants of growth are included
Accumulating determinants are things like physical capital and education.
2. It allows for different elasticities of substitution
The elasticity of substitution measures how much of one input is substituted for another if the first input becomes better value. It measures market flexibility.
3. It allows for different income distributions
4. It allows for different innovation potentials

The Cobb-Douglas production function is widely used, and meets criterion one. The CES production function generalises Cobb-Douglas and meets criterion two. Further generalisations might meet the other criteria.

Once a production function starts getting really complex, it is probably time to move into a full macroeconomic model, since the assumptions and interactions can be spelt out more exactly.

The AR(1) model in technology transfers

The AR(1) model - that is, y(t) = a*y(t-1) + other terms + an error term - occurs in models of technology transfer between and within countries. It is also widely used in other areas of economics, such as growth theory and pensions.

In technology transfer, it is known as the Gompertz model. The coefficient a measures diffusion speed, since the equation can be rewritten

y(t)-y(t-1) = (a-1)*y(t-1) + other terms + error,

so that the increase in y will be larger when a is larger.
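
A minimal sketch of the point, with invented numbers and the error term suppressed:

def ar1_path(a, y0=1.0, other=0.5, periods=10):
    path = [y0]
    for _ in range(periods):
        path.append(a * path[-1] + other)   # y(t) = a*y(t-1) + other terms
    return path

print(ar1_path(a=0.9))   # larger a: larger period-on-period increases
print(ar1_path(a=0.5))   # smaller a: diffusion levels off sooner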

As with growth theory, using a lagged term presents some problems, and it would be a good idea to work towards finding more precise determinants which reduce its significance.

Given the ubiquity of the AR(1), I think I will write up my earlier work on its estimation with GMM. Some of it has been put on this blog.

Thursday 20 November 2008

Soviet economic performance and the production functions for growth

I read a paper this week on the economic performance of the Soviet Union during its history, entitled "Soviet Economic Decline: Historical and Republican Data". It's available through Google Scholar, if you are interested. Some of my students from transition countries ask me about the USSR's performance relative to capitalism, so hopefully this will bolster my knowledge - I was a little ignorant, as the USSR dissolved nearly twenty years ago and many economists had dismissed communist economics long before and looked for more current challenges. Examining the USSR's performance also helps to answer questions about the origins of economic growth, and how much can be attributed to the different productive factors.

The authors use Western and Soviet data to find that Soviet growth declined from extremely high rates in the 1950s to low rates in the 1980s, despite high rates of investment and education. The authors find that the physical capital stock grew steadily, but its return declined sharply, so that the Soviet physical capital to output ratio was very high by world standards in the 1980s. The estimates of productivity growth depend on the source of data, although productivity growth is at best low after the 1950s assuming a Cobb-Douglas production function. Industrial productivity growth remained positive until the 1980s, while non-industrial productivity growth was negative, most of all in agriculture.

The authors perform non-linear least squares estimations to get the parameters of a CES production function, replacing the Cobb-Douglas function to see if they obtain explanations other than declining productivity growth. They find, with three of the four datasets, that the CES substitution parameter is low, meaning that labour and capital do not substitute easily for each other when their marginal returns change. The authors give a possible interpretation: capital accumulation would normally be replacing labour and so supporting its own return, but this was not happening in the USSR, perhaps because the type of capital was not labour-replacing. Other explanations are possible; for example, if capital was so unproductive, in a market economy more labour would have been hired in its place, but Soviet labour markets were not flexible.

The authors emphasise the explanations from the CES estimations in preference to the competing explanation from the Cobb-Douglas function, and are probably right to do so, given that Cobb-Douglas is a limiting case of the CES function. More generally in growth theory, the Cobb-Douglas function is widely used, but its implication that capital and labour substitute freely for each other is a major one, given the significance of market rigidities in constraining growth, and the CES function may lead to different interpretations of growth performance. The authors themselves point out the relevance of their observations for Asian countries which have grown rapidly through capital accumulation (although the causes of their growth are debated).

The meaning of productivity

Productivity is a commonly used term among economic commentators, and even more frequently used in academic writings. Actually defining and interpreting it is difficult, however.

Here's a quick definition of labour productivity: national output divided by the size of the workforce. The definition is the obvious one, but it does leave some questions open. If the quantity depends on labour itself, for instance if national output rises disproportionately quickly as the workforce gets larger, then to interpret the quantity one may also want to specify the size of the workforce, to express the full relation between workers and their production.

Then there's the question of interpretation. The phrase high labour productivity implies that workers are smarter or more diligent than those with lower productivity, and if they are, then productivity will usually be higher, other things being equal. However, economic output depends on many things besides the intrinsic qualities of workers, such as the amount of physical and financial capital in the economy. So a lazy worker in an advanced economy will generally have far higher productivity than a worker in a less advanced economy, because the advanced economy has greater non-labour productive assets.

To avoid some of the problems, economists have looked for a definition of productivity which does not depend on the common factors of production at all. If the economy is known to produce goods in such a way that output equals a constant times capital times labour force size, then we can divide output by capital times labour to get the constant, which we can label productivity. It measures the productivity of the productive factors, so that it could be considered to measure the productivity of the total economy.

There are some difficulties thrown up here, too. Economists do not know exactly what production function operates in the economy, and any function used is an approximation; it may not even have the correct functional form. The constant will often not really be constant, but will depend on the factors of production, so this definition encounters the same problems of factor dependency discussed for labour. And since we are admitting that we only have estimates of the production function, our productivity measure is inexact, and we could have many different measures of productivity.

If we had perfect knowledge of the production function in terms of the specified factors, then productivity would be interpreted as the contribution to growth of all productive factors which are not explicitly stated. Thus, once we include more productive factors, the economy's estimated productivity would vary.
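
A toy numerical illustration of the last point, with invented numbers and functional forms:

Y, K, L, H = 1000.0, 40.0, 10.0, 5.0    # output, capital, labour, a third factor
A_two_factor = Y / (K * L)              # productivity when output = A*K*L
A_three_factor = Y / (K * L * H)        # adding a factor changes measured A
print(A_two_factor, A_three_factor)     # 2.5 versus 0.5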

Let's summarise. In commentary, productivity tells us as much about the productive factors to which it doesn't refer as about the ones it does, and in economic analysis, it measures how much we do not know about production.

Monday 17 November 2008

Is world aggregate demand sufficient for growth?

Is world aggregate demand sufficient to keep the world economy growing? As the world's industrial economies enter a downturn and people there have less money to spend, the demand for goods from developing countries may not be enough to purchase all the goods produced in the world; those countries may then reduce production, leading to lower incomes in their countries, further reductions in demand, and a worsening downturn.

The downward demand spiral can be analysed by looking at marginal propensities to consume. When developed country consumers were spending large proportions of their incomes, many goods produced around the world could be sold relatively easily. As those consumers reduce their expenditures, the goods become more difficult to sell, because purchasing power shifts to people with lower propensities to consume out of their income, like the very rich in developed countries or people in high-investment countries. If producers scale back their production in response, then developed country consumers could have lower incomes from their own jobs, and the situation repeats with the same low aggregate demand propensities, except that the economy has shrunk by a certain percentage. The economy could keep going like this until there is no economy at all.
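
A toy numerical sketch of the spiral, with invented numbers; note that with autonomous spending held fixed, the process converges to a lower income level rather than vanishing.

def income_path(c, autonomous=100.0, y0=500.0, periods=10):
    # Next period's income is consumption out of current income plus
    # autonomous demand; c is the marginal propensity to consume
    y, path = y0, []
    for _ in range(periods):
        y = c * y + autonomous
        path.append(round(y, 1))
    return path

print(income_path(c=0.8))   # stays at 100/(1-0.8) = 500
print(income_path(c=0.6))   # shrinks toward 100/(1-0.6) = 250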

Of course, this shrinkage hasn't happened before because of government intervention to increase marginal propensities to consume (which is what borrowing to spend does), and also because as people get poorer they tend to spend more of their incomes on essentials like food. This final catch-net for the economy does not redistribute wealth back to the developing country consumers, at least initially, but rather increases the consumption propensity among low consumers.

Disaggregating aggregate demand helps to show whether the downward spiral could occur. Developing country exports to developed countries include articles like soft toys and other goods which would be considered luxuries in much of the world, so the risk of a gap between world demand and world supply of goods is increased.

Ignoring the exact composition of aggregate demand, the question can be restated more broadly and quantitatively: what global marginal propensity to consume will support optimal growth? The requirement is that investment is as high as possible, consistent with maintaining its productivity and with all goods produced being sold. The first part of the requirement is that the supply side of the economy makes as much as possible, and the second part is that the demand side wants it. There are implicit assumptions about market operation and investment incentives built into the requirement, but I haven't stated them.

Thursday 13 November 2008

In praise of econometrics

Econometrics is the science which analyses economic quantities. Economic models are often abstract, and econometrics is the means of testing whether they bear any relation to the real world. Sometimes one reads about a glut of economists in developing countries and a shortage of scientists, and economics itself occasionally presents results saying much the same thing. Econometrics, with claims to be a science and having concrete applications, is a way of making economics more useful.

The basic methods of econometrics, such as least squares analysis, are used in disciplines such as biology and chemistry, where their use probably predates their use in economics. Although some methods of econometrics are specifically intended for economic application, much new research in econometrics has potential spillover benefits for other sciences as well.

So if someone can tolerate the maths - which is not overly demanding, and can be as simple or as advanced as one likes - wants to pick up transferable scientific skills, and wants to answer the "glut of economists" charge, econometrics may be for them.

Sexual equality in Burundi and Rwanda

Among the more unusual features of Burundi and Rwanda is women's prominence in public life. The countries had women in prime ministerial positions in the early 1990s, making them pioneers of female political participation in Africa, and women comprise half of the current Rwandan parliament, a world record. Their participation does not seem to be associated uniquely with any political party or donor pressure.

Private life, at least in Rwanda's capital Kigali and the southern town of Butare, also seems to be characterised by relative freedom for women, who are visible in trade and in employment at junior and managerial levels. Local and foreign women can be seen travelling alone on public transport and on the streets without evident continuous harassment, although I may have missed it. The relative sexual freedom in the countries also seems to be enjoyed by gays, who are reportedly not criminalised in Burundi, unlike in much of Africa. I am unsure about their status in Rwanda.

I do not know why sexual rights are relatively advanced in the two countries. Catholicism is the main religion, but one can walk around the cities without seeing any religious symbols at all, so perhaps religious strictures on women do not apply as strongly as elsewhere. The countries were historically occupied by German and then Belgian colonists, whose legal enforcement of their moral codes may have been less thoroughgoing than in British-controlled lands, or who may have been less preoccupied with sexual matters.

Possible means of classifying estimation methods

My Monday post called for a classification of estimation and testing methods according to an exhaustive set of performance criteria. I have thought a little about it over the last few days, and the following features of statistical analysis may help to make such a classification viable:

1. The suitably normalised sum of independent identically distributed variables with finite variance tends to a normal distribution. This result is the Central Limit Theorem, and versions of it apply to more general sets of variables and random series (see the sketch after this list).
2. Most estimation methods can be represented by the Generalized Method of Moments.
3. There are maybe half a dozen genuinely different ideas in mainstream econometrics, like minimisation of the expectation-observation gap, looking at patterns of residuals, and spectral analysis.
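
As a quick illustration of point 1, here is a small simulation sketch (Python; the choice of distribution and sample sizes is arbitrary and purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    # 100,000 replications of the sum of 30 independent U(-1,1) draws;
    # each draw has mean 0 and variance 1/3, so the sum has variance 10
    sums = rng.uniform(-1, 1, size=(100_000, 30)).sum(axis=1)
    normalised = sums / np.sqrt(30 / 3)
    print(normalised.mean(), normalised.std())  # close to 0 and 1
    # A histogram of normalised would look close to a standard normal curve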

So the complete classification could be based on the limited number of combinations of these different features. The founding GMM proofs, which combine generalised estimation methods with asymptotic analysis of variables converging to normality, are a step towards the goal - I have praised the GMM in past posts.

IMF projections on African growth in the global economic downturn

The IMF has published its projections for Africa during the global economic downturn. It anticipates 5.5 percent growth this year, and 5.1 percent growth next year.

The rates are still quite high, although lower than developing Asia's. The difference between previous and current IMF forecasts is, however, much larger for Africa than anywhere else, with downward revisions of 0.6 percentage points this year and 1.2 percentage points next year. That is to say, Africa's expected growth has been marked down by more than anywhere else's in absolute terms.

Monday 10 November 2008

New European diplomacy in the DRC and Rwanda

The French and UK governments are acting in concert over the recent fighting in the Democratic Republic of Congo, with their foreign ministers touring the region in an attempt to bring diplomatic pressure to bear on the belligerents and other concerned parties, including Rwanda, which has shared goals with one of the warring groups. The show of European unity is notable, as Rwanda has strongly aligned itself with the UK and against France, to the extent of recently changing its official national language; it also receives large financial support from the UK.

The tone of presentation in the UK media has also shifted, with Rwanda subject to more negative reporting than previously for its role in the conflict. There has been a relative suspension of criticism of France's regional role, even though France's acts in the 1994 civil war and ethnic slaughter in Rwanda have often been excoriated by UK commentators. Incidentally, the 1999 UN report into the 1994 events, in its discussion of public statements at the United Nations, makes for a much less comfortable comparison between the UK and France, or even the UK and US.

The change in recent weeks is the attention paid to the present DRC conflict, rather than the conflict itself. Hopefully the diplomacy will bring some results.

Criteria for judging estimation procedures

There are many criteria for judging how good an estimation procedure is: bias, consistency, speed of convergence, behaviour under misspecification, applicability, ease of use, and so on. It would be nice if someone pooled the criteria values for recently emerged procedures like the GMM system and, even better, produced an overarching criterion which embedded the others, including ones not yet devised. A similar pooling for the associated test statistics would be good, too.

If your industrial policy is not getting international technology, it isn't working

The more I read about international technology transfer, the more important it seems for economic development. My weekend reading was a review of policies towards technology transfer in developing countries. Procedures like encouraging the licensing of technologies to local firms, permitting foreign involvement in firms up to fifty percent, and promoting joint ventures are plausibly good means of accelerating transfers to local firms, subject to certain conditions, and have been widely practised in the East Asian tiger economies.

The supporting empirical work, generally undertaken at company level, is still emerging and nuanced, but indicates that international involvement aimed at getting technology can increase productivity noticeably. The policy works best if local education is good, domestic companies can form to take advantage of spillovers, and intellectual property rights are strong enough that multinational companies are not frightened off.

Coupled with macroeconomic evidence, some posted here on Great Lakes Economics, that technology improvement affects growth to a similar degree as capital and educational accumulation, the message to developing countries is: if your industrial policy is not getting international technology, it isn't working.

Testing for stability of the AR(1) autoregressive parameter across panel data groups

Previous posts have looked at the GMM estimation of the AR(1) process

y(i,t) = a(i)*y(i,t-1) + zero-mean error

where i is a group indicator and t is time, and the estimation assumes that a(i) is a constant across groups. I showed that GMM estimators tend to estimate a value for a near the top of the a(i) range.

Testing for equality of subgroup a parameters using the Chow test is therefore misleading, in that what is compared is two parameters each estimated near the top of its subgroup range. The Sargan and Hausman tests may identify the misspecification, since they examine residual patterns, but as all groups are pooled in their testing (from memory) they may not be very powerful: some groups will exhibit positive serial autocorrelation in their residuals and others negative, as previously shown on this site.

Here is a test which should work. Perform the GMM estimation, then estimate an AR(1) on the residuals of each group by OLS. Under the null of a(i) constancy, the estimated residual AR(1) parameters should be asymptotically zero mean and normally distributed, with variances that can be estimated consistently. Normalise each to N(0,1), then sum their squares to get a statistic with a chi-squared distribution on N degrees of freedom, where N is the number of groups. Reject a(i) constancy if the test statistic is too large at a chosen level of chi-squared significance.
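
Here is a sketch of how the test might be coded (Python; a simple pooled least-squares step stands in for the GMM estimation, and all names are illustrative):

    import numpy as np
    from scipy.stats import chi2

    def a_constancy_test(y, alpha=0.05):
        """y: panel array of shape (N groups, T periods)."""
        N, T = y.shape
        # Stage 1: pooled AR(1) estimate of a (stand-in for the GMM step)
        x, z = y[:, :-1].ravel(), y[:, 1:].ravel()
        a_hat = (x @ z) / (x @ x)
        resid = y[:, 1:] - a_hat * y[:, :-1]      # residuals, group by group
        # Stage 2: per-group AR(1) on the residuals, normalised towards N(0,1)
        t_stats = []
        for e in resid:
            u, v = e[:-1], e[1:]
            rho = (u @ v) / (u @ u)               # OLS AR(1) coefficient
            s2 = np.sum((v - rho * u) ** 2) / (len(u) - 1)
            t_stats.append(rho / np.sqrt(s2 / (u @ u)))
        # Stage 3: the sum of squared N(0,1) variables is chi-squared on N d.o.f.
        Q = float(np.sum(np.square(t_stats)))
        return Q, Q > chi2.ppf(1 - alpha, df=N)   # True means reject constancy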

Monday 3 November 2008

Literature reviews - keeping them manageable

It can be overwhelming if, when preparing a report or paper on a new topic, one looks up the available literature and finds several hundred or thousand papers on a similar theme. Here's what I do to keep the literature I assess relevant and manageable. It might have some merit.

Before I start, I pose the basic research questions and split them several ways according to likely avenues of interest. Then I look at the probable structural contents of my research paper - recall that I previously mentioned that many papers take the rough form of

{Questions / importance / lit review / plan / theory / specification / empirical specification / inputs / outputs / interpretation / conclusion}

If the questions have been specified, then we may have a rough idea of the importance, theory, specification, empirical specification, and inputs, so these can be noted down in a sentence each.

Then comes the literature survey. Academics and students often have access to university libraries or online literature sources such as Science Direct or Google Scholar (the latter being free), and these can be used to find relevant literature. Let's say there are two hundred papers whose titles look a bit like our paper's title. We can immediately abandon those which ask entirely different questions from ours, and we can also abandon those which differ from ours in the content of most of the structural parts. So if our paper is empirical, with a specification, empirical method, inputs, and so on, we might jettison papers which are exclusively theoretical.

Hopefully this will bring the literature down to a manageable size, numbering in the dozens. The contents of each paper can then be compared against the structural elements, perhaps in a grid on paper or in one's mind. Many papers agree on all of the contents except, say, the data input used, and these papers will show a high degree of similarity in the grid. The grid organisation observed once all the papers have been reviewed should lend itself to an overall literature review.
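
A toy version of the grid, with invented placeholder papers and entries:

                Question   Theory      Method   Data
    Paper A     same       same        GMM      African panel
    Paper B     same       different   GMM      OECD panel
    Paper C     narrower   same        OLS      single country

Papers A and B would sit close together in the review, while Paper C might anchor a separate paragraph.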

The above procedure is mechanical, and may replicate the more intuitive approach of someone completely familiar with the literature, who can draw up a literature review almost without thinking. The procedure has a further advantage: it encourages the author to learn new theoretical and analytical tools if they occur repeatedly in the literature, and by construction these will be relatively few in number.

Technology diffusion literature

There is a large literature on technology diffusion across countries, and what factors facilitate it. Much of the empirical literature concentrates on microeconomic factors, that is to say, why individual companies adopt foreign technology. So the variables studied relate to perceived ease of adoption, perceived utility in use, and so on. Estimations often rely on surveys of firms. There is an apparently smaller literature on macroeconomic influences, but the work is still important, as governments find it easier to implement changes in interest rates, for example, than to effect cultural shifts in attitudes to risky new technologies.

Macroeconomic determinants of technology diffusion

I am looking at the macroeconomic determinants - things like education and international factors - that influence the spread of technology to a country. There are indications from other people's and my own work that technology diffusion is as important for growth in developing countries as capital or educational accumulation.

I set up a preliminary model by looking for available macroeconomic proxies for likely determinants of transfer to a country, whether microeconomic or macroeconomic in origin. Here are some candidates:

- Lags in technology per capita in a leading technology country
- Lags in technology per capita in the country itself
- Lags in telephones or other reference technology per capita
- Lags in saving per capita
- Lags in education per capita
- First lag in GDP per capita
- First lag in interest rates
- First lag in openness
- First lag in government size
- Population

The first five variables may be lagged many times, so as to capture the effect of experience of and exposure to the technology. Finding a decent panel data estimation method for multiple lags might be a problem, however. A multiply lagged equation can be brought into the form of a first-order vector autoregression, and vector versions of the main GMM estimators applied to it, but the conditions will be very stringent.
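
To sketch the rearrangement (standard companion-form algebra, not taken from the post): the twice-lagged equation

    y(t) = a1.y(t-1) + a2.y(t-2) + e(t)

can be stacked as Y(t) = A.Y(t-1) + E(t), where Y(t) = (y(t), y(t-1))', E(t) = (e(t), 0)', and

    A = | a1  a2 |
        |  1   0 |

so estimators stated for a first-order system can in principle be applied to the stacked form, subject to the stringent conditions mentioned.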

Thursday 30 October 2008

MA teaching next year

I teach economics on a Master's course at the University of Westminster in London. The course, MA Economic and Governmental Reform, runs from September to September. We are presently recruiting for next year's intake.

The course requirements are listed on its website, although there is some flexibility. Unavoidable ones are:

1. Reasonable English (or things won't make sense)
2. A first degree with some relevance to the topic, or a degree and relevant work experience
3. A job, or potential job, in government (people from NGOs have historically also performed well)
4. Willingness to work hard (or things will not be enjoyable)

African applicants are most welcome and have good course records. Information on the course and on obtaining funding is on the website. The course, like most in the UK, is expensive (£10,000), so students usually apply for scholarships first. Early application is recommended.

Bretton Woods revisions

There are plans for revision of the Bretton Woods institutions (the IMF and World Bank) following the difficulties encountered by developed countries in the recent credit crunch. A conference is planned for the end of the year, I believe.

The Bretton Woods institutions are actually subject to less criticism today than in the past. Their senior personnel are less controversial and more technocratic, their predictions have been lauded, and they have acted on - perhaps superficially - many of the criticisms lobbed in their direction over the last two decades.

The difference between the international response to developing country problems in the 1980s and to developed country problems today is marked. In the 1980s, the Bretton Woods institutions were inflexible and developing countries changed their policies; today, developed countries get the Bretton Woods institutions to change. It doesn't seem to me that developing countries were any less or more responsible for their predicaments than developed countries are today. The difference lies in the power balance.

Ever thus, I suppose, and probably the 1940s BW structures are obsolescent, but the difference between then and now is still illustrative.

A twin to Feldstein-Horioka?

The Feldstein-Horioka paradox is that savings rates and investment rates in a country tend to be correlated even when the country has totally open borders for capital flows. One would expect capital to move to wherever the highest rates of return are, so why should there be a strong relation between national savings and investment within the country? The debate was active in the 1980s, and various papers have explained the paradox to a degree, but I do not know whether it was ever fully answered or whether people just moved on, leaving it incompletely resolved.

I recently came across an analogous empirical observation in open economy macroeconomics, where international exporters tend to price their goods in foreign currency when exporting. It is not obvious that they would do so, as they might prefer to price in their own domestic currency and accept fluctuations in demand rather than fluctuations in the exchange rate returns from foreign currency pricing, and there has been some theoretical debate as to which would be more logical. The published empirical evidence indicates, as it does with Feldstein-Horioka, that national considerations predominate and foreign currency pricing applies.

The evidence is preliminary, but still fascinating in terms of describing the continued importance of the nation state, which often works uneasily with capitalism's operation.

Environment and its analogy to money

I mentioned a while back that environmental economic analysis should be as sophisticated and innovative as the Keynesian and monetarist revolutions were. I am dissatisfied with approaches that work out the costs of future environmental damage and calculate their discounted value, because they seem inadequate for analysing or responding to the problem of global warming in particular, which could threaten life and civilisations.

Reading the sales pitch for the WWF Living Planet Report yesterday (here - downloading the full report into browsers instead of saving seems to lock them up, so beware), I came across the phrase

"[the possibility of a] financial recession pales in comparison to the looming ecological credit crunch"

which is true, to such an extent that the effects of recession are scarcely of the same expected order of magnitude. But the economic implications of the phrase - which seems rhetorical, rather than analytical - made me sit up.

Money is not a good like other goods, because it is present in almost all markets for goods. When a supply and demand diagram is drawn for, say, peanuts, the missing variable which is implicitly used but not explicitly included is money. No modern macroeconomic model could exclude money determination alongside, and separate from, aggregate goods determination. Money's properties are different, and critical for market operation.

So it is with the environment, most clearly through global warming. The effects of global warming will pervade almost every future market in goods, and it should be analysed within a simultaneous approach to macroeconomics, alongside aggregate goods and money. I do not know what its properties are, but its economic characterisation is certainly distinct from the other two. The recognition of its permeation is a first step towards finding the characterisation.

I think the abstract representation is that goods like environmental use lie at the base of the economic pyramid on which everything else is built, and they move upwards to support production and consumption of other goods. The manner in which they do so is idiosyncratic to the good, so carbon dioxide release is generated, and affects other production, differently from other base goods like money, water, or oil. Oh, to be able to carry this analysis through!

Sargan and Hausman tests in growth models

I have a low success rate in finding good instruments for the GMM system and GMM difference estimation methods in growth models. These methods assume that certain equations connect the parameters and observed values in a model. Even if the data seem to fit a model reasonably well, the equations are more demanding than a good visual fit, and subsequent testing can reject the application of the method to the model. It is important to note, as I have in recent posts, that what is being tested is not just whether the model is acceptable, but whether the method conditions and the model jointly apply.

Why bother with the methods, if they impose extra conditions which are not intrinsic to the model? Well, less demanding methods such as OLS may estimate the parameters incorrectly, so even though the model looks acceptable under such a method, the output is not good. What would be best is a method with low intrinsic demands beyond the model, and which produces accurate results; for growth models, such a method does not seem to have been devised yet.

And so to instruments. These are observed data used in the equations alongside the parameters, and can be just about anything. We can test whether the instruments satisfy the equations, but generally I find that they are unlikely to. The rejection is probabilistic: any model can generate data which would satisfy the equations, but for most models the chance of getting such data is extremely remote. I try to ensure that the instruments would be compatible with the data around 90 percent of the time. The common tests are known as the Sargan and Hausman tests.
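
For a sense of what the Sargan statistic computes, here is a minimal sketch (Python, assuming homoskedastic errors and a two-stage least squares fit; variable names are illustrative):

    import numpy as np
    from scipy.stats import chi2

    def sargan_test(y, X, Z):
        """y: (n,) outcome, X: (n,k) regressors, Z: (n,m) instruments, m > k."""
        n, k = X.shape
        m = Z.shape[1]
        Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)            # projection onto instruments
        b = np.linalg.solve(X.T @ Pz @ X, X.T @ Pz @ y)   # 2SLS coefficient estimate
        e = y - X @ b                                     # 2SLS residuals
        J = (e @ Pz @ e) / (e @ e / n)                    # Sargan statistic
        p_value = 1 - chi2.cdf(J, df=m - k)               # m-k overidentifying restrictions
        return J, p_value

A p-value above 0.10 corresponds roughly to the 90 percent compatibility standard discussed above.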

The problem is frequent among researchers. The theoretical literature reports how difficult it is to find instruments which fulfil the conditions, and many applied researchers avoid reporting the Sargan or Hausman tests at all when using the GMM estimators. I looked at two of the few empirical growth research papers which report their statistics in full and discuss their instrument selection in depth. Neither of them ensures, even across a small number of specifications, that the 90 percent condition is met; in fact even a 95 percent condition is not met.

The rejection indicates misspecification of the model-method. It is not surprising, as the underlying growth models and method conditions are linearisations of the true, complex, and probably unknowable economic generators. So I think that Sargan and Hausman rejection at 90 percent is not the end of model-methods.

Monday 27 October 2008

ANC potential split

Several high profile members of the ANC, the ruling party in South Africa, have seceded from the party and are looking at setting up a new political opposition. The events follow internal disputes which saw the replacement of its leader and president, and the resignation of much of the cabinet.

The split may be viewed in terms of a division which often happens in modern capitalist economies, where two major parties form, broadly aligned with the right and left sides of the ideological divide in their countries. The division occurs in all major Western powers, for example, and even arises frequently in state capitalist, directed economies.

Although there are some exceptions, many of the secessionists and critics are identified with the more pro-business wing of the party, and many of the loyalists with the more pro-redistribution wing, all relative to South African norms. The events may be interpreted through personality clashes, tribal politics, or responses to corruption, all of which have featured in South Africa in recent times. But the persistence of the right-left split across so many countries suggests that another occurrence in South Africa is at least partially driven by the same factors as elsewhere.

Here are two final observations. First, economic influence on political structures is always impressive to observe; second, the outcome could strengthen South African democracy and help to calm the fears of many of its citizens.

The natural integration of African economies

Last Wednesday in Kampala, the leaders of three African trading blocs agreed on a mutual free trade zone. They also agreed to work towards further economic integration in the near future.

The integration is a natural outcome of Africa's development. As incomes rise, the demand for goods beyond subsistence production rises, and regional production offers lower transport costs than more distant production. Sufficiently large demand encourages international specialisation and consequently lower costs. Greater political stability reduces the economically hazardous effects of international conflict. And trade integration gives increased weight in negotiations at world trade bodies.

Thursday 23 October 2008

Instrument selection is an extra modelling equation

I mentioned in a September post that, when a growth model is prepared, a well-designed empirical estimation method can still find adequate coefficient estimates despite the incompleteness of the growth model. In a sense, the model is the equations plus the estimation method.

Here is another example. Instrumental variable estimation is a way of avoiding biases in estimation if the determinant variables are correlated with the error term. Ordinary Least Squares estimation gives biased estimates of a in the equation y=a.x+error if x is correlated with the error term.

OLS can be characterised in terms of the orthogonality condition E(x.error)=0. Then the estimate of a is x.y/x.x, and inserting the value of y and taking expectations shows this formula is unbiased if the orthogonality condition holds. The corresponding orthogonality condition for instrumental variables is that E(v.error)=0 where v is the instrumental variable, and the estimator is v.y/v.x.
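
A small simulation makes the contrast vivid (Python; the data-generating process and numbers are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    n, a_true = 100_000, 0.5
    v = rng.normal(size=n)             # instrument: moves x but not the error
    u = rng.normal(size=n)             # error term
    x = v + 0.8 * u                    # x is correlated with the error
    y = a_true * x + u

    a_ols = (x @ y) / (x @ x)          # biased, since E(x.error) != 0
    a_iv = (v @ y) / (v @ x)           # consistent, since E(v.error) = 0
    print(a_ols, a_iv)                 # roughly 0.99 versus 0.50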

The orthogonality approach is neat, but it is also helpful to view the estimator as the projection of y on v divided by the projection of x on v. The derivation of instrumental variable estimators is given in many econometrics textbooks, often in terms of estimation when simultaneous equations give rise to a single observed relation. This happens when two variables can interact in more than one way, so that the error term becomes correlated with the observed determinant variable.

Selecting an instrumental variable removes the bias and picks out one form of interaction between the variables: the form which acts through the instrumented variable. So changing instruments can not only alter the biases on an estimated coefficient, but also change what is being modelled. It is like introducing a new equation into the model.