Thinking

Testing Krugman’s It Theory of Global Polarization

Krugman’s paying attention to global polarization again. His It theory is a sort of zero-one law of modernization:

One thing is clear: at any given time, not all countries have that mysterious “it” that lets them make effective use of the backlog of advanced technology developed since the Industrial Revolution. … Once a country acquires It, growth can be rapid, precisely because best practice is so far ahead of where the country starts. And because the frontier keeps moving out, countries that get It keep growing faster. … The It theory also, I’d argue, explains the U-shaped relationship Subramanian et al find between GDP per capita and growth, in which middle-income countries grow faster than either poor or rich countries. Countries that are still very poor are countries that haven’t got It; countries that are already rich are already at the technological frontier, limiting the space for rapid growth. In between are countries that acquired It not too long ago, which has vaulted them into middle-income status, but are able to grow very fast by moving toward the frontier.… and rising inequality within Western countries means that if you look at the global distribution of household incomes, you get Branko’s elephant chart.

The It theory implies that the growth rate of per capita income is a quadratic function of log per capita income: middle income countries that have It ought to grow faster than low income nations that don't, and also faster than high income nations that, although they have It, are too close to the technological frontier to grow rapidly. This has a certain plausibility, since middle income countries seem to be the fastest growers while advanced economies and the least developed nations tend to grow slowly. Let's see whether it accords with empirical reality.

If the It theory holds, then the coefficients in a linear regression of growth rates on a quadratic function of log per capita income should be significant and bear the right signs. More precisely, the quadratic has to be concave down (a hump shape): the coefficient on the linear term ought to be positive and the coefficient on the quadratic term negative. Is that what we find?

We test this prediction against the Maddison dataset. We begin by running rolling regressions of the 5-year moving average of the growth rate of per capita income on a quadratic function of log per capita income. Figure 1 displays the t-Stats (the ratio of a coefficient to its standard error) over time. Interestingly, Krugman's It theory seems to hold for two periods, 1973-1986 and 2003-2012. But it seems to reverse in the 1990s. Is this because of the Soviet collapse?
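The rolling test amounts to fitting a quadratic by OLS in each window and reading off the t-statistics. Since the Maddison file isn't bundled here, the sketch below simulates a single cross-section with a built-in hump shape and checks the signs; `quadratic_tstats` is what each rolling window would apply to the real data.

```python
import numpy as np

def quadratic_tstats(log_income, growth):
    """OLS of growth on [1, x, x^2]; return t-stats of all three coefficients."""
    X = np.column_stack([np.ones_like(log_income), log_income, log_income**2])
    beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
    resid = growth - X @ beta
    dof = len(growth) - X.shape[1]
    sigma2 = resid @ resid / dof                 # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)        # OLS covariance matrix
    return beta / np.sqrt(np.diag(cov))          # [intercept, linear, quadratic] t-stats

# Simulated cross-section in which the It theory holds by construction
rng = np.random.default_rng(0)
x = rng.uniform(6, 11, 500)                      # log per capita income
g = -0.30 + 0.07 * x - 0.004 * x**2 + rng.normal(0, 0.01, 500)
t = quadratic_tstats(x, g)
print(t[1] > 0, t[2] < 0)                        # linear positive, quadratic negative
```

On the real data one would slide this over country-year windows and plot `t[1]` and `t[2]` against time, which is what Figure 1 shows.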

It_test.png

To check this, we exclude the countries of the former Soviet Union and rerun our regressions. Figure 2 displays the estimates. That attenuates the problem. The nineties reversal is no longer statistically significant. Yet the overall pattern is unchanged. Why were middle income countries growing significantly faster than low and high income nations in these two periods but not otherwise?

It_test_exUSSR.png

The next figure displays the R^2 of the regressions. We see that the relationship really breaks down in 1986-2002. Why? We must dig deeper.

rsrds.png

In order to get to the bottom of this we fit a linear mixed effects model. We restrict the sample to the past 50 years and fit the model,

\text{growth}_{i,t}=\alpha+\beta_1\log(\text{PCGDP}_{i,t})+\beta_2\log(\text{PCGDP}_{i,t})^2+u_i+v_t+\varepsilon_{i,t},

where we allow for random effects for country (u_i) and year (v_t). The results are pretty robust. With t-stats close to 10, both fixed-effect coefficients are extremely significant and bear the right sign.
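A minimal version of this fit can be sketched with statsmodels. The panel below is simulated (40 "countries", 50 "years", country random intercepts, a concave-down mean); for simplicity it fits random intercepts by country only, whereas the model in the post also includes a year effect. All variable names are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel with country random intercepts and a hump-shaped mean
rng = np.random.default_rng(1)
rows = []
for c in range(40):
    u = rng.normal(0, 0.005)                     # country random intercept
    for t in range(50):
        x = rng.uniform(6, 11)                   # log per capita GDP
        g = -0.30 + 0.07 * x - 0.004 * x**2 + u + rng.normal(0, 0.01)
        rows.append((c, t, x, g))
df = pd.DataFrame(rows, columns=["country", "year", "lpcgdp", "growth"])
df["lpcgdp2"] = df["lpcgdp"] ** 2

# Linear mixed-effects model: random intercept by country
# (the post's specification adds a year random effect as well)
m = smf.mixedlm("growth ~ lpcgdp + lpcgdp2", df, groups=df["country"]).fit()
print(m.params["lpcgdp"] > 0, m.params["lpcgdp2"] < 0)   # right signs
```

With the real Maddison panel, the analogous coefficients are the ones reported as "extremely significant" above.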

lme

Do the results continue to hold if we introduce fixed effects for income group ("class"), i.e., dummies for low, middle, and high income groups? If the coefficients remain significant and keep the right sign despite the inclusion of income-group dummies, that would imply that the pattern also holds within income groups.

lme_fe

The results are very interesting. Instead of attenuating, the quadratic coefficients increase slightly in magnitude. But the fixed-effect coefficients for the low income group ("class_3") and especially the middle income group ("class_2") bear the wrong sign. This could be because we are controlling for income (and squared income). But that is not the case. Even dropping the income variables, and with or without random effects, the coefficients of the low and middle income group dummies remain resolutely negative and significant. Indeed, high income nations averaged a growth rate of 2.15% over the past fifty years, compared to 1.71% for the middle income group and only 0.72% for the low income group. So it seems that our intuition was wrong.

Krugman appears to be right on average if we stick to the past half century. There is indeed a statistically significant cross-sectional relationship between income and growth whereby growth rates are a quadratic (and concave downward) function of log per capita income. But the reality is far more complex than that simple picture would suggest. The next graph shows the time-variation in the quadratic relationship since 1800. There is substantial systematic time-variation in the relationship since it emerged. Moreover, we can see how truly novel this catch-up business is. The series looks stationary around zero all the way until the 1970s. In English, there was hardly any catch-up (more precisely, excess middle income growth) before the last quarter of the twentieth century. Even the two brief periods of relative convergence bookend a decade of major divergence. The graph thus testifies to both the late arrival of convergence and the failure of the mid-century dream of Modernization.

lrit.png

The figure incorrectly states that the sample is restricted to the low income group. It is not. This is the full sample. 

The big question thrown up by the present investigation is the dramatic pattern of convergence, divergence, and convergence over the past two generations. What explains this pattern? Could it have something to do with the global financial cycle? The next figure displays the global financial cycle from Farooqui (2016).  It is also double-humped like the graphs above (also reproduced below), but the two seem to be out of sync with each other. While the financial boom of the late-1980s was gathering pace, the middle income premium in growth was collapsing. The second cycle is more congruent. So the evidence for a connection to the global financial cycle is mixed.

fincyclersrds

This requires more work. But we have the basic picture for now.


Postscript. I like scatterplots. They let you quickly examine the strength of any hypothesized relationship. So I plotted actual average growth rates in the modern period against those predicted by the fitted model. The fitted model is,

\text{growth}_{i,t}=-0.3028+0.0710\times\log(\text{PCGDP}_{i,t})-0.0039\times\log(\text{PCGDP}_{i,t})^2.

Rather than compare predicted growth rates with actual growth rates by country-year, we perform a minor sleight of hand and compare them with average growth rates over the modern era. The evidence that emerges is strong: the quadratic model does a good job of predicting average rates of growth. Here we compare the predictions with the average rates of growth achieved since 1960. The estimated correlation coefficient is highly significant (r=0.308, p<0.001). The picture is similar if we start the clock in 1950, 1970, 1980, 1990, or 2000. Krugman is really onto something.
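One quick sanity check on the fitted quadratic is its turning point: setting the derivative to zero gives the log income at which predicted growth peaks. With the coefficients copied from the equation above, the hump peaks just above 9.1 log points, i.e., just under $9,000 per capita (in whatever income units the regression used), squarely in middle income territory.

```python
import numpy as np

# Fitted quadratic from the post
b0, b1, b2 = -0.3028, 0.0710, -0.0039

def predicted_growth(pcgdp):
    """Predicted growth rate as a function of per capita GDP."""
    x = np.log(pcgdp)
    return b0 + b1 * x + b2 * x**2

# Predicted growth peaks where b1 + 2*b2*x = 0
peak_log = -b1 / (2 * b2)
print(round(peak_log, 2))                  # log income at the peak, about 9.1
print(float(np.exp(peak_log)))             # just under 9,000 in income units
```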

actual_predicted.png

Post-postscript. Actually, it is not so simple. The even simpler Biblical model does even better. [Matthew 13:12. For whosoever hath, to him shall be given, and he shall have more abundance: but whosoever hath not, from him shall be taken away even that he hath.]

Matthew_model.png

Thinking

An Irredeemable Miscalculation?

obama-mbs-gcc-us-saudi-salman

The grim details of the extrajudicial torture-execution of the Post columnist are now clear. The most astonishing fact to emerge is that MBS did not even bother to cover his tracks. Instead of sending expendable killers-for-hire to maintain plausible deniability, the standard operating procedure for kinetic intelligence operations, MBS sent men from his own personal security detail to assassinate the journalist.

2017 imprisoned

Journalists imprisoned. Source: CPJ.

Killing journalists is not a new trick for mafia states. On average a journalist is killed every week, according to the Committee to Protect Journalists. Some 44 have already been killed this year, although that number does not include Khashoggi.

Jkilled.png

Of the 1,323 journalists reported killed since 1992, 912 have been killed since the Iraq War began in 2003, including 159 Iraqi journalists and 110 Syrians. Outside these two warzones, the greatest rate of scribe killing is in the Philippines, followed by Algeria, Pakistan, Russia, Somalia, Colombia, India, Mexico, and Brazil. The next graph shows the notorious double-digit scorers in the CPJ database.

JK3.png

We can think of the number of scribes killed as a measure of the degree to which a state formation approximates a mafia state. This is not exactly right, because the 1,323 total scribe killings since 1992 include some 131 involving criminal gangs, as when the Chicago mafia gets rid of a pesky reporter with the understanding of the local judge and politician. More generally, only 1,083 killings can be attributed to specific actors. The next table shows the breakdown. It is clear that the vast bulk of the killings are ordered by political or military authorities.

Attributed killings of journalists.
Killer          Number   Share
Government         194     18%
Military           242     22%
Political          436     40%
Paramilitary        62      6%
Mob                 14      1%
Criminal           135     12%
Total            1,083    100%
Source: CPJ.

So it cannot possibly be the case that the prestige media is up in arms about the mere killing of a journalist. Why then does it look like MBS is having his feet held to the fire? And why does it look like his days are numbered? Are they? What precisely can be done about MBS, and more generally, the Saudis?

What the extraordinary opprobrium in the Western press reflects, I think, is Said’s Orientalism in reverse. What was so egregious about the Khashoggi business was not that he was a scribe. It was that he was a columnist at the Washington Post. Indeed, he probably thought that his association with a prestige paper in the United States made him bulletproof. Put bluntly, the implicit norm said: You can kill a journalist in your backyard, especially one writing in the vernacular. And if you’re going to kill a prominent critic who is known internationally, you better have plausible deniability. But you can’t kill a card-carrying member of the Western Press, especially not without even a fig leaf of deniability. So MBS miscalculated badly. It may be because he has always lived in the Kingdom and has no first-hand experience of the rigidity of the liberal-democratic discourse in Western civil society.

Now what? What is the West to do with MBS? Must he go? Oh, it can be arranged. And if you are thinking of European dependence on gulf crude, there are operational solutions to the problem of stabilizing oil prices. There is no need to occupy Saudi cities and the vast bulk of Saudi territory. All you need is to secure a small portion of the eastern province. US forces can secure the oil fields in the gulf with a light force, as a joint Anglo-Saxon plan had it in the 1970s, and as someone wrote anonymously in Harper's at the time of the Arab embargo (I had assumed Kissinger, but Grey Anderson tells me that Edward Luttwak admitted in 2004 to having authored the article). Since all of Saudi oil sits under a zone of Shiite predominance, the political problem can be solved by working with Iran, just as the United States already has to in the arc of weakness that stretches from the gulf to the Levant. The micro-oil monarchies of the gulf are already under US protection. They would have to move closer. We cannot allow the Saudis to mediate between the United States and the Trucial States.

12QTR-Gulf-Oilfields-Labelled-m

But knowing that the United States can secure the oil fields without putting many boots on the ground is an insurance policy, not the proposed strategy. The Policy Tensor has been suggesting for a long time that the United States ought to follow a more even-handed policy in the gulf. Indeed, it would not hurt to ditch Saudi Arabia for Iran. To put it bluntly, Iran dwarfs the gulf region in warmaking potential. The United States shot itself in the foot by destroying the garrison state built by Saddam. The result of the US debacle is that Iran is now the most influential power in southwest Asia. Iran understands that the US is capable of and willing to confront Iranian foreign policy in the gulf region and in southwest Asia as a whole. But it is also true that the United States has little choice but to work with Iran on regional questions. I think it is obvious that rolling back Iranian influence in Syria and Iraq is a fool's errand. Iran is a natural ally of the West in the fight against salafi jihadism. The US needs a working relationship with Iran; better still would be a genuine partnership.

I think there has been increasing cooperation in US-Iranian relations; each has gained an increased appreciation of the other's strength by fighting side-by-side against Isis. What needs to be recognized now is the congruence of interests between Iran and the West. Because Iran is the potential regional hegemon of the gulf region, it is best to have good working relations. It makes for stable relations to have potential regional hegemons invested in the status quo. As Huntington observed, the world is uni-multipolar. The United States is the only state in the world that can project power system-wide. But geopolitics is regional. In order to run the different regions of the world, at the minimum, the United States has to reach an understanding with China in east Asia, India in south Asia, Iran in southwest Asia, Russia across Eurasia, and Germany in Europe. This is particularly true under conditions of mutually-assured destruction.

[Figure: Arabian Plate showing general tectonic and structural features, Infracambrian rift salt basins, and oil and gas fields of Central Arabia and North Gulf area (usgs.gov)]

What requires particular attention is gulf terror finance. Whether or not we contain Saudi Arabia more generally, terror finance from the gulf has to stop. Frankly, this requires eyeballs and interdiction by law enforcement in international financial flows from and to the gulf. Congress should fund this right quick and subject enforcement to oversight. But the real problem with Saudi Arabia is not restricted to terror finance even in the narrow sense of the war on terror. For Saudi Arabia is the world capital of salafism. Thousands of little religious schools run by the Saudis dot the Old World from Kosovo to Indonesia, where every attendant is at risk of recruitment by salafi jihadists. These schools are the breeding ground in which salafi jihadism grows. More generally, the propagation of salafism is the principal driver of jihadism. If we are serious about the fight against salafi jihadism, we must arm-twist the Saudis to roll-back their global network of salafi madrasas.

These are all matters of elementary security policy for the United States. But should Saudi Arabia be contained? What exactly would containment entail? Sanctions? Air strikes? I do not believe any of that is required. A simple threat of US abandonment would be enough to concentrate minds in the Kingdom. For if abandoned by the United States, the Kingdom would be faced with a vastly stronger power across the gulf without any security solutions. My proposal is not to jump to containment yet.

If the West were to act jointly, it would inform the Saudi authorities—once the intelligence is verified—that MBS has to go. He would be persona non grata. If the Saudis want to hold on to him even though there are thousands of princes waiting in line for the throne, it will be awkward for a while. But the West could very well stand firm on this. MBS just cut it too close to the bone.

Whether or not the Saudis ought to be subjected to sanctions depends on whether or not they are willing to cooperate in a major reorientation of Saudi policy (on terror finance, the madrasa network, and MBS). It would be best if the Saudis marginalized MBS without US intrigue. Although even intrigue would be better than having to deal with Saudi Arabia without talking to its de facto leader for however long it takes the Saudis to come around.

Here’s how the unipolar world works. If there is a Nash equilibrium in international politics that the United States lays down and insists on, e.g., the Washington Naval Conference of 1922, then a stable order can be secured. But this does not mean that Europe does not have a say. Indeed, the Europeans could unilaterally contain MBS. By declaring him persona non grata and opposing this administration on gulf policy, Europe can prepare the ground for when adults are back in Washington. This is already underway in the sense that the Europeans are working with the Iranians to push back against the US reneging on the nuclear deal. Merkel would be wise to take this opportunity to extend the pushback to MBS.

When adults are back in Washington, one could move ahead in leaning harder on Saudi Arabia. But two things must be recognized. This is not just about Khashoggi and not just about MBS. This is above all an opportunity to reconsider Western gulf policy tout court. Why is the West containing Iran and arming the Saudis when core Western interests are closer to the former than the latter?


Markets

Musings on the Microstructure of the Market for Risk

margin-call

Margin Call (2011)

In closing the previous dispatch I offered that we may be missing a theoretical piece of the puzzle. Here I offer some musings on what sort of structure I think we need to get an even better handle on asset prices.

My understanding of the microstructure of the dealer ecosystem suggests to me that we have three kinds of players in the market for risk: the sell-side, the buy-side, and noise traders. US securities broker-dealers on the sell-side make markets by trading at quoted prices. They also provide funding for the trades, which consumes balance sheet capacity (the risk-bearing capacity of the sell-side relative to the scale of the buy-side, which I have argued is the right pricing kernel in intermediary asset pricing). Noise traders are needed to close the model. More on them later.

Balance sheet capacity is a joint function of the relative ease of funding in the wholesale funding market on the one hand and the market clearing price of risk in the over-the-counter derivatives market on the other. When the price of risk is low (ie when asset valuations are high) more funding can be secured against the same collateral than when the price of risk is high (ie asset valuations are low). This generates a dangerous feedback loop between the market price of risk and the ease of funding.

To be sure, default-remote bonds serve as collateral in the rapidly spinning rehypothecation flywheel because the stability of the flywheel requires the absence of default risk. The proximate cause of the GFC was the fateful introduction of private-label RMBS into the flywheel. And it was the great sucking sound of the wholesale funding market that generated the housing finance boom. Once debt burdens triggered a massive wave of defaults and credit risk reached the flywheel it tottered and shrank, but continued to spin rapidly in its shrunken state on public collateral. But the crunch of the wholesale funding market generated a massive seizure in the machine of global credit creation, sending a massive shockwave that propagated worldwide. Only those with autonomous financial systems insulated by thick regulatory firewalls and those too remote to have been penetrated by global finance managed to come out in one piece.

In the aftermath of the GFC, an intrusive enforcement regime of limits on bank leverage, balance sheet surveillance, risk-assessment, and other regulations has reduced the elasticity of dealer balance sheets. The sharply reduced risk-bearing capacity of the system is reflected in the breakdown of the iron law of covered interest-rate parity, volatility spikes, and the risk-on risk-off behavior of asset prices. Due to the upper bound on the leverage of global banks, the ease of funding has become a function of US monetary policy, with the result that the strength of the dollar has emerged as a barometer of the price of balance sheets. Indeed, the strength of the dollar is now priced in the cross-section of US stock returns.

With the dealers pinned down, fluctuations in the market price of risk can be expected to be driven by developments on the buy side. Investment strategies of large asset managers are variations on a small number of themes. Big institutional investors like pension funds and insurance companies (‘real money investors’ in the finance jargon) are bound by regulation and governed by similar investment philosophies to maintain asset allocations in certain definite proportions, which requires periodic and tactical rebalancing of their portfolios. When strategists speak of rotation in and out of asset classes, it is these real money investors that they usually have in mind. Also on the buy side are less constrained hedge funds, which make up for their smaller size ($3 trillion AUM in the aggregate) with tactical agility and a willingness to make lots of leveraged bets funded by the dealers. Somewhere between the two are leveraged bond portfolios like Pimco, which are interested in holding positions with ‘equity-like returns with bond-like volatility’ (Bill Gross: “Holy Cow Batman, these bonds can outperform stocks!“). That’s your buy side.

Then we have the noise traders. We can think of them as low-information small retail investors, or plainly speaking, the small fry whose herd behavior is driven by sentiment. They kick asset prices away from fundamentals by randomly bidding asset prices too far up or down, thereby generating positive risk premia that are then harvested by the big fish. More generally, the game is subtly rigged towards the house by the structural advantages of the dealers. In particular, privileged access to order flow information puts dealers in a position of tactical advantage. Apart from trading for the house, traders at dealer firms share order flow information (and therefore the information premium) with their networks on the buy side in exchange for a larger volume of trades with their attendant commissions. Moreover, since exposure to fluctuations in balance sheet capacity comes with a juicy risk premium, dealers and their counterparties in the market for risk enjoy higher risk-adjusted returns than the small fry even without exploiting order flow information. Furthermore, traders on the sell-side are not beyond generating handsome profits by pumping asset prices up and down or otherwise loading the dice. Although the real scandal is what is perfectly legal.


Markets

Stock Market Fluctuations Are Driven by Investor Herd Behavior

FT AlphaVille linked to an interesting blog post by Nick Maggiulli on Dollars and Data that examined long-run stock return predictability in terms of equity allocations. Nick shows that high allocations predict lower ten-year returns. Here’s a replication of the main result.

tenyear

The result must be taken with a pinch of salt. Is it a feature or a bug? The cause for concern is that overlapping regressions generate spurious correlations. There is good reason to be skeptical of the extremely high correlation estimate (r=-0.897, p<0.001). It likely reflects the medium-term cycle in Equity Allocation. (We use the same metric as Nick and the original blog post at PhilosophicalEconomics.) Econometrically, regression estimates rely on the assumption that the series is stationary (no detectable temporal patterns like trends and cycles), which is manifestly violated here. See next figure.

EQA.png

What is required for kosher statistical inference is to transform the series so that it is at least roughly stationary. The best way to do that is to difference the series. Here we look at changes in the natural logarithm (ie, compounded rate of return) of the SP500 Index and Equity Allocation. The two series are manifestly stationary and appear to be strongly contemporaneously correlated.
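The transformation is simple to sketch. The toy example below (all numbers made up for illustration) builds two trending, hence non-stationary, series that share underlying shocks, then log-differences them; the differenced series are stationary and reveal the genuine contemporaneous correlation.

```python
import numpy as np

def log_diff(series):
    """Compounded rate of change: first difference of the natural log."""
    s = np.asarray(series, dtype=float)
    return np.diff(np.log(s))

# Toy series: an index level and an allocation share driven by common shocks
rng = np.random.default_rng(2)
shocks = rng.normal(0, 0.02, 200)
index = 100 * np.exp(np.cumsum(0.01 + shocks))                      # trending level
alloc = 0.4 * np.exp(np.cumsum(0.5 * shocks + rng.normal(0, 0.005, 200)))

# Correlation of the stationary (differenced) series
r = np.corrcoef(log_diff(index), log_diff(alloc))[0, 1]
print(r > 0.5)   # common shocks show up clearly after differencing
```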

ReturnPredictability.png

Indeed, contemporaneous percentage changes in Equity Allocation strongly predict quarterly returns on the SP500. Our gradient estimate (b=1.25, t-Stat=30.7) implies that 1 percent higher allocation to equities predicts a 1.25 percent quarterly return on the SP500 over and above the unconditional mean of 1.79 percent per quarter. Equity Allocation explains 78 percent of the variation in stock market returns. See next figure.

Predictable.png

The empirical evidence is rather consistent with the idea that fluctuations in the stock market reflect investor herd behavior. Specifically, the stock market goes up when investors rebalance to equities and goes down when investors rotate out of equities to bonds and cash. This is not only an important amplifier of dealer risk appetite and monetary policy shocks but also an important source of fluctuations in its own right. So stocks are getting culled across the board as we speak precisely due to investor rebalancing prompted by higher yields. (In turn, higher yields reflect either the expectation that the Fed will hike faster, a higher term risk premium, or both. The two can be disentangled using the ACM term-structure model as I illustrated not too long ago. [P.S. It’s the risk premium; although Matt Klein doesn’t seem to buy the ACM decomposition.])

Tying market fluctuations empirically to investor herd behavior goes some way towards explaining the excess volatility of the stock market that has long puzzled economists. My wager is that stock markets fluctuate dramatically more than reassessments of underlying fundamentals could possibly warrant because of fluctuations driven by investor rebalancing.

The question is whether this is due to the herd behavior of small investors, or whether it is due to the inadvertently-coordinated rebalancing among large asset managers because they face similar mandates. If the former, that leads us to questions of investor sentiment. If the latter, it leads us straight back to market structure. In particular, it draws our attention to the buy side. Instead of paying exclusive attention to dealers and wholesale funding markets, perhaps we should also interrogate the investor behavior of large asset managers as an independent source of fluctuations in the price of risk.

In either case, knowing that rebalancing investor herds drive stock market fluctuations is not very useful, since data on equity allocation only becomes available at the end of the quarter. Or is it? Can we not think of Equity Allocation (and hence, implicitly, investor herd behavior) as a risk factor for pricing the cross-section of stock excess returns? Indeed we can. It turns out that percentage changes in Equity Allocation are priced in the cross-section of expected excess returns. We illustrate this with 100 Size-Value portfolios from Kenneth French’s library.

CSR.png

What we find is that instead of a linear pricing relationship whereby higher betas imply monotonically higher expected returns in excess of the risk-free rate, the relationship is quadratic. Portfolios whose equity allocation betas are moderately high outperform portfolios with extreme betas in either direction. So an easy way to make money is to hold portfolios that are, depending on your risk appetite, long or overweight moderate-beta stocks, and short or underweight extreme-beta stocks.
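The quadratic cross-sectional fit can be illustrated in a few lines. The betas and mean returns below are simulated with a built-in hump (the real exercise uses the 100 Size-Value portfolios); the point is only that adding a squared-beta term to the second-pass regression captures the non-monotonicity that a linear fit misses.

```python
import numpy as np

# Hypothetical cross-section of 100 portfolios, hump-shaped mean returns in beta
rng = np.random.default_rng(3)
beta = rng.normal(1.0, 0.5, 100)       # illustrative equity-allocation betas
mean_ret = 0.05 + 0.04 * beta - 0.03 * beta**2 + rng.normal(0, 0.002, 100)

def r_squared(X, y):
    """OLS R^2 of y on design matrix X."""
    b, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - res[0] / np.sum((y - y.mean()) ** 2)

ones = np.ones_like(beta)
r2_lin = r_squared(np.column_stack([ones, beta]), mean_ret)
r2_quad = r_squared(np.column_stack([ones, beta, beta**2]), mean_ret)
print(r2_quad > r2_lin)                # the squared term captures the hump
```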

Note that stock portfolios that are more sensitive to tidal investor flows are generally more volatile. See next figure.

volatile.png

The big puzzle that thus emerges is why these frontier assets (stock portfolios that are highly sensitive to investor rebalancing) don’t sport high expected returns. For the fundamental insight of modern asset pricing is that risk premia (expected returns in excess of the risk-free rate) exist because investors require compensation to hold systematic risk (but not idiosyncratic risk since that can be easily diversified away). In other words, assets that pose a greater risk to investors’ balance sheets ought to sport higher returns. We have shown that the tidal effect of inadvertently-coordinated investor rebalancing is a significant and systematic risk factor for all investors. So why isn’t there a monotonic relationship between the sensitivity of portfolio returns to investor rebalancing and the risk premium embedded in the cross-section? Why is the price of risk quadratic and not linear in beta? Clearly, we are missing a theoretical piece of the puzzle.


Thinking

Wage Growth Predicts Productivity Growth

Tip of the hat to Ted Fertik for bringing Servaas Storm’s unpacking of total factor productivity (TFP) growth to my attention. Storm shows that TFP growth can be regarded mechanically as a weighted sum of the growth rates of labor and capital productivity, roughly in a 3:1 ratio in that order. TFP, of course, is a measure of our ignorance. It nudges us to look inside firms, i.e., the supply side. This leads down the path to situated communities of skilled practice, i.e., Crawford’s ‘ecologies of attention.’ But perhaps it is better to work directly with labor productivity, certainly if Storm is right. Some say that high wages incentivize firms to invest in labor-saving innovations, thereby increasing labor productivity. This is certainly consistent with the standard microeconomic view of firm behavior, in which firms are expected to do whatever it takes to gain a competitive advantage in the market. A straightforward implication of this hypothesis is that real wage growth ought to predict productivity growth. We’ll see what the evidence has to say about this presently. But let us first note the policy implications of the theory.

The fundamental challenge of contemporary Western political economy is how to restore economic dynamism. The first-best solution to the rise of China is for the West to maintain its technical and economic lead. Similarly, the first-best solution to political instability and the crisis of legitimacy is a revival in the underlying pace of economic growth. So far no one has offered a credible solution; Trump’s tariffs, nationalist socialism à la Streeck, and a restoration of high neoliberalism à la Macron are all small bore. But if the hypothesis that a significant causal vector points from real wage growth to productivity growth holds, then a bold new Social Democratic solution to the fundamental challenge of Western political economy immediately becomes available.

What I have in mind is a new mandate for central bankers. To wit, Congress should mandate the Federal Reserve to maximize real median wage growth subject to monetary and labor market stability. Until now central banks have targeted labor market slack as understood in terms of employment and inflation. But the real price of labor (more precisely, productivity-adjusted real median wage) is also an excellent measure of labor market slack. The hypothesis implies that targeting productivity-adjusted real median wage growth could restore productivity growth; perhaps dramatically. My suggestion is consistent with social democracy’s concern with distributional questions as well as with standard central banking practice. So if the result holds, it’s very useful indeed.

We start off by checking that real wage growth predicts productivity growth in the United States. The correlation is large and significant (r=0.531, p<0.001). This is suggestive.

[Figure: Real wage growth and productivity growth in the United States.]
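As a sanity check on the mechanics (not the data), here is a minimal sketch of the lagged-correlation computation in pure Python. The series below are synthetic illustrations; the r=0.531 in the text comes from the actual US series, which are not reproduced here.

```python
# Hedged sketch: lagged Pearson correlation between real wage growth and
# productivity growth. The series are SYNTHETIC illustrations; the text's
# r = 0.531 comes from actual US data not reproduced here.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Synthetic annual growth rates (percent).
wage_growth = [1.0, 2.5, 0.5, 3.0, 1.5, 2.0, 0.0, 2.8]
productivity_growth = [1.0, 0.9, 2.1, 0.5, 2.5, 1.4, 1.7, 0.1]

# Pair wage growth at t with productivity growth at t+1 (a one-year lag).
r = pearson_r(wage_growth[:-1], productivity_growth[1:])
```

On Python 3.10+ the same number can be obtained from `statistics.correlation`.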

In order to systematically investigate this question we interrogate the data from the International Labor Organization (ILO). The ILO provides estimates of real output per worker, unemployment rate, and the growth rate of real wages. We restrict our sample to N=30 industrial countries since wage growth has diverged so significantly between the slow-growing advanced economies and fast-growing developing countries. We estimate a number of linear models and collect our gradient estimates in Table 1.

The first column reports estimates for the simple linear model that explains productivity growth by 1-year lagged real wage growth. In the second column, we introduce controls for a temporal trend and lagged productivity growth. This sharply reduces our estimate of the gradient, suggesting that the estimate reported in column 1 was inflated by autocorrelation. We introduce country fixed effects (ie country dummies) in column 3, which modestly reduces our estimate of the gradient. In the fourth column, we replace the country fixed effects with a control for the lagged unemployment rate, which turns out to be significant and very modestly increases our gradient estimate. In the last two columns we introduce random effects for country and year: instead of dummies for each country and year, which is equivalent to fixed intercepts by country and year, we allow the intercept for a given country and year to be random.
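The country fixed effects of column 3 can be implemented either with dummies or, equivalently, by demeaning every variable within country (the "within" estimator). A self-contained sketch with a synthetic two-country panel; the numbers are illustrative, not ILO data:

```python
# Hedged sketch: the "within" (fixed-effects) estimator. Demeaning each
# variable by its country mean is algebraically equivalent to including
# country dummies. The panel is a synthetic illustration, not ILO data.
from collections import defaultdict

# (country, lagged real wage growth, productivity growth)
panel = [
    ("A", 1.0, 2.1), ("A", 2.0, 2.2), ("A", 3.0, 2.3),
    ("B", 1.0, 0.6), ("B", 2.0, 0.7), ("B", 3.0, 0.8),
]

sums = defaultdict(lambda: [0.0, 0.0, 0])
for c, x, y in panel:
    sums[c][0] += x; sums[c][1] += y; sums[c][2] += 1

demeaned = []
for c, x, y in panel:
    sx, sy, n = sums[c]
    demeaned.append((x - sx / n, y - sy / n))

# OLS slope on the demeaned data: both countries share the true slope 0.1
# but have different intercepts, which the demeaning absorbs.
slope = (sum(dx * dy for dx, dy in demeaned)
         / sum(dx * dx for dx, _ in demeaned))
```

Pooled OLS on the raw data would conflate between-country level differences with the within-country gradient; the demeaning strips the former out.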

Table 1. Linear mixed-effects model estimates.

                               (1)     (2)     (3)     (4)     (5)     (6)
Intercept                      Yes     Yes     Yes     Yes     No      No
Trend                          No      Yes     Yes     Yes     Yes     Yes
AR(1)                          No      Yes     Yes     Yes     Yes     Yes
Country fixed effects          No      No      Yes     No      No      Yes
Unemployment rate (lagged)     No      No      No      Yes     Yes     Yes
Real wage growth (lagged)      0.233   0.136   0.104   0.140   0.108   0.100
  standard error               0.037   0.042   0.046   0.042   0.037   0.044
Country random effect          No      No      No      No      Yes     Yes
Year random effect             No      No      No      No      Yes     Yes

Source: ILO. Estimates in bold are significant at the 5 percent level. Dependent variable is real output per worker at market exchange rates. The number of observations is 480.

We note that the gradient for lagged real wage growth remains significant across our linear models even after controlling for a temporal trend, a lagged dependent variable, the lagged unemployment rate, country fixed effects, and random effects for country and year. We can thus be fairly confident that real wage growth predicts productivity growth across the industrial world. The next step would be to embed this in a macro model to interrogate the viability of real median wage growth targeting by central banks.

Geopolitics

The Arrow of Time in World Politics

Structures don’t live out there in the wild; they are explanatory schemas that live in the discourse and in men’s minds as mental maps. Temporal structure is made of diachronic patterns (roughly, time-variation) as opposed to synchronic patterns (roughly, cross-sectional variation). An example of the former would be the relative decline of Britain in 1895-1905. An instance of the latter would be the pattern of diplomacy during the July Crisis. What historians in the tradition of Braudel are interested in is slow-moving temporal structure. Synchronic patterns are the concern of historians interested in the history of events, which Braudel characterized as mere ‘surface disturbances’ that ‘the tides of history carry on their strong backs.’

Koselleck says every concept has its own “internal temporal structure.” I think, let’s try this at home. Here I locate the variable that does most of the work in explaining the diachronic pattern of international politics in some IR theories I find interesting, and represent it as an internal temporal structure inherited from the logic of the theory. It turns out to be surprisingly useful in thinking about the Chinese question.

In the discourse of realist IR, the explanandum is the historical pattern of relations among great powers at the center of the world-system beginning in Europe c. 1494 or c. 1648 depending on the scholar. Waltz’ main achievement was to isolate the systemic security interaction of great powers as a separate level of analysis and successfully claim disciplinary autonomy for international relations within political science. He did that by importing the trick from microeconomics, where the market interaction of firms had been isolated as an independent disciplinary domain of enquiry within economics. In neorealism, systemic security interaction between homogeneous units differentiated solely by a scale parameter called power is posited as a theoretical model of an international system. Waltz identified the structure of an international system with the distribution of power among the units. In the familiar metaphor, great powers are considered to be like billiard balls differentiated only by size. Time has no place in Waltz’ admissible class of abstract international systems. The result, predictably, is ‘rigor mortis’ (Walt, 1999).


Kenneth Waltz (1924-2013)

But Waltz was not done yet in painting himself into a corner. No sir, he proceeds to throw out almost all the information in the distribution of power, which he had identified as the structure of the system. Polarity, the number of great powers in the system, is, Waltz argued, an efficient explanation of the stability properties of the international system in that bipolar systems are stable whereas multipolar systems are unstable. The de facto structure in Waltz’ theory then is not what he had declared to be the structure of the international system, the distribution of power. It is not even polarity per se. In his main arguments about system stability, all the work is done by a Boolean variable. Systems are either bipolar or multipolar. Moreover, the record of great power relations from c. 1648 is a single sample path; according to Waltz only two system structures have ever existed. Waltz thus cornered himself into an explanatory impasse that proved impossible to escape, which would be impressive if getting stuck in degenerate paradigms were uncommon.

Waltz withdrew into a sullen silence as the bipolar world, whose stability he had explained with great authority, evaporated into thin air on account of the unanticipated capitulation of the weaker party. With great conviction, he then proceeded to pronounce the unipolar world to be so unstable as to be not worthy of the attention of a serious student of world politics. But the unipolar world simply refused to lie down and die. It was 24 years old and going strong when Waltz passed away in 2013. Poor fellow must have been painfully conscious of his lost wagers with world history.

In contrast to Waltz’ one-dimensional explanatory scheme or structure, Gilpin’s is two-dimensional. The distribution of power is projected onto a time-axis and allowed to evolve. This decisive discursive move reveals a cyclic, punctuated equilibrium-type pattern in world politics. Hegemonic wars punctuate long periods of system stability. These periods of system stability exhibit stable patterns of international politics that are reenacted over and over again, and tend to change very slowly. For example, the identity of the maritime hegemon (a natural monopoly); the identity of the dominant politico-military actors in different regions; the territorial order; the diplomatic rank ordering; national questions (eg, the Kurdish question); and, with apologies to WG Sebald, no doubt the cabinet of zombie curiosities: dead treaties, agreements, multilateral coordinating bodies, aid programs, and peace processes et cetera; they stand frozen in the instant they came to grief, waiting to be toppled over by strong winds and consigned to the storage rooms of museums and libraries. The reproduction of world order is founded on the absence of a reconsideration of the world question rather than just the military position of the dominant power. To be precise, the stability of world order rests on the fear of world war. It rests on the unwillingness of potential insurgents to reopen the world question.


Robert Gilpin (1930-2018)

Gilpin’s motor of world history, the law of uneven growth, is a particular instance of the Second Law which states that the entropy of any system increases over time. For international systems, entropy is defined as the dispersion of power. Hegemonic war eliminates some great powers, weakens others, and strengthens only a few, so that when world time resets to zero at the close of the hegemonic war, power in the system is highly concentrated, ie entropy is low. The system deconcentrates over time in accordance with the Second Law. The expected dispersion of power in the system is thus maximal on the eve of a hegemonic war. The Second Law in the form of the law of uneven growth thus endows the system with a temporal structure. We define world time in terms of this temporal structure. More precisely, world time can be identified with the entropy of the international system.
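Gilpin gives no formula, but one natural way to operationalize "entropy as the dispersion of power" is the Shannon entropy of great-power shares of total power. A sketch; the choice of Shannon entropy and the share vectors are my assumptions, purely for illustration:

```python
# Hedged sketch: operationalizing the "entropy" of an international system
# as the Shannon entropy of great-power shares of total power. Gilpin gives
# no formula; this choice, and the share vectors, are assumptions.
from math import log

def power_entropy(shares):
    """Shannon entropy (bits) of power shares; higher = more dispersed."""
    total = sum(shares)
    return -sum(s / total * log(s / total, 2) for s in shares if s > 0)

post_war = [0.70, 0.15, 0.10, 0.05]  # power concentrated in the victor
pre_war = [0.30, 0.28, 0.22, 0.20]   # dispersed power on the eve of war
```

On this reading, `power_entropy(post_war) < power_entropy(pre_war)`: the law of uneven growth drifts the system from the first configuration toward the second, and world time can be read off the entropy.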

Victorious powers forge a world order from scratch. But underneath the stable patterns of world order, pace Foucault, world time continues its relentless march. World time can be thought of as the clock of the rise and fall of the great powers. World time measures the erosion of the dominant power’s relative power and influence. World time enters world history at two levels of causation, at the level of the discourse and in extradiscursive reality. Great power statesmen are in effect trying to read world time when they try to ascertain the changing balance of world power. World time also enters history through brute facts about the relative distribution of war potential. The tick-tock echoes between the steel factories and the minds of statesmen. The motor of hegemonic war in Gilpin is identical to the one identified by Thucydides 2400 years ago. Under the tick-tock of world time, the declining hegemon discovers merit in the logic of preventive war. Perhaps it is best to reach a decision while time is still on one’s side. Knowing that the dominant power might succumb to the seduction of preventive war, the rising power too is alarmed by the tick-tock of world time. Rising powers know that they must accumulate power fast or they will be crushed.

In Gilpin’s theory, world time is cyclical. The tick-tock of world time announces the approach of the time of the great reckoning when the fates of great civilizations will once again be determined. World time is then quite literally set to zero, and starts afresh with the creation of a new world order forged by the victors from the smoking ruins of the hegemonic war. Until, of course, world time signals autumn. The fundamental problem with cyclical time is the temporality of world systems. Sequential world orders cannot be assumed to be independent. Hegemonic wars are fought precisely over the terms of the world order. World orders are forged in the shadow of hegemonic war from the ruins of the previous world order. They are often consciously constructed in light of the lessons of the failure of the previous attempt at world order. In any event, they are informed by what went before even if the old hegemon is eliminated from the roster of great powers. So the reset of world time is quite problematic from a historical point of view.

Ashley Tellis introduced the theory of hard realism in his doctoral dissertation, where we find a very different temporal structure. Tellis starts with the observation that the first-best solution to the threat posed by a great power rival is to eliminate it. Starting with a given roster of great powers, Tellis reasons that repeated struggles between the great powers would eliminate the weakest in each round thereby introducing a temporal structure in great power history that we can call system time. Put simply, the roster of great powers would feature a larger number of great powers earlier in system time compared to later. With each subsequent round the number of great powers would dwindle until it is reduced to a singleton.


Ashley Tellis (1961-)

System time here can thus be operationalized as the polarity of the system over the very long run. Whereas world time is cyclical, curling round like a Riemann surface, system time is linear but finite. It begins with the emergence of an international system and culminates in a unipolar world once all but a single power have been eliminated from the roster of great powers. The temporal pattern suggested by system time not only captures the deep temporal structure of Western history and the contemporary world-system, it is also consistent with the evidence of other international systems across time and space, as Kaufman, Little, and Wohlforth document.
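The logic can be caricatured in a few lines of code: start with a roster of great powers, eliminate the weakest each round, and read off polarity as it declines toward one. The roster and power values are arbitrary placeholders, not an empirical claim:

```python
# Hedged toy model of Tellis-style system time: each round of struggle
# eliminates the weakest great power, so polarity falls monotonically
# until a unipolar world remains. Power values are arbitrary placeholders.
def polarity_path(roster):
    roster = dict(roster)
    path = [len(roster)]
    while len(roster) > 1:
        weakest = min(roster, key=roster.get)
        del roster[weakest]       # first-best solution: eliminate the rival
        path.append(len(roster))
    return path

powers = {"A": 10, "B": 8, "C": 5, "D": 3, "E": 2}
path = polarity_path(powers)  # [5, 4, 3, 2, 1]
```

System time is then just the index along this monotonically declining polarity path.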

System time and world time are ahistorical but not teleological, since they are derived from the logic of the theory. Both operate simultaneously under the surface of international relations, the first even more slowly than the second. Yet another temporal structure, a third arrow of history, is associated with the hockey-stick; we may call it exponential time. Put simply, the hockey-stick introduces a temporal logic of escalation in world politics. World struggles that take place later do so at levels of destruction and power potential an order of magnitude higher than the previous tournament. Exponential time is the temporal representation of the hockey-stick. It is nonlinear but bounded from above. What happens is that destructive power eventually becomes so great that a world struggle spells the final and irrevocable coda of human history.

World time, system time, and exponential time exist in every instant of world politics. But they are not symmetrically situated. System time continues to unfold even as world time is reset with each hegemonic war. Even as system time increases linearly, exponential time increases exponentially. In the end exponential time catches up with both world time and system time. For when exponential time hits the upper bound, the iron logic of mutually-assured destruction frustrates the historical logic of both world time and system time by eliminating the very possibility of world war. For the dominant power can no longer eliminate the rising power through preventive war for fear of nuclear annihilation so the temporal logic of world time is frustrated. And since no first-rank power can be eliminated in a world struggle, the logic of system-time is frustrated as well.

Should the United States follow world time and launch a cold war against China? Should the US obey system time and eliminate China with a splendid first-strike while it still can? Or should the United States buy into the notion that the iron logic of mutually assured destruction has repealed the laws associated with world time and system time, and seek a modus vivendi with China in the secure knowledge that neither can be eliminated from the first rank? If we are guaranteed to end up in the impasse of strategic stalemate, then why not skip the security competition and go straight to détente?

Thinking

Cognitive Test Scores Measure Net Nutritional Status

At the heart of the racialist imaginary is the notion of the natural hierarchy of the races. Not only are there discrete types of humans, racialism insists, they are differentially endowed. Turn of the century high racialism construed this racial hierarchy in terms of a racial essence. This racial essence was supposed to control men’s character, merit, behavioral propensities, and capacity for refinement and civilization. To be sure, racial essence was thought of as multidimensional. But no educated Westerner at the turn of the century would beg to disagree with the notion that the races could be put into a natural hierarchy.

What explains the hold of high racialism on the turn of the century Western imaginary? Some of it was obviously self-congratulation. But that can’t be the whole story. There were some pretty smart people in the transatlantic world at the turn of the century. Why did they all find high racialism so compelling? Because critical thinkers interested in a question sooner or later find themselves sifting through the scientific literature, part of what needs explaining is the consensus on scientific racialism. Put another way, we should ask why the best-informed of the day bought into high racialism.

Broadly speaking, I think there were three factors at play. First, in the settler colonies and metropoles of the early modern world, migrant populations from far away found themselves living cheek-by-jowl with others. This created a visual reality of discrete variation out of what were in fact smoothly-varying morphologies. What were geographic clines reflecting morphological adaptation to the macroclimate in the Old World appeared to be races in the New World. In effect, early modern population history created a visual reality that begged to be described as a world of discrete races.

Second, and more important, was the weight of the taxonomic understanding of natural history. The hold of the taxonomic paradigm was so strong that it seemed to be the only way to comprehend the bewildering human variation revealed by the collision of the continents. The existence of specific races and their place in the natural hierarchy may be questioned but that racial taxonomy was a useful way to understand human variation was simply taken for granted. Unbeknownst to the best-informed of the day, this was a very strong assumption to make about the world.

Third, and most important, was the sheer weight of the explanandum. What made racial taxonomy so compelling was what it was mobilized to explain: the astonishing scale of global polarization. As Westerners contemplated the human condition at the turn of the century, the dominant fact that cried out for explanation was the highly uneven distribution of wealth and power on earth. It did really look like fate had thrust the responsibility of the world on Anglo-Saxon shoulders; that Europe and its offshoots were vastly more advanced, civilized and powerful than the rest of the world; that Oriental or Russian armies simply couldn’t put up a fight with a European great power; that six thousand Englishmen could rule over hundreds of millions of Indians without fear of getting their throats cut. The most compelling explanation was the most straightforward one. To the sharpest knives in the turn of the century drawer, what explained the polarization of the world was the natural hierarchy of the races.

It is this that distinguishes racialism from racism. The former is fundamentally an explanation of global polarization; the latter is a politico-ethical stance on the social and global order. In principle, it is possible to be racialist without being racist but not vice versa. In practice, however, few racialists could sustain politico-ethical neutrality on race relations.

During the nineteenth century, the discourse of Anglo-Saxon self-congratulation morphed from the traditional mode that saw Anglo-Saxons as blessed by Providence to the notion that they were biologically superior to all other races on earth. Driven by settler colonial racialism, the vision of the colorblind empire was definitely shelved by London in favor of a global racial order after the turn of the century. Things came to a head on the South Africa question where the settlers demanded apartheid. London gave in after a brief struggle. The resolution of the South Africa question in 1906 was a key moment in the articulation of the global color line.

The first real pushback against high racialism came from scholars at Columbia in the 1930s. Franz Boas and his students, most prominently Ruth Benedict, led the charge. They punctured the unchallenged monopoly of high racialism but the larger edifice survived into the Second World War. The discourse of high racialism collided with reality at the hinge of the twentieth century. As Operation Barbarossa began, Western statesmen and intelligence agencies without exception expected the Soviet Union to collapse under the German onslaught in a matter of weeks. If France capitulated in six weeks, how could the Slav be expected to stand up to the Teuton for much longer? That the Slav could defeat the Teuton was practically unthinkable in the high racialist imaginary. Not only did the Soviet Union not collapse, it went on to single-handedly crush what was regarded as the greatest army the world had ever seen. This was because Stalinism proved to be a machine civilization superior to Hitlerism where it mattered—where it has always mattered to the West—on the battlefield. The Slav could indeed defeat the Teuton. The evidence from the battlefield required an unthinkable revision of the natural hierarchy of the races, directly antithetical to the core of the racialist imaginary, ie Germanic racial supremacism.

It would seem that Auschwitz, that great trauma of modernity, more than anything else pushed racialism beyond the pale. If so, it took surprisingly long. It was not until the sixties that racial taxonomy became unacceptable in the scientific discourse. Recall that it was in 1962 that Coon was booed and jeered at the annual meeting of the American Association of Physical Anthropologists, an association that had until quite recently been the real home of American scientific racialism. The anti-systemic turn of the sixties opened the floodgates to radical critiques of the mid-century social order and the attendant conceptual baggage, including a still-pervasive racialism.

It took decades before racialism was pushed beyond the boundaries of acceptable discourse. But by the end of the century a definite discipline came to be exercised in Western public spheres. In the Ivory Tower, a consensus had emerged that races did not reflect biological reality but were rather social constructs with all-too-often violent consequences. Whatever systematic differences did exist between populations were considered to be trivial and/or irrelevant to understanding the social order. This consensus continues to hold the center even though it is fraying at the margins.

In fact, one can date the rise of neoracialism quite precisely. This was the publication of Murray and Herrnstein’s The Bell Curve in 1994. Although most of the book examined intelligence test scores exclusively for non-Hispanic White Americans and explored the implications of relentless cognitive sorting on the social order, critics jumped on the single chapter that replicated known results on racial differences in IQ. (Responding to the hullabaloo, the American Psychological Association came out with a factbook on intelligence that was largely consistent with the main empirical claims of the book.) Herrnstein passed away around the time the book came out. But, ever since then, Murray has been hounded by protestors every time he makes a public appearance. At Middlebury College last year, a mob attacked Murray and his interviewer, Professor Allison Stanger, who suffered a concussion after someone grabbed her hair and twisted her neck. I think we must see this aggressive policing of the Overton Window (the boundary of the acceptable discourse) as the defining condition of what I call neoracialism. It is above all a counter-discourse. Those espousing these ideas feel themselves to be under siege; as indeed they are.

Neoracialism retains the taxonomic paradigm of high racialism but it is not simply the reemergence of high racialism. For neoracialism is tied to two hegemonic ideas of the present that were nonexistent back when high racialism had the field to itself.

The first of these is the fetishization of IQ. The test score is not simply seen as a predictor of academic performance, for which there is ample evidence. (For the evidence from the international cross-section see Figure 1.) It is seen much more expansively as a test of overall merit; as if humans were motor engines and the tests were measuring horsepower. The fetish is near-universal in Western society; right up there with salary, the size of the house, and financial net worth. It is an impoverished view of man, sidelining arguably more important aspects of the human character: passion, curiosity, compassion, integrity, honesty, fair-mindedness, civility, and so on.

[Figure 1. International cross-section of test scores. Source: Lynn and Meisenberg (2017).]

The second hegemonic idea is the blind acceptance of the reductionist paradigm. Basically, behavior is reduced to biology and biology to genetics. Both are dangerous fallacies. The first reduction is laughable in light of what may be called the first fundamental theorem of paleoanthropology: What defines modern humans is behavioral plasticity, versatility, and dynamism untethered to human biology. In other words, modern humans are modern precisely in as much as their behavior is not predictable by biology.

The reduction of biology to genetics is equally nonsensical in light of what may be called the first fundamental theorem of epigenetics: Phenotypic variation cannot be reduced to genetics, nor even to genetics plus environment. For even after controlling for both, there is substantial biological variation left unexplained. Not only is there substantial phenotypic variation among monozygotic twins (those who have identical genomes), even genetically-cloned microbes cultured in identical environments display significant phenotypic variation. The only way to make sense of this is to posit that subtle stochastic factors perturb the expression of the blueprint contained in DNA even under identical environmental conditions. This makes mincemeat out of the already philosophically-tenuous paradigm of reductionism.

So neoracialism is a counter-discourse in contemporary history that is rigidly in the grip of the three fallacies: that racial taxonomy gives us a good handle on human variation, that IQ is the master variable of modern society and the prime metric of social worth, and that DNA is the controlling code of the human lifeworld à la Dawkins. Because the last two are much more broadly shared across Western society, including much of the Left, the critique of neoracialism has been relatively ineffective.

But beyond the rigidities of the contemporary discourse, there is a bigger reason for the rise of neoracialism. Simply put, racialism was marginalized without replacement. The explanatory work that racialism was doing in making sense of the world was left undone. No alternate compelling explanation for global polarization was offered. Instead, under the banner of Modernization, population differences were simply assumed to be temporary and expected to vanish in short order under the onslaught of Progress. Indeed, even discussion of global polarization became vaguely racist and therefore unacceptable in polite company. With the nearly-uniform failure of the mid-century dream of Modernization, the door was thus left ajar for the resurrection of essentialist racial taxonomy to do the same explanatory work it had always performed. It is the absence of a scientific consensus on a broad explanatory frame for human polarization that is the key permissive condition for neoracialism.

A scientific consensus more powerful than neoracialism, based on thermoregulatory imperatives, is emerging that ties systematic morphological variation between contemporary populations to the Pleistocene paleoclimate on the one hand, and contemporary everyday living standards (nutrition, disease burdens, thermal burdens) on the other. Disentangling the two has been my obsession for a while. I finally found what those in the know already knew. Basic parameters of the human skeleton are adapted to the paleoclimate.

At the same time as these developments in paleoanthropology and economic history, recent progress in ancient-DNA research has highlighted the importance of population history. I tried to bring the paleoanthropology and population history literature into conversation by showing how population history explains European skeletal morphology over the past thirty thousand years. My argument is based on known facts about the paleoclimate during the Late Pleistocene and known facts about population history. The paleoclimate is the structure and population history is the dynamic variable. It is that which allows us to predict dynamics in Late Pleistocene body size variables. We were of course forced into this explanatory strategy by the brute fact that population history and the paleoclimate are the main explanatory variables available for the Pleistocene.

I do not mean to imply that technology and organization did not causally affect human morphology, eg we have ample evidence of bilateral asymmetry in arm length as an adaptation to the spear-thrower. But all such adaptations are superstructure over the basic structure of human skeleton that reflects morphological adaptation to the paleoclimate of the Pleistocene that began 2.6 Ma. In Eurasia in particular, it reflects adaptation to the macroclimate after the dispersal of Anatomically Modern Humans from Africa 130-50 Ka. Because the Late Pleistocene, 130-10 Ka, is so long compared to the length of time since the Secondary Products Revolution 5 Ka, and especially the Secondary Industrial Revolution 0.1 Ka, and despite the possibility that evolution may have accelerated in the historical era, the Late Pleistocene dominates the slowest-moving variables of the human skeleton. Indeed, I have shown that pelvic bone width and femur head diameter reflect adaptation to the paleoclimate of the region where the population spent the Late Pleistocene.


I feel that economic historians have been barking up the wrong tree. The basic problem with almost all narratives of the Great Divergence (as the historians frame it) or the exit from the Malthusian Trap (as the economists would have it) is that the British Industrial Revolution, 1760-1830, does not revolutionize everyday living standards in England. This is easy to demonstrate empirically whether one relies on per capita income, stature, or life expectancy. In general, the economic, anthropometric, and actuarial data is consistent with a very late exit from the Malthusian world; the hockey stick is a story of the 20th century.

The evidence is rather consistent with the hypothesis that the extraordinary polarization of living standards across the globe is a function of the differential spread of the secondary industrial revolution, 1870-1970 (sensu stricto: the generalized application of powered machinery to work on farms, factory floors, construction sites, shipping, and so on; sensu lato: the application of science and technology to the general problem of production and reproduction). So proximately, what needs to be explained is the spread of the secondary industrial revolution. Specifically, the main explanandum is this: Why is there a significant gradient of output per worker (and hence per capita income) along latitude? Why can’t tropical nations simply import the machinery necessary to increase their productivity to within the ballpark of temperate industrial nations and thereby corner the bulk of global production? Despite the wage bonus and the ‘second unbundling’, global production has failed to rebalance to the tropics. Why?

I proposed a simple framework that tied output per worker to the intensity of work performed on a given machine; and the intensity of work to the thermal environment of the farm, factory floor, construction site, dockyard and so on—in accordance with the human thermal balance equation. This was not very original—the claim is consistent with known results in the physiological and ergonomics literature. What I am saying in effect is that the difference is not so much biology, education, or culture. To put it bluntly, educated and disciplined male, White, Anglo-Saxon workers from the Midwest would not be able to sustain the intensity of work performed on the same machine at the same rate in Bangladesh as in Illinois. Like the Bangladeshis, they would have to take frequent breaks and work less so as not to overheat. This mechanically translates into lower productivity and hence lower per capita income.
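For reference, the human thermal balance equation invoked above is conventionally written as follows; the notation is the standard one from the physiology and ergonomics literature, not taken from the original text:

```latex
% Heat storage S must stay near zero for work to be sustained indefinitely:
S = (M - W) - (C + R + E_{sk}) - (C_{res} + E_{res})
```

Here M is metabolic rate, W external mechanical work, C and R convective and radiative losses from the skin, E_sk evaporative loss from the skin, and C_res, E_res respiratory losses. In hot, humid environments the loss terms shrink, so keeping S near zero forces M − W, ie the sustainable intensity of work, down.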

I appreciate the increasing attention to thermal burdens in light of global warming. Recently, The Upshot had a fascinating report tying gun violence outdoors, but not indoors (!!!), to temperature spikes. Earlier, in an extraordinary study, Harvard’s Goodman tied students’ test scores to the thermal burden on the day of the test. That goes some way towards explaining the latitudinal gradient in the international cross-section of test scores—an uncomfortable empirical fact well outside the Overton Window that neoracialists insistently point to as empirical “proof” of the relevance of racial taxonomy to understanding the global order. We’ll return to the empirical evidence from the correlates of test scores presently.


Following in the footsteps of Murray and Herrnstein, Richard Lynn published The Global Bell Curve in 2008. It went to the heart of the matter. Here, global polarization is tied precisely to test scores. Some populations are rich and powerful, and others are poor and weak because, we are told, the former are cognitively more endowed than the latter. That’s the master narrative offered here. One finds different versions in other neoracialist accounts. Rushton claimed racial differences in cranial capacity, a claim we have debunked. Wade finds racial taxonomy more persuasive than the geographic clines favored by geneticists. In what he calls his more speculative chapters, Wade does the full double reduction: differences in behavioral patterns are mobilized to explain the world order, and DNA is mobilized to explain behavioral patterns. Gene-culture coevolution and other speculations are thrown around to explain global polarization.

The heart of neoracialism isn’t, What’s the controlling variable for human variation per se. The question at the heart of neoracialism is, What’s the controlling variable for human variation that is relevant to the social order, the global order, the manifest and multiple hierarchies of our lifeworld? A presumed innate hierarchy of the races in general ability is doing all the work in neoracialism for it is mobilized to explain all of global polarization in one fell swoop. Neoracialism looks for a master variable that explains the presumed rank ordering of human societies. Whence the fetishization of IQ (thought to be ultimately controlled by DNA, although all efforts to explain test scores by DNA have been frustrated). In the minds of neoracialists and those who are tempted to join them, it is test scores that explain the cross-section of per capita income. A lot is thus at stake in that equation. That’s the context of Lynn’s The Global Bell Curve.

The rigidities of the liberal discourse have meant that a very fruitful way of thinking about systematic variation in the test scores of human populations has been overlooked. We argue that test scores contain information on everyday living standards. Put simply, they are a substitute for per capita income, stature, or life expectancy. They measure net nutritional status, which is a function of nutritional intake and expenditure on thermoregulation, work, and fighting disease. (Net nutritional status is just jargon for the vicious feedback loop between nutrition and disease; they must be considered jointly.) We demonstrate this by showing that the best predictors of test scores are the Infant Mortality Rate and animal protein (dairy, eggs and meat) intake. More generally, we show that all metrics of net nutritional status are strong predictors of test scores.

While it may be conceivable that variation in cognitive ability explains variation in per capita income, given the universal availability of modern medicine, the claim that variation in cognitive ability explains variation in the Infant Mortality Rate is really tenuous. Given the empirical correlation we document below, it is much more plausible that tropical disease burdens suppress test scores than vice-versa. In other words, it makes no sense to infer that the racial hierarchy supposedly revealed by test scores explains disease burdens, but it makes ample sense to infer that disease burdens explain test scores. This is the crucial wedge of our intervention.


We begin our empirical analysis by noting the Heliocentric pattern of test scores. Table 1 displays Spearman’s rank correlation coefficients for test scores on the one hand and absolute latitude and Effective Temperature on the other. Spearman’s coefficient is a distribution-free, robust estimator of the population correlation coefficient (r), and often more powerful than Pearson’s coefficient when the data depart from normality. Effective Temperature is computed from the monthly mean temperatures of the warmest and coldest months via the formula in Binford (2001): ET = (18*max - 10*min)/(max - min + 8), where the max and min temperatures are expressed in Celsius. ET is meant to capture the basic thermal parameter of the macroclimate.
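For concreteness, here is a minimal sketch of the two computations behind Table 1: Binford’s ET formula and Spearman’s rank correlation. The latitude and temperature rows below are invented for illustration; only the formula itself comes from Binford (2001).

```python
def effective_temperature(t_max, t_min):
    """Binford (2001): ET from mean warmest-month (t_max) and
    coldest-month (t_min) temperatures, both in Celsius."""
    return (18 * t_max - 10 * t_min) / (t_max - t_min + 8)

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the ranks (no-ties case)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical rows: absolute latitude with warmest/coldest monthly means.
abs_lat = [1, 10, 23, 35, 48, 60]
et = [effective_temperature(hi, lo)
      for hi, lo in [(31, 26), (30, 24), (29, 18), (27, 8), (22, -2), (17, -10)]]

print(spearman_rho(abs_lat, et))  # ET falls as you move away from the equator
```

Because Spearman’s coefficient depends only on ranks, it is insensitive to outliers and to any monotone rescaling of the variables, which is what makes it the natural choice for cross-country data of uneven quality.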

Table 1. Heliocentric polarization in test scores.
Spearman’s rank correlation coefficients.
N=86 IQ test score (measured) IQ test score (estimated) Educational Attainment
Absolute latitude 0.65 0.65 0.63
Effective Temperature -0.64 -0.62 -0.59
Source: Lynn and Meisenberg (2017), Trading Economics (2018), Binford (2001), author’s computations. Estimates in bold are significant at the 1 percent level. 

Note that Effective Temperature tracks absolute latitude almost perfectly (r=-0.949, p<0.001). Our estimate of the correlation coefficient between absolute latitude and measured IQ test scores is large and significant (r=0.654, p<0.001); the implied regression gradient is so large that moving 10 degrees away from the equator increases expected test scores by 4 points. Effective Temperature is also a strong correlate of measured IQ (r=-0.639, p<0.001); the implied gradient is such that an increase in Effective Temperature of just 5 degrees reduces expected test scores by 11 points. The fundamental question for psychometry then is, What explains these gradients?
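The step from a correlation to a gradient goes through the standard OLS identity for a single regressor: slope = r × sd(y)/sd(x). The standard deviations below are assumed for illustration (the text reports only the correlations), but they show how r=0.654 can translate into roughly 4 points per 10 degrees of latitude:

```python
# OLS identity for a bivariate regression: slope = r * sd(y) / sd(x).
# The dispersions are assumptions for illustration; only r is from the text.
def implied_slope(r, sd_y, sd_x):
    return r * sd_y / sd_x

slope = implied_slope(0.654, sd_y=10.2, sd_x=16.6)  # assumed sd(IQ), sd(|lat|)
print(round(10 * slope, 1))  # prints 4.0 -- points per 10 degrees of latitude
```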

Answering this question requires pinning down the proximate causal structure of test scores. We argue that test scores measure net nutritional status. Table 2 marshals the evidence. We see that all measures of net nutritional status (Infant Mortality Rate, animal protein intake per capita, life expectancy, stature, protein intake per capita, and calorie intake per capita) are strong correlates of test scores. The strongest is the Infant Mortality Rate (r=-0.859, p<0.001), which captures the vicious feedback loop between nutrition and disease burdens. By itself, the Infant Mortality Rate explains three-fourths of the variation in measured test scores reported by Lynn and Meisenberg (2017). The results are robust to using estimated test scores or Educational Attainment instead of measured test scores.

Table 2. Pairwise correlates of test scores.
Spearman’s rank correlation coefficients.
IQ test score (measured) IQ test score (estimated) Educational Attainment
Infant Mortality Rate (log) -0.86 -0.85 -0.84
Animal protein intake per capita 0.80 0.76 0.76
Life expectancy 0.76 0.68 0.70
Stature 0.74 0.74 0.73
Per capita income (log) 0.68 0.59 0.74
Protein intake per capita 0.64 0.82 0.63
Calorie intake per capita 0.54 0.67 0.57
Source: Lynn and Meisenberg (2017), World Bank (2014), Trading Economics (2018), FAO (2018), author’s computations. Estimates in bold are significant at the 1 percent level. 

Figure 2. Infant mortality rate (World Bank, 2014) predicts test scores (Lynn and Meisenberg, 2017).

Our estimate for the correlation between animal protein intake per capita and measured test scores is also extremely large (r=0.802, p<0.001). Astonishingly, each additional gram of animal protein intake per capita increases expected test scores by 0.4 points. By itself, animal protein intake explains two-thirds of the international variation in mean test scores. Although not as strong, calorie intake per capita (r=0.541, p<0.001) and protein intake per capita (r=0.649, p<0.001) are also strong correlates of test scores. The pattern suggests that the lower test scores of poor countries reflect lack of access to high-quality foods like eggs, dairy and meat.


Figure 3. Animal protein (FAO, 2018) predicts test scores (Lynn and Meisenberg, 2017).

The main import of the extremely high correlations between test scores on the one hand and the Infant Mortality Rate (r=-0.859, p<0.001) and animal protein intake per capita (r=0.802, p<0.001) on the other is clear: health insults control investment in cognitive ability. Energy and nutrition that could be channeled towards cognitive ability have to be diverted to dealing with health insults arising jointly from malnutrition and disease.

We have checked that stature is much more plastic than pelvic bone width. And we have shown that the divergence in stature is a story of the 20th century, i.e., it carries information about modern polarization. The strong correlation between test scores and stature (r=0.760, p<0.001) therefore suggests that test scores also contain information on modern polarization. The strength of the correlation between test scores and life expectancy (r=0.761, p<0.001) reinforces this interpretation.

Figure 4. Stature (Clio Infra, 2018) predicts test scores (Lynn and Meisenberg, 2017).

What Table 2 shows is that systematic variation in test scores between populations is a function of systematic variation in net nutritional status. The correlations make no sense if neoracialism is approximately correct, but they make ample sense if test scores reflect net nutritional status. If a country has low test scores you can be somewhat confident that it is poor (R^2=44%) but you can be much more confident that it faces malnutrition (R^2=64%) and especially high disease burdens (R^2=74%). This implies that the causal vector points the other way, from polarization to test scores. Far from explaining global polarization as in the high racialist imaginary, test scores are explained by inequalities in everyday living standards. The evidence from psychometry adds to other evidence of global polarization from economics, anthropometry, and demography that continues to demand explanation.

We have suggested that the current radio silence over systematic variation in test scores fosters neoracialism. We must break this silence and talk openly and honestly about such questions lest we leave the interpretation of these patterns to neoracialists. More generally, an effective rebuttal of neoracialism requires a more compelling explanation of global polarization. Given the discursive hegemony of science, I want to persuade progressives that this requires taking science as the point of departure. My wager is that a much more compelling picture is indeed emerging from the science itself: one that explains global polarization, and more generally systematic variation in human morphology and performance, not in terms of racial taxonomy but in terms of the Heliocentric geometry of our lifeworld, which structures the thermoregulatory, metabolic, and epidemiological imperatives faced by situated populations.
