
The Tyranny of the Isotherms: Evidence from High Resolution Data

One of my great frustrations has been that I have yet to learn cartographic software. But having chanced upon Yale's G-Econ high-resolution data, I have suddenly found myself in a position to create virtual maps. So that's the occasion for revisiting an old obsession. Basically, I have been trying to get a handle on global polarization. Why are wealth and power so goddamn concentrated on our planet?

The short answer is the tyranny of the isotherms. I have identified at least three causal channels from the Heliocentric geometry of our lifeworld to global polarization. First, thermal burdens directly suppress productivity: work intensity on the same machine cannot be sustained at northern rates in southern climes, thanks to the human thermal balance equation—you must take frequent breaks to prevent overheating. This means that low-latitude nations find it hard to compete for global production against high-latitude countries. Biogeography thus directly conditions the global division of labor.

Second, higher thermal burdens closer to the equator come with higher disease burdens. This is simply because germs, parasites, and disease vectors proliferate in the tropical and subtropical zones. (More generally, species diversity and overall biomass are much greater in the tropics.) The attendant health insults indirectly suppress productivity by sapping the strength of populations situated closer to the equator. As Gamble put it in the context of the Out-of-Africa dispersals, the low latitudes were good areas to escape from.


Third, since development is strongly path dependent, ecogeographic factors such as Binford's storage threshold (food cannot be stored when ET is above 15.25ºC; ie within the two darkest bands in the contour map below), the length of the growing season (also captured by ET), and net above-ground productivity (not sufficient for survival on vegetal sources when ET is below 12.75ºC; ie within the two lightest belts in the map below) constrain the long-term trajectories of societies in a manner that is decidedly not neutral. In general, the incentives and possibilities for technical innovation and capital accumulation sensu lato are higher at high latitudes, where the length of the growing season is limited (one must save for the winter or one will literally starve) and net above-ground productivity is low (every calorie requires more work to secure, thereby incentivizing technical innovation). At low latitudes, on the other hand, above-ground productivity is high, the growing season lasts through the year (there are effectively no seasons when ET is above 18ºC; the darkest zone in the map below), and food storage is impossible anyway, so there is little incentive to save for lean times or invest in labor-saving innovations. Binford showed that across the ethnographic present, the tool-kits of hunter-gatherers increase in complexity as ET falls.

[Figure: high-resolution contour map of Effective Temperature (ET)]

Binford's thresholds for ET. Darkest band: ET>18ºC, zero days of frost; second band: 18>ET>15.25, the storage threshold; third band: 15.25>ET>14, Bailey's threshold between cool and warm climes; fourth band: 14>ET>12.75, exclusive reliance on vegetal resources impossible; penultimate band: 12.75>ET>10, frost all year; lightest band: ET<10ºC, polar climate.
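
For readers who want to play with the thresholds themselves, here is a minimal sketch in Python. The ET formula is the standard Bailey (1960) expression, stated here as an assumption rather than quoted from this post: W and C are the mean temperatures (ºC) of the warmest and coldest months.

```python
# Minimal sketch of Bailey's Effective Temperature and Binford's bands as listed in
# the caption above. The ET formula itself is an assumption (the standard Bailey
# 1960 expression), not something reproduced from this post.
def effective_temperature(warmest_c, coldest_c):
    """W and C: mean temperatures (deg C) of the warmest and coldest months."""
    return (18 * warmest_c - 10 * coldest_c) / (warmest_c - coldest_c + 8)

def binford_band(et):
    if et > 18:
        return "darkest band: effectively no seasons, zero days of frost"
    if et > 15.25:
        return "second band: the storage threshold (storage impossible above 15.25)"
    if et > 14:
        return "third band: Bailey's threshold between cool and warm climes"
    if et > 12.75:
        return "fourth band: exclusive reliance on vegetal resources impossible"
    if et > 10:
        return "penultimate band: frost all year"
    return "lightest band: polar climate"

print(binford_band(effective_temperature(warmest_c=27.0, coldest_c=14.0)))
```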

[Figure: high-resolution map of per capita GDP (PPP), 2005]

So what does the empirical evidence have to say? The G-Econ dataset has income, population, and environmental data at the resolution of latitude-longitude grid cells. The above map displays per capita income at purchasing power parity in 2005. We have N=18,683 observations in the sample. We begin by estimating Spearman's rank correlation coefficient between ET and per capita income. Our estimate is extremely strong (r=-0.6103, p=0) and so significant that the p-value is indistinguishable from zero within machine precision. That ET alone explains 37 percent of the variation in per capita income at this fine-scale resolution is nothing short of mind-boggling.
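
For what it's worth, the calculation is trivial to reproduce. Here is a minimal sketch in Python, assuming the G-Econ extract has been saved to a CSV; the file and column names ("gecon_grid.csv", "ET", "ppp_gdp_pc") are placeholders of my own, not G-Econ's field names.

```python
# Minimal sketch of the grid-cell rank correlation, under the assumptions above.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("gecon_grid.csv")            # hypothetical extract of the G-Econ data
df = df.dropna(subset=["ET", "ppp_gdp_pc"])   # keep cells with both fields observed

rho, p = spearmanr(df["ET"], df["ppp_gdp_pc"])
print(f"N={len(df)}, Spearman rho={rho:.4f}, p={p:.3g}")
# rho squared gives the share of rank variation accounted for: 0.61**2 is roughly 0.37.
```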

We had previously tried in vain to test the Heliocentric theory with within-country data. Specifically, it is a prediction of the theory that nations that span isotherms should be regionally polarized by ET. We can now test that prediction. Our estimate for Italy (r=-0.7642, p=0) is even stronger than that for the global sample, and again so significant that p=0 within machine precision. For Chile, our estimate is also very significant (r=-0.3946, p<0.0001) but not as strong as that for the United States (r=-0.5218, p<0.0001) or Argentina (r=-0.6098, p<0.0001). More generally, we find that a statistically significant (p<0.05) relationship holds in 41 countries. We do, however, fail to find a significant correlation in China (r=0.06, p=0.9762).
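
A sketch of the within-country test, continuing with the same hypothetical columns plus a "country" field; countries with too few grid cells are skipped.

```python
# Sketch of the within-country rank correlations, under the same assumptions.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("gecon_grid.csv")                # hypothetical extract, as above
results = {}
for country, grp in df.groupby("country"):
    grp = grp.dropna(subset=["ET", "ppp_gdp_pc"])
    if len(grp) < 30:                             # arbitrary minimum number of cells
        continue
    results[country] = spearmanr(grp["ET"], grp["ppp_gdp_pc"])

n_sig = sum(res.pvalue < 0.05 for res in results.values())
print(f"{n_sig} countries show a significant ET-income rank correlation")
```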

To check for robustness, we return to the global dataset and control for various factors. Recall our baseline estimate for the correlation between ET and per capita income (r=-0.6103, p=0). We begin by controlling for altitude. Our estimate for the partial correlation coefficient is marginally stronger (r=-0.6360, p=0). Controlling for both annual precipitation and altitude, our estimate is marginally weaker (r=-0.6009, p=0). Controlling also for distance from the ocean (which measures continentality—the climate is more temperate closer to the ocean than further inland), we obtain a very similar figure (r=-0.6291, p=0). Similarly, controlling for altitude, precipitation, distance from the ocean, and distance from a major river leaves our estimate essentially unchanged (r=-0.6298, p=0). Controlling further for the area of the grid cell (the "squares" shrink in size away from the equator) reduces our estimate somewhat (r=-0.5750, p=0). This is likely because the area of the grid cell is a function of latitude. We therefore drop area, control for roughness (how much the elevation varies within the grid cell), and keep our other controls. That yields an estimate virtually indistinguishable from our baseline (r=-0.6025, p=0).
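
The partial correlations can be approximated by rank-transforming everything, residualizing both ET and income on the controls, and correlating the residuals. A sketch, again with made-up column names:

```python
# Sketch of a partial rank correlation: rank-transform, residualize on the controls
# by OLS, then correlate the residuals. Column names are assumptions, not G-Econ's.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

def partial_spearman(data, x, y, controls):
    ranks = data[[x, y] + controls].dropna().rank()
    Z = sm.add_constant(ranks[controls])
    res_x = ranks[x] - sm.OLS(ranks[x], Z).fit().fittedvalues
    res_y = ranks[y] - sm.OLS(ranks[y], Z).fit().fittedvalues
    return pearsonr(res_x, res_y)

df = pd.read_csv("gecon_grid.csv")   # hypothetical extract of the G-Econ data
rho, p = partial_spearman(df, "ET", "ppp_gdp_pc",
                          ["altitude", "precipitation", "dist_ocean", "dist_river"])
print(f"partial rho={rho:.4f}, p={p:.3g}")
```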

The stability of the estimate under all these controls gives us good confidence that the Heliocentric theory is essentially right. Of course, the Heliocentric geometry of our lifeworld does not dictate the fate of societies. But it does structure it; quite powerfully so, as we have shown.

I know it feels vaguely racist to tie biogeography to global polarization. But the truth is that biogeography is not racist but rather an alternative to racialism as an explanatory schema for global polarization. Indeed, it is arguably less offensive than culturalist explanations. Britain has 66 million souls; India has 1,339 million. The latter has now been independent for 73 years. Yet India's economy is still smaller than Britain's. Why? Is it not more offensive to suggest that the reason is that the latter is culturally retarded than that it faces heavier biogeographic burdens? Who is more in tension with Boas? Binfordian anthropology or post-processualism (whatever that is)?

Hegemonic Boasian antiracism has itself to blame for the rise of neoracialism. Put simply, racialism was marginalized without replacement. All the explanatory work that racialism was doing was simply left undone. No alternate explanation of global polarization was offered. It was simply assumed that convergence would obtain as nations Modernized. When that failed to materialize, the door was left ajar for the resurrection of racialism to do the same work it had performed in the heyday of racial anthropology.

I believe in the agency of societal actors. Escape from want is definitely possible for all nations. But the problems of the low latitude nations cannot be solved by simply assuming them away. The catastrophic failure of the mid-century dream of Modernization stems precisely from that elementary mistake. Nations are situated communities. Uniform strategies imported from the northern nations, whether economic best practices or modernization theories, won’t solve their concrete problems. Indeed, it cannot be more obvious that unless low latitude nations find ways to mitigate the burdens imposed by the Heliocentric geometry of our lifeworld, they will not escape the tyranny of the isotherms.


The Military Case Against Abandoning the Syrian Kurds


Kurdish fighters in Syria. Source: Maryam Ashrafi.

The US military has previously been asked to perform military operations north of the Euphrates against ISIS. A future order to carry out military operations may well come in some variant of the ISIS-resurgent scenario. In any case, it is prudent for the US military to prepare for such contingencies. Contingency planning demands force planning. Biddle is right that successful military operations against a skilled adversary like ISIS (with many veterans of Saddam's special forces) require both command of the air and ground-force skill. In particular, untrained local auxiliaries may be enough to defend home territory, but ground-force skill is necessary for conquest; viz. ya cannot win and hold territory against a skilled adversary from the air with militias composed of doubtfully trained amateurs as your only allies on the ground. So who has the US military relied on to provide ground-force skill against ISIS?

ISIS's conquests in 2014 reconfigured the orientation of the Syrian war, which became a transnational war in the central region. It was already a proxy war pitting Assad's coalition (Iran, Russia, and the US and its Western allies as a relatively minor participant) against Turkey, Qatar, UAE, and above all Saudi Arabia. All were fighting ISIS; but one of these sides backed other salafi jihadists and the other fought against them, while moderates continued to fight Assad (and the salafi jihadists) without furnishing any credible replacement. Superimposed on this were limited Israeli strikes against Hezbollah and Turkish interventions in northern Syria. But US intervention north of the Euphrates was triggered by the need to protect the Yazidis of Sinjar from ISIS, whose reading of salafism entailed their enslavement (including sexual slavery), which it was promptly carrying out.

US airstrikes on ISIS began immediately, joined by dozens of nations largely on paper. As the US looked for ground-force partners, the CIA attempted to train a few hundred troops. The attempt was abandoned after dozens of deserters were found to have joined ISIS. Henceforth the US relied on the Kurds to provide reliable ground-force skill against the caliphate north of the Euphrates, and on Iranian and Iranian-backed forces south of it.

The US military developed a close working relationship with its Kurdish partners on the ground in Syria and Iraq. Organic links grew over time as both US airmen (as well as special forces) and Kurdish ground units learned how to coordinate effectively in the course of the war against ISIS. The peshmerga (and perhaps the YPG) have been trained by the Israeli armed forces. Both the peshmerga and the YPG proved their effectiveness in ground combat and in synergy with US air forces. For gaining control of territory north of the Euphrates without "putting boots on the ground," US armed forces are exclusively dependent on the Kurds. Every competent general knows this. This is why Mattis resigned.

Abandoning the Syrian Kurds is a classic error arising from misguided economic nationalism and geopolitical calculus. Trump probably thinks that the US would gain economically by selling billions of dollars worth of missile defense systems, and geopolitically by preventing "the loss of Turkey" to Russia. He is wrong on both counts. The first is a drop in the bucket—the economic benefits of the sales are trivial. The second is superficial. The Turks have no interest in seeking Russian protection and abandoning what is left of the alliance with the US—they know it is a losing proposition. Any such move would leave them dependent on the Kremlin; an unacceptable position from their pov.

Trump's abandonment of the Kurds to Turkish depredations has been (very reluctantly) embraced by Stephen Walt. He's right that the move barely reduces the US military footprint in the Middle East. If it were part of a strategic rebalancing or retrenchment, one would evaluate it as a component of the grand strategy as a whole—there's certainly a case to be made for freeing up US resources and attention from the region. But that is decidedly not what is going on. And that brings us to the missing piece in his article. Is the fact that the United States is doing this to bend over backward for Turkey not relevant to the question of whether the move makes sense? Then why does Walt ignore it? And what happens to the rationale once you factor in the quid pro quo?

One has to ask whether the United States should be giving that much quarter to Turkey. (Or to Israel or Saudi Arabia, for that matter.) Walt agrees that it shouldn't—indeed his position is that the US should play hard to get and basically end its special relationships. Well then, if the US shouldn't let Turkey push it around and the withdrawal is not a component of a coherent grand strategy of retrenchment or rebalancing, then the move is revealed for what it is—incompetent, personalistic backroom dealing with little strategic rationale or regard for realities on the ground; a move whose main consequences are to fuck over the Syrian Kurds and reduce US military options.

Foreign policy realism doesn't mean that ethics are irrelevant; it just means that they enter the calculus after the options have been winnowed by the strategic filter, so to speak. Neither Walt nor anyone else has demonstrated how any strategic logic trumps the military rationale for sustained partnership with the Kurds, the simple virtue of keeping your commitments, or the obvious ethics of the Kurdish question. Abandoning the Kurds is strategically misguided, makes life more difficult for US forces, and is obviously ethically challenged.

Anglo-Saxon powers have fucked over the Kurds for a whole century now, beginning with the dismemberment of the Ottoman Empire. The Kurdish question came up immediately. The Iraqi campaign of 1920-1921, when Churchill and gang pioneered air control, was actually the aerial pacification of the "mountain Turkmen" [Kurds]. Once Turkey (and much later Iraq) was roped into the Cold War project, it became classified as Turkey's "internal problem". During the late 1980s, Reagan had George H. W. Bush and Rumsfeld liaise with Saddam as he gassed the Kurds by the thousands—the Federal Republic supplied the dual-use chemicals.

A major reversal of Anglo-Saxon policy on the Kurdish Question began after Desert Storm. The Iraqi Kurds attained significant autonomy from Baghdad as a result of sustained air protection during the intermittent air war on Iraq in the 1990s (which no one has yet documented properly, with the exception of Hersh's New Yorker dispatches). This complicated things once Saddam was ousted and Iraq became a de facto American colony. Compromises were worked out that left the KRG as a de facto state. Meanwhile, the Turks had been crushing their Kurds for decades. But things had cooled and attained a sort of homeostasis in the years before the Syrian war broke out. In 2012, the Kurdish region of Syria was one of the first to become a de facto state. That generated panic in a Turkey already rent by factional struggles.

The past few years have been extremely convoluted. At times it looked as if US-Turkish relations would break down over the Kurdish question. The United States made very explicit promises to the Syrian Kurds, invested significantly in their capabilities, and shielded them from the Turkish boot. This was a military necessity … the Kurds being practically the only cohesive armed actor in the war that the US could work with to attain its main politico-military objective: containing the salafi jihadism that the United States' own Arab allies had bankrolled and armed to the teeth. By all accounts the Kurds delivered. ISIS was dispatched north of the Euphrates by a combination of Kurdish ground-force skill and close US tactical air support.

Now that they are superfluous again, they can be thrown under the bus. Is there some deep geopolitical logic to the reenactment of the Kurdish tragedy? Is it a precondition for working US-Turkish relations? I don't think so. Imho, the United States goes too far to accommodate its odious Middle Eastern allies. In particular, Turkey may have been an important component of the US world position before ICBMs replaced strategic bombers as the main deterrent. But that rationale disappeared by 1970. One can even understand Turkish leverage until the Soviet capitulation. But in the unipolar world, all such rationalizations fall flat on their face. There is no geopolitical reason to trade the Kurds for weapons purchases—what are the Turks going to do? Become a Russian protectorate?


Who Gave Congress the Authority to Forbid Journals from Publishing the Work of Iranian Scholars?

The United States Congress has imposed sanctions on Iran that include forbidding academic journals from publishing the work of Iranian scholars. The international scientific publisher Taylor and Francis has rejected papers citing American sanctions as the reason. Indeed, after receiving an acceptance letter for his paper, an Iranian scientist received a further communication from Taylor and Francis reversing the decision:

Thank you for your submission to Dynamical Systems.

As a result of our compliance with laws and regulations applied by the UK, US, European Union, and United Nations jurisdictions with respect to countries subject to trade restrictions, it is not possible for us to publish any manuscript authored by researchers based in a country subject to sanction (in this case Iran) in certain cases where restrictions are applied. Following internal sanctions process checks the above referenced manuscript has been identified as falling into this category. Therefore, due to mandatory compliance and regulation instructions, we regret that we unable to proceed with the processing of your paper.

We sincerely apologise for the inconvenience caused.

Should you wish to submit your work to another publication you are free to do so and we wish you every success.

Yours faithfully,

Justin

Sent on behalf of Dynamical Systems

Justin Robinson

Managing Editor | Taylor & Francis | Routledge Journals

Mathematics | Statistics | History of Science | Science, Technology & Society

Obviously this is shitty from the perspective of the scholar. But it is also very concerning if you believe, as I do, that the sanctions are inconsistent with the autonomy of our scholarly institutions. An open society does not reproduce itself automatically. The reenactment of an open society is premised on our willingness to defend the elementary norms and practices of free enquiry.

Scholarly peer review is expected to be double-blind: the identities of the author and the referees are not known to each other. The purpose of this norm/institution/behavioral pattern is to make sure that the merits of the argument trump every other consideration. By discriminating on the basis of the birth lottery, the sanctions throw a wrench into this system in a manner that undermines the legitimacy and autonomy of the journals. This is a serious erosion of the norms that underwrite free enquiry.

In short, Congress fucked up. Big time.

So what can we do about it?

Strategically speaking, I think that if a major scientific institution were to take up the cause, it could challenge the legitimacy, and indeed legality of such sanctions. Moreover, it has to be an American institution since the whole business is driven by Congressional foreign policy. I think the most appropriate such institution is the American Association for the Advancement of Science (AAAS). I have therefore launched an online campaign to petition AAAS to challenge such sanctions on the grounds mentioned.

Please consider signing the petition and sharing it with your network. This is kinda important. Here’s the link: https://www.change.org/p/american-association-for-the-advancement-of-science-get-aaas-to-challenge-us-sanctions-that-violate-basic-principles-of-free-enquiry. You should also ask your Congresswoman about it.


Postscript. The European Mathematical Society issued a formal condemnation, whereupon Taylor and Francis reversed their decision. All it took was for a major academic institution to condemn the decision, just as we had imagined. Whatever happened to compliance with the laws of 'UK, US, European Union, and United Nations'? What was explained as (read: US) law enforcement turns out to have been … what? … an innovation by the Taylor and Francis legal team? What was placed in the state turns out to have been civil society all along. It was quite satisfying to see it shot down with such authority! I have obviously declared victory on my change.org petition. Let it be noted that the EMS stepped up.


The Recent Mutation Hypothesis of Modern Behavior is Wrong

Klein (1989, 1994, 1999, 2009), Stringer and Gamble (1993), Mellars (1996), Berwick and Chomsky (2016), Tattersall (2017), and others have speculated that a recent mutation related to neural circuitry was responsible for the emergence of modern behavior. (Although Gamble no longer subscribes to the theory.) The mutation is posited to be recent in the sense that it is assumed to have been under selection in African H. sapiens populations after our split from the last common ancestor with other hominin, in particular Neanderthals and Denisovans. The phenotypic trait under selection is assumed to relate to the wiring of the brain, since it cannot be identified morphologically. In what follows we shall argue that the recent mutation hypothesis cannot be reconciled with the available body of evidence.

Chomsky’s fundamental insight was that the language capacity was confined to our species and uniform across it. To wit, if you raise a Japanese child in Bangladesh, she will speak Bengali like a native. This implied that under all the massive synchronic (cross-sectional) and diachronic (temporal) variation in languages lies a simple structure for which Chomsky coined the term Universal Grammar. Building on that foundation and a deeper appreciation of the role of stochastic factors in evolution, Berwick and Chomsky (2016) argue that the posited mutation endowed the possessor with an innate capacity for generative grammar (2016, p. 87):

At some time in the very recent past, apparently some time before 80,000 years ago if we can judge from associated symbolic proxies, individuals in a small group of hominids in East Africa underwent a minor biological change that provided [the basic structure of generative grammar] … The innovation had obvious advantages and took over the small group. … In the course of these events, the human capacity took shape, yielding a good part of our “moral and intellectual nature,” in Wallace’s phrase. The outcomes appear to be highly diverse, but they have an essential unity, reflecting the fact that humans are in fundamental respects identical, just as the hypothetical extraterrestrial scientist we conjured up earlier might conclude that there is only one language with minor dialectal variations, primarily—perhaps entirely—in mode of externalization.

The precise mechanism posited by Berwick and Chomsky (2016) sounds plausible. But, as we shall show, it cannot be sustained in light of the entire body of evidence. Even if a mutation was responsible for the human capacity for language, we will argue, it cannot explain the emergence of modern behavior. The weight of the evidence suggests that, if such a mutation indeed occurred, it did so much earlier and was demonstrably shared by the Neanderthals and presumably other big-brained hominin during the Upper Pleistocene, 200-30 Ka.

There are three kinds of evidence of relevance to the question at hand: fossil remains, archaeology, and DNA. Briefly, fossil remains in the form of bone fragments, teeth, and postcranial skeletons allow us to reconstruct the morphology of Pleistocene hominins. Hominin taxa (human "species") are defined with reference to standard taxonomic rules on the basis of morphological criteria (more precisely, autapomorphies). The precise details need not concern us beyond the fact that hominin taxa, H. sapiens, H. neanderthalensis, and so on, are defined by morphological traits, not DNA.

The archaeological evidence consists of artifact assemblages that capture the material culture of Pleistocene hominin populations. Modern behavior, inasmuch as it can be operationalized, has to be pitted against this evidence. The clearest evidence of modern behavior in Pleistocene assemblages is symbolic storage—personal ornaments, decorated tools, engraved bones and stones, grave goods, formal use of space, and perhaps style in lithics. We'll return to that evidence presently. But note that this is a micro-local definition. Modern behavior can also be recognized in the overall dynamism suggested by rapid cultural turnover in assemblage artifacts. Indeed, the very idea of modern behavior comes from the contrast between later dynamism and the extraordinary monotony of Oldowan and Acheulean lithic assemblages during the first two million years after the rise of the genus Homo. Oldowan stone tools from c. 2.5 Ma show very little variation over a million years, even as an adaptive radiation sent H. erectus expanding into Asia ('Out of Africa I'). Virtually identical Acheulean handaxes were manufactured for another million years, 1.5-0.5 Ma, throughout the western Old World. (It is speculated that bamboo tools replaced handaxes east of the Movius Line.)

[Figure: the Movius Line]

All of this begins to change in the Upper Pleistocene, 200-30 Ka (we prefer the more neutral climatic label, modified by the stratigraphic term 'Upper' rather than 'Late', which serves to clarify the archaeological criteria being used to define the period). Lithic assemblages suddenly begin to display turnover in style and regional variation. The complexity of artifactual assemblages starts to accelerate. With the Upper Paleolithic in western Eurasia at the end of the Upper Pleistocene, we get the "full package" of modern behavior described by Diamond as "the Great Leap Forward." It is this (eurocentric) body of evidence that suggests a decisive structural break in behavioral patterns, which the neural mutation hypothesis is mobilized to explain. If true, the hypothesis would indeed explain this broad diachronic pattern in Pleistocene assemblages in the western Old World. But as we shall see, the hypothesis runs into insurmountable difficulties when broader diachronic and synchronic patterns are examined with a finer brush.

Great hopes were once placed on DNA. It was assumed that the problem of identifying the molecular basis of many if not most phenotypic traits would be solved sooner or later. All such hopes have since been dashed. The problem has turned out to be much more complex than previously imagined. What DNA has turned out to be good for is identification (DNA as molecular fingerprinting), uncovering phylogenetic relationships (who is closer to whom ancestrally), and importantly for us—with DNA recovered from ancient fossils—population history.

It is extremely difficult to make kosher inferences about Pleistocene history from the DNA of contemporary populations because of what Lahr (2016) calls 'the Holocene Filter', 15-0 Ka. The human population exploded a thousandfold, from a few million c. 15 Ka to a few billion today. Genetic evolution accelerated to 100 times the rate that had prevailed prior to the Neolithic. And massive 'population pulses' (such as the migrations that followed the Neolithic c. 9 Ka and the Secondary Products Revolution c. 5 Ka) transformed the population structure of macroregions, replacing ancient paleodemes of Pleistocene hunter-gatherers with mixed populations in which later arrivals were the predominant element (eg, the Neolithic and Yamnaya pulses in western Eurasia and the Bantu expansion in southern Africa). For all these reasons, modern populations cannot be identified with ancient populations; not locally, not regionally, and not globally. Pleistocene hominin populations, including anatomically modern humans, were very different from contemporary populations in both phenotypic traits and DNA. This means in particular that the frequency of modern-archaic admixture cannot be reliably ascertained from the DNA of living populations. We must rely instead on ancient DNA.

We cannot be sure if other taxa in the genus Homo had the capacity for generative grammar. But if we are to run a horse race in behavioral traits between "moderns" and "archaics", we must have a level playing field. The period of extraordinary monotony in lithic assemblages is one of exponential ("hockey-stick") encephalization. Aiello and Wheeler (1995) have shown that the brain and the gut are the heaviest consumers of metabolic energy, so that for brains to grow, guts had to shrink. This required a shift to higher-quality foods; in particular, carnivory (first through scavenging, and only much later through true hunting). Moreover, it was accompanied by bigger bodies with greater metabolic demand. Bigger brains and bodies in turn required more complex (and need we say, successful) hunting and foraging behavior.

[Figure: the gut-brain feedback cycle]

Source: Aiello and Wheeler (1995)

Dunbar's (1998) social brain hypothesis tied the neocortex ratio of mammals to the size of their social networks, suggesting that it was the computational demands of social life that selected for encephalization. At any rate, the powerful feedback loop generated runaway encephalization, culminating in the Upper Pleistocene in big-brained (~1,400 cc), big-bodied hominin like ourselves.

[Figure: hominin cranial capacity over time]

Source: Spocter (2007). Note the log scale of the time axis; without it we have the familiar pattern of the hockey-stick.

So the genus Homo is too broad for the purpose. It includes taxa with brains not much larger than those of chimps as well as taxa with brains bigger than ours. Clearly, we must compare H. sapiens with other encephalized hominin. We therefore introduce an Encephalization Filter that rules out all but the late archaic hominin. See the next figure.

[Figure: the Encephalization Filter]

What all this means is that in order to compare apples to apples, we must restrict the test universe to Upper Pleistocene Homo. Otherwise we might as well compare modern-day New Yorkers to Neanderthals and conclude that the latter are nonmodern in their behavior (and, by implication, subhuman tout court). The acid test of the recent mutation hypothesis is whether we exhibited more modern behavior than the others during our period of overlap. In sum, the relevant test universe consists of clusters of paleo-demes (prehistoric, geographically situated populations) from multiple taxa in the genus Homo during the Upper Pleistocene, 200-30 Ka.

Ultimately, whether we call these taxa species or subspecies is irrelevant—they are defined by their autapomorphies. Still, Wolpoff and Caspari wonder whether the Upper Pleistocene taxa now called "species" don't in fact satisfy the textbook definition of subspecies. Indeed, Mayr showed that the average splitting time for speciation runs to millions of years. According to Henneberg (1995), the doyen of hominin taxonomy, 'until thoroughly falsified, the Single Hominid Lineage Hypothesis provides less complicated description of hominid evolution … It is also compatible with the uniformitarian postulate — the recent human evolution occurred, and occurs, within one lineage consisting of widely dispersed but interacting populations.' These frames are quite consistent with each other.

Within the single human lineage that is called the genus Homo there has always been a bushy tree with population structure. There is no reason to believe that paleo-demes were completely genetically isolated from each other. Demes exchange genes at the margins of ecoregions, which is how gene flow is maintained within a geographically dispersed species. The point is that under Pleistocene levels of population density and the attendant isolation-by-distance, there obtained intercontinental population structure (ie continental races or discrete subspecific variants) of a sort that has no modern counterpart. Simply put, continental populations during the Upper Pleistocene were considerably further away from each other in neutral phenotypic and genotypic distance than they are today or were in the ethnographic present. Wolpoff and Caspari lost their big battle over the model for the ethnographic present to the '(Mostly) Out of Africa' model. But their multiregional model works nicely for Upper Pleistocene population structure. As Zilhão notes in the European context, 'it is highly unlikely that the Neandertal-sapiens split involved differentiation at the biospecies level.'

Hominin taxa now called "species" are better conceptualized as geographic clusters of paleo-demes situated in macroregions under relatively severe isolation and therefore derivation; maybe not enough derivation for speciation, but perhaps enough for raciation (splitting times of 400-800 Ka; cf. the San split from the rest of the extant human race ~280-160 Ka).


Source: Reich (2018)

During the Upper Pleistocene, Neanderthals were endemic in Europe and on the steppe as far as the Altai, extending their occupation for long periods into southwest Asia. Denisovans are thought to have been endemic to the steppe and Sunda. A hominin population descended from H. erectus was endemic in Asia and Sunda. Further out, H. floresiensis ("the Hobbit") survived in insular isolation on Flores. Reich reports that there was at least one other "ghost population" in Eurasia whose fossils have yet to be identified but whose genetic signal is evident, through admixture, in ancient DNA. H. sapiens was endemic in Africa, where it shared the continent with yet other ghost populations. (On population structure in prehistoric Africa see Scerri et al. 2018.)

So what is the evidence for modern behavior for these taxa during the Upper Pleistocene? So far we only have reliable archaeological evidence for Neanderthals and anatomically modern humans.

Gamble (2013) characterizes the Neanderthal dispersal in western Eurasia as 'an adaptive radiation based on projectile technology and enhanced carnivory with prime-age prey the target' c. 300-200 Ka. Not only did Neanderthals invent hafting; Upper Pleistocene assemblages associated with Neanderthals exhibit disproportionately higher frequencies of prime-age prey than those associated with anatomically modern humans, suggesting that they were the more competent hunters. In the same assemblages we find the first evidence of managed fire. In a separate analysis, Zilhão (2007) notes that

Chemical analysis of two fragments of birch bark pitch used for the hafting of stone knives and directly dated to >44 ka 14C BP showed that the pitch had been produced through a several-hour-long smoldering process requiring a strict manufacture protocol, i.e., under exclusion of oxygen and at tightly controlled temperatures (between 340 and 400ºC) (Koller et al., 2001). The Königsaue pitch is the first artificial raw material in the history of humankind, and this unique example of Pleistocene high-tech clearly could not have been developed, transmitted, and maintained in the absence of abstract thinking and language as we know them; it certainly requires the enhanced working memory whose acquisition, according to Coolidge and Wynn (2005), is the hallmark of modern cognition.

Gowlett (2010) suggests that Neanderthal hearths were a game changer. Not only did fire act as an external stomach by breaking down enzymes in meat, thus reducing gut loads; it also extended the day for social interactions. More compelling evidence on Neanderthal social life comes from 'super sites', recognized from assemblages with massive deposits of debris accumulated over very long periods of time. 'They marked,' Gamble (2013) notes,

… a shift in hominin imagination from conceiving the world in a horizontal manner to stacking it vertically – a representation of accumulated time. … Once established, the super-site niche set the stage for thinking differently about places. They now formed part of the hominins’ distributed cognition.

The supersites suggest that Neanderthals had indeed 'discovered society'; surely an important consideration in the Neanderthal question—whether they were biologically capable of modern behavior. If they had complex social lives, it becomes hard to classify them as subhuman.

However, the litmus test for artifact assemblages is evidence of symbolic storage. Shea (2011) seconds João Zilhão and Francesco d'Errico in noting that 'finds of mineral pigments, perforated beads, burials and artifact-style variation associated with Neanderthals challenge the hypothesis that symbol use, or anything else for that matter, was responsible for a quality of behavioral modernity unique to Homo sapiens.' Caspari et al. (2017) note that,

Neanderthals were capable of complex behaviors reflecting symbolic thought, including the use of pigments, jewelry, and feathers and raptor claws as ornaments. All of this is probably evidence of symbolic social signaling, complementing the evidence of complexity of thought demonstrated by the multi-stage production of Levallois tools. Neanderthals were human.

But problems for the recent mutation hypothesis don’t end with behaviorally modern Neanderthals. Shea (2011) explains how any joint examination of the diachronic and synchronic pattern of the actual behavior of anatomically modern humans as they dispersed from Africa undermines the recency hypothesis:

Evidence is clear that early [anatomically modern] humans dispersed out of Africa to southern Asia before 40,000 years ago. Similar modern-looking human fossils found in the Skhul and Qafzeh caves in Israel date to 80,000 to 120,000 years ago. Homo sapiens fossils dating to 100,000 years ago have been recovered from Zhiren Cave in China. In Australia, evidence for a human presence dates to at least 42,000 years ago. Nothing like a human revolution precedes Homo sapiens’ first appearances in any of these regions. And all these Homo sapiens fossils were found with either Lower or Middle Paleolithic stone tool industries.

Shea (2011) appreciates the evidence from Howiesons Poort c. 90-70 Ka and other sites of early ‘efflorescences’. But that deepens the paradox. For

… [if] behavioral modernity were both a derived condition and a landmark development in the course of human history, one would hardly expect it to disappear for prolonged periods in our species’ evolutionary history.

Habgood (2007) examined the evidence for modern behavior in Pleistocene Sahul and concluded:

The proposal by McBrearty and Brooks (2000) and Mellars (2006) that the complete package of modern human behaviour was exported from Africa to other regions of the Old World and ultimately into Greater Australia between 40–60ka BP is not supported by the late Pleistocene archaeological record from Sahul….

O'Connell and Allen (2007) put things more bluntly. Sahul's archaeological record, in their opinion, 'does not appear to be the product of modern human behaviour as such products are conventionally defined.' Similar observations may be made for southern Africa, southern Eurasia, eastern Eurasia, or Sunda, where modernity is not evident until 30-10 Ka, ie tens of thousands of years after the arrival of "moderns", if not later.

The greatest problem with the recent mutation hypothesis might very well be that it necessarily implies a modern-nonmodern dichotomy, since you either have the trait for generative grammar or you don't. In that sense it is very much like Krugman's It theory of global polarization. This dichotomy pronounces not only other archaic hominin but also many anatomically modern humans to be nonmodern, and hence not fully human, well into the ethnographic present. Indeed, it has proven exceedingly difficult to quarantine Neanderthals from anatomically modern humans, for there is no hyperplane in the space of behavioral traits that includes all hunter-gatherer societies in the ethnographic present while excluding Neanderthals. As Zilhão (2011) notes,

Many have attempted to define a specifically “modern human behavior” as opposed to a specifically “Neandertal behavior,” and all have met with a similar result: No such definition exists that does not end up defining some modern humans as behaviorally Neandertal and some Neandertal groups as behaviorally modern. [Emphasis added.]

In a comment on Henshilwood and Marean (2003), Zilhão captures the essence of the conundrum induced by the implied dichotomy:

… the real problem is that (1) archeologically visible behavioral criteria designed to include under the umbrella of “modernity” all human societies of the historical and ethnographic present are also shared by some societies of anatomically nonmodern people and (2) archeologically visible behavioral criteria designed to exclude from “modernity” all known societies of anatomically nonmodern people also exclude some societies of the historical and ethnographic present.

We can think of this as "merely" a political issue and tell ourselves that we should "just do the science; let the chips fall where they may." Or we may ask ourselves whether such essentialist schemata are doing as much explanatory work as they are supposed to.

At this point we can anticipate the following rebuttal:

Why is it such a big deal if the efflorescences get extinguished, or if paleo-demes fail to exhibit modern behavior for tens of thousands of years after the posited mutation? Since the capacity for generative grammar is uniform across the species, no denial of modernity is implied: any evidence of modern behavior by one paleo-deme in the species is enough to confirm the potentiality for modern behavior for all demes in the species. These may appear in a staggered manner. But that can be easily explained by adding that the capacity for generative language may only be a necessary condition for modern behavior, and that other things would also have to fall into place before any 'full flowering' obtains.

This is a fair response. And we speculate that something like this was indeed the case. But notice that, in light of the evidence relayed above, the "one-strike-in" rule for taxa immediately implies that Neanderthals were behaviorally modern as well. This is indeed what we have been arguing all along. And if that is the case, then the mutation had to have occurred way, way earlier—at least hundreds of thousands of years before the 80 Ka posited by Berwick and Chomsky (2016). This interpretation is in line with the conclusions offered in a recent volume edited by Brantingham et al. (2004), published under the title The Early Upper Paleolithic beyond Western Europe:

If there is a common evolutionary cause, phylogentic [sic] or otherwise, it is rooted much deeper in evolutionary time and is largely independent of the events tracked in the Middle-Upper Paleolithic transition. [Emphasis added.]

Zilhão (2007) concurs with the last assessment, offering 400 Ka as a plausible date for the arrival of the biological wetware for modern behavior. But if that is true, then it raises the daunting question of why said mutation did not generate an adaptive radiation. Perhaps it corresponds to the adaptive radiation associated with the dispersal of H. heidelbergensis 600-400 Ka ('Out of Africa II') that Gamble (2013) ties to the mastery of controlled fire. But that possibility is unlikely to satisfy those who seek to explain 'the revolution that wasn't' (McBrearty and Brooks 2000; the reference is to "the Human Revolution"), for a genetic mutation c. 400 Ka cannot then explain the adaptive radiation evident c. 50 Ka.

Bar-Yosef (2002) suggests that a better analogy for modern behavior would be the Neolithic Revolution, with its staggered appearance. Indeed, what is manifest is that global polarization made its first appearance not in the Neolithic but during the Upper Pleistocene. The diachronic and synchronic patterns revealed by archaeology are consistent with framing modern behavior during the Pleistocene as an emergent form of sociocultural complexity that makes a staggered appearance and therefore necessarily generates global polarization. It was the First Great Divergence, whose roots may perhaps be sought in biogeography.

[Figure: Effective Temperature (ET)]

Source: Gamble (2013)

The weight of biogeography is well captured by Bailey's (1960) Effective Temperature (ET), which measures both the basic thermal parameter of the ecoregion and the length of the growing season. Binford (2001) shows how ET structures the lifeworld of hunter-gatherers, opening some doors while closing others, rigging the dice in favor of some and against others. For as Gamble (2013) notes mercilessly, 'the low latitudes were good areas to disperse away from …'. Proctor (2003) acidly calls this discourse "Out of Africa! Thank God!" But is there any doubt that escaping the tyranny of the isotherms was an unambiguous advantage, as it has been ever since? Our position is that the antiracist suspicion of biogeography is completely mistaken. Biogeography is not racist but rather an alternative to racialism as an explanatory schema for global polarization.

What of the fate of the Neanderthals? The weight of the evidence suggests that they were not wiped out but rather absorbed into the much larger incoming populations of anatomically modern humans. They may have been as dynamic as H. sapiens. But there was a decisive difference. Our race had much longer life histories. This suggests that the adaptive radiation witnessed c. 50 Ka may be related to runaway K-selection in anatomically modern humans. With grandparents around to pass on know-how and know-where, the fidelity of intergenerational transmission of acquired innovations must have been higher. And larger populations could be expected to generate more innovations. It may be demographic and life history variables that explain the diachronic and synchronic patterns of our deep history.


Source: Caspari and Wolpoff.


Population History, Climatic Adaptation, and Cranial Morphology

Physical anthropology in general and craniology in particular have quite a sordid history. The size of the skull was of great interest to scientific racialists, who seized on minor differences in averages as evidence of the differential capacity of the "races" for civilization. As we have shown before and shall see again in what follows, race gives us a very poor handle on cranial morphology, and on human morphology more generally. This does not mean that there is little systematic variation, ie that all variation is between individuals within populations. To the contrary, human morphology is geographically patterned. Situated local populations or demes differ systematically and significantly from each other, generating smooth geographic clines. In particular, as the next figure shows, our species obeys Bergmann's rule—bigger variants are found in colder climes.


Source: Ruff (1994)

Of course, skull size and body size scale together. We can read this off the US military anthropometric database. So we should expect the bigger people of colder climes to have bigger skulls as well. Indeed, if Ruff’s thermoregulatory theory is right, particular features of skull morphology should be adapted to the macroclimate even more than the postcranial skeleton (the bit below the head).  

It is easy enough to recover the monotonic relationship between climatic variables and cranial measurements. But there is a very serious problem. Suppose that we can rule out nutritional and other environmental influence, say because we know the parameter is very slow-moving. That does not mean that all systematic variation in that parameter is then due to the bioclimate. For it may instead be due to random drift. Indeed, we know that founder effects, isolation-by-distance, and genetic drift can generate geographic clines that confound the bioclimatic signal. There is mounting evidence that human morphology, just like the human genome and linguistic diversity, contains a strong population history signal. 

Population history signals in DNA, RBC polymorphisms, and craniometrics are congruent.

Segments of DNA under selection do not preserve the signal from population history well; the bits not under selection—"junk DNA"—do. Similarly, information on population history can be recovered from traits of cranial morphology that are neutral, ie not under selection. Scholars have used genetic distance to control for population history in order to recover information on bioclimatic adaptation from morphology. In what follows, I will show how we can use cranial morphology itself to the same effect.

The Howells Craniometric Dataset contains 82 linear measurements on 2,524 skulls from 30 populations located on five continents. We will be working exclusively with that dataset. Let us begin with skull size. The five continental "races" that Wade thinks are real—one each for Europe, Africa, Asia, Australia, and the Americas—explain 6 percent of the variation in skull size. Moreover, not even one of the mean differences between said "races" is statistically significant. See Table 1.

Table 1. Mean cranial capacity by continental "race".

| Continent | Male mean | t-stat | Std |
|-----------|-----------|--------|-----|
| Europe    | 1,353     | 0.275  | 73  |
| Pacific   | 1,348     | 0.220  | 97  |
| Africa    | 1,281     | -0.458 | 95  |
| Asia      | 1,334     | 0.074  | 101 |
| America   | 1,322     | -0.045 | 91  |

Source: Howells Craniometric Dataset. Estimates in italics are insignificantly different from the global mean at the 5 percent level.

While demes are lumpy, they can't be called races; there are tens of thousands of them. And though races are useless fictions, demes or situated local populations are very useful fictions. Put another way, demes are to anthropology what particles are to physicists and representative rational agents are to economists. Recall the tyranny of distance over land before the late nineteenth century. Under such conditions, which prevailed well into the ethnographic present, populations were situated locally and isolated from nearby and faraway demes; from the latter more than the former, and smoothly so. Dummies for the demes alone explain 44 percent of the variation. What needs to be explained is this systematic component.

In order to understand variation in skull size, we first note that Binford's Effective Temperature (ET, estimated as a linear function of latitude) is a strong correlate of skull size (r=-0.547 for men, r=-0.472 for women) and of orbital size, ie the size of the eye socket (r=-0.464 for men, r=-0.522 for women). See the top panel of the next figure. Moreover, as you can see in the bottom-left graph, orbital size is a strong predictor of skull size (r=0.506), raising an intriguing pathway of selective pressure that ties cranial variation not to thermal parameters but to variation in light conditions. Still, the bottom-right graph shows that ET is correlated with skull size even after controlling for orbital size.

If we consider only the systematic component, orbital size alone explains half the variation in skull size. So we should try to understand variation in this mediating variable as well. 

A straightforward way to extract the population history signal is to isolate morphological parameters that are uncorrelated with climatic variables and use these to construct a neutral phenotypic distance measure. We identify 14 linear measurements in the dataset that are uncorrelated or very weakly correlated with ET. Assuming that these are neutral traits not under selection, we use them to compute phenotypic distance from the San. (We use Pearson's correlation coefficient between standardized 14-vectors for the San and each of the 30 populations as our measure of phenotypic distance.) Basically, we are using the fact that the San are known to have been the first to diverge from the rest of us, so that the degree of correlation in these neutral measures contains information on the genetic distance between the two populations. We also know that genetic distance is proportional to geographic distance from sub-Saharan Africa due to our specific population history. So if our measure is capturing population history, it should be correlated with geographic distance. The next figure shows that this is indeed the case.
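
Here is a minimal sketch of that construction, run on synthetic data since the 14 measurement names are not listed here; the trait and population labels are placeholders of my own, not the Howells variable names.

```python
# Sketch of the neutral-distance construction: standardize the (assumed) climate-
# uncorrelated traits, average within populations, and correlate each population's
# mean vector with the San vector. All names below are illustrative placeholders.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
traits = [f"trait_{i}" for i in range(14)]            # stand-ins for the 14 measures
pops = ["San", "Norse", "Buriat", "Zulu", "Peru"]     # illustrative population labels
skulls = pd.DataFrame(rng.normal(size=(500, 14)), columns=traits)
skulls["population"] = rng.choice(pops, size=500)

z = (skulls[traits] - skulls[traits].mean()) / skulls[traits].std()
z["population"] = skulls["population"]
pop_means = z.groupby("population")[traits].mean()

san = pop_means.loc["San"]
similarity = pop_means.apply(lambda row: pearsonr(row, san)[0], axis=1)
print(similarity.sort_values())   # lower correlation = greater neutral distance
```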

If we are right about our reasoning, we are now in a position to decompose morphological variation into neutral, bioclimatic and non-systematic factors. We begin with our estimates of pairwise correlation between our neutral factor (phenotypic distance from San) and ET on the one hand and selected morphological variables on the other. 

Table 2. Spearman's correlation coefficients.

|       |         | Skull size | Orbital size | Cranial Index | Nasal Index |
|-------|---------|------------|--------------|---------------|-------------|
| Men   | Neutral | -0.488     | -0.575       | 0.215         | 0.257       |
|       | ET      | -0.547     | -0.464       | -0.061        | 0.583       |
| Women | Neutral | -0.437     | -0.570       | 0.117         | 0.301       |
|       | ET      | -0.472     | -0.522       | -0.025        | 0.543       |

Source: Howells Craniometric Dataset. Estimates in bold are significant at the 5 percent level.

We see that the Cranial Index (head breadth/head length) is uncorrelated with both ET and Neutral, while orbital size and skull size are strongly correlated with both. Interestingly, the Nasal Index (nasal breadth/nasal length) is a strong correlate of ET but not of our neutral factor, implying that nasal morphology contains a strong bioclimatic signal and a weak population history signal. These results are only suggestive, however. In order to nail down the bioclimatic signal we must control for population history, and vice versa.

Table 3. Spearman's partial correlation coefficients.

|                             |       | Skull size | Orbital size | Cranial Index | Nasal Index |
|-----------------------------|-------|------------|--------------|---------------|-------------|
| Neutral controlling for ET  | Men   | -0.533     | -0.609       | 0.221         | 0.260       |
|                             | Women | -0.488     | -0.659       | 0.118         | 0.348       |
| ET controlling for Neutral  | Men   | -0.584     | -0.513       | -0.080        | 0.584       |
|                             | Women | -0.517     | -0.625       | -0.027        | 0.564       |

Source: Howells Craniometric Dataset. Estimates in bold are significant at the 5 percent level.

We see that both population history and bioclimatic signals are present in skull size and orbital size; neither is present in the Cranial Index; and only the bioclimatic signal is present in the Nasal Index. This is consistent with known results in the field. Not just the Nasal Index but a bunch of other traits in facial morphology exhibit a strong bioclimatic signal, suggesting strong selective pressure on the only part of the human body exposed to the elements even in winter gear and even in the circumpolar region. 

Table 4 shows the percentage of variation explained by phenotypic distance from the San and by ET. We see that Neutral and ET each explain roughly 11 percent of the variation in skull size and orbital size; neither explains the Cranial Index; and ET explains 15.6 percent of the variation in the Nasal Index. More than three-fourths of the variation in these variables is not explained by either.

Table 4. Apportionment of individual craniometric variation.

|         | Skull size | Orbital size | Cranial Index | Nasal Index |
|---------|------------|--------------|---------------|-------------|
| Neutral | 11.1%      | 11.7%        | 1.5%          | 4.5%        |
| ET      | 11.8%      | 10.2%        | 1.2%          | 15.6%       |
| Error   | 77.1%      | 78.1%        | 97.3%         | 80.0%       |

Source: Howells Craniometric Dataset. OLS-ANOVA estimates after controlling for sex.
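
The apportionment itself can be sketched as an OLS fit followed by an ANOVA decomposition of the sums of squares. The snippet below runs on synthetic data with assumed column names ("skull_size", "sex", "ET", "neutral"); it illustrates the mechanics, not the exact estimator behind Table 4.

```python
# Rough sketch of an OLS/ANOVA apportionment on synthetic data; column names and
# the data-generating process are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 500
demo = pd.DataFrame({
    "sex": rng.choice(["M", "F"], size=n),
    "ET": rng.uniform(8, 26, size=n),          # effective temperature of the deme
    "neutral": rng.normal(size=n),             # proxy for distance from the San
})
demo["skull_size"] = (1400 - 5 * demo["ET"] - 20 * demo["neutral"]
                      + 50 * (demo["sex"] == "M") + rng.normal(0, 60, size=n))

model = smf.ols("skull_size ~ C(sex) + neutral + ET", data=demo).fit()
aov = anova_lm(model, typ=2)                   # Type II sums of squares
print(aov["sum_sq"] / aov["sum_sq"].sum())     # share of variation per term
```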

Table 5 displays the portion of systematic (interdeme or interpopulation) variation explained by population history and ET. Interestingly, the population history signal is stronger than the bioclimatic signal for systematic variation in skull size and especially orbital size. Neutral phenotypic distance from the San, our population history variable, explains 37 percent of the systematic variation in orbital size and 27 percent in skull size. ET explains roughly 22-24 percent of both. Population history and ET together explain more than half the systematic variation in both size variables. ET explains 28 percent of the systematic variation in the Nasal Index, likely reflecting morphological adaptation to the macroclimate. The less said about the Cranial Index the better.

Table 5. Apportionment of systematic craniometric variation.

|         | Skull size | Orbital size | Cranial Index | Nasal Index |
|---------|------------|--------------|---------------|-------------|
| ET      | 24.0%      | 22.1%        | 1.5%          | 28.5%       |
| Neutral | 26.9%      | 36.8%        | 4.2%          | 10.0%       |
| Error   | 49.1%      | 41.1%        | 94.4%         | 61.5%       |

Source: Howells Craniometric Dataset. OLS estimates adjusting for sex-ratio.

Remarkably, the average shares of both ET and population history are more than a sixth but shy of a fifth, together accounting for roughly 37 percent of the systematic variation across all four variables. If we drop the Cranial Index and average over the other three morphological variables, ET and population history explain about 24 percent of the variation each. In the horse race between population history and ET, we have a draw. The balance is more uneven in the cranial size variables, where population history has the upper hand. What happens if we introduce race dummies?

Table 6. Apportionment of systematic craniometric variation, with continental race dummies.
                 Skull size   Orbital size   Cranial Index   Nasal Index
ET               23.8%        31.2%          1.1%            16.3%
Neutral          24.4%        19.5%          1.2%            5.1%
Europe dummy     4.1%         0.0%           0.7%            7.0%
Asia dummy       0.0%         0.0%           16.5%           6.1%
America dummy    0.0%         3.4%           0.2%            3.9%
Pacific dummy    0.3%         0.9%           0.9%            0.5%
Error            47.4%        45.0%          79.5%           61.3%
Source: Howells Craniometric Dataset. Estimates in bold are significant and those in italics are insignificant at the 5 percent level. OLS estimates adjusting for sex-ratio.

We see that race is pretty much a useless fiction. It gives us no handle at all on craniometric variation. The best we can say is that Asian heads are more globular. Interestingly, ET and population history exchange rankings in explaining orbital size after controlling for race. But the overall picture is unchanged. 

In order to be sure that we are not picking up spurious correlations, we fit linear mixed-effects models. We allow for random effects by deme and admit fixed effects for sex and race. We report the number of continental race dummies (out of four) that are significant in each regression.

Table 7. Linear mixed-effects model estimates.
                                   Skull size   Orbital size   Cranial Index   Nasal Index
Intercept                          Yes          Yes            Yes             Yes
Sex dummy                          Yes          Yes            Yes             Yes
Deme random effect                 Yes          Yes            Yes             Yes
Neutral                            -7.153       -1.208          0.972           2.620
ET                                 -0.895       -0.209         -0.155           0.590
Race dummies significant (of 4)    0            0               1               2
Source: Howells Craniometric Dataset. Estimates in bold are significant at the 5 percent level.
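A minimal sketch of the kind of mixed-effects specification described above, with a random intercept by deme and fixed effects for sex and continental race. Column names and the howells dataframe are hypothetical placeholders, and the exact specification estimated here may differ.

```python
import statsmodels.formula.api as smf

def fit_mixed(df, trait):
    """Linear mixed-effects model: random intercept by deme, fixed effects
    for sex and continental race dummies, plus the Neutral and ET gradients."""
    model = smf.mixedlm(
        f"{trait} ~ C(sex) + C(race) + neutral + et",
        data=df,
        groups=df["deme"],
    )
    return model.fit(reml=True)

# Hypothetical usage:
# result = fit_mixed(howells, "skull_size")
# print(result.summary())  # inspect the coefficients on neutral and et
```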

Our main results are robust to the inclusion of random effects for demes. The Cranial Index is bunk. The Nasal Index contains a strong bioclimatic signal but an insignificant population history signal. The gradients of ET and phenotypic distance from the San are significant for skull size and orbital size. Note that the sex dummy is always significant due to sexual dimorphism (the dimorphism index for skull size in the dataset is around 1.15). But race dummies are rarely significant. Indeed, of the 16 race dummies in the above regressions, only 3 were significant. And these had mostly to do with "the wrong latitude problem": New World populations can be expected to be morphologically adapted to the paleoclimate of Siberia, so it is not surprising that the dummy's coefficient absorbs that systematic error.

The results presented above are congruent with known results from dental, cranial, and postcranial morphology. The basic picture that is emerging suggests that some skeletal traits are developmentally-plastic so that they reflect health status (eg stature, femur length); some are selectively neutral (eg temporal bone, basicranium, molars) so that they can be used to track population history; and finally, some have been under selection and likely reflect bioclimatic adaptation (eg, nasal shape, orbital size, skull size, pelvic bone width). 

In the 1990s and the early 2000s there was a sort of panic in physical anthropology related to genetics. The genomic revolution threatened to put people out of business. But it has become increasingly clear that, at least for this purpose, the genomic revolution has turned out to be a dud. Most efforts to tie phenotypic variation to genomic variation have failed utterly. So far the best use of DNA for understanding human variation has turned out to be as a fancy version of fingerprinting. So if you have ancient DNA samples, you can track population history. It has since been shown that morphological variation itself can be used to track population history just as effectively as DNA markers. With the advent of new techniques such as geometric morphometrics, the resurgence of interest in understanding morphological variation, and the manifest failure of DNA as the key to understanding variation in human morphology, we are truly in the midst of an unannounced golden age in physical anthropology.


In lieu of references: See the splendid work by, among others, Brace (1980), Beals (1983, 1984), Ruff (1994), Relethford (2004, 2010, 2017), Roseman (2004), Harvati and Weaver (2006), von Cramon-Taubadel (2014), and Betti et al. (2010).

 

Thinking

When Was the Industrial Revolution?

Metrics of everyday living standards are problematic. Commonly used economic statistics like real median income, real median household consumption, real per capita income et cetera rely on fallible national economic statistics. Above all, National Income Accounting may be blind to integral aspects of the standard of living. Accounts may be fudged by governments in countries with weak independent institutions. Finally, such statistics rely on judgements encoded in adjustments for representative consumption bundles, purchasing power and effective exchange rates. Of course, the entire enterprise relies quite heavily on assumptions about the plausibility of reducing human well-being to consumption bundles.

Anthropometric alternatives such as stature and BMI are confounded by morphological adaptation to the paleoclimate. Bigger bodies retain heat better (they have less surface area relative to their volume), so populations adapted to warmer climes tend to be smaller than those adapted to colder climes, in accordance with Bergmann’s rule. This means that the cross-sectional variation of stature and BMI cannot be interpreted straightforwardly as reflecting differences in everyday living standards. However, time-variation in anthropometric measures (and the cross-section of dynamic quantities) can be usefully interpreted as measuring changes in living standards. To wit, the Dutch-Indian difference in contemporary stature is less reliable than the Dutch-Indian difference in gains in stature (say, over the past century).

Actuarial alternatives are more promising. Mortality and morbidity data capture health insults that are directly indicative of net nutritional status. Since the latter is an irreducibly joint function of disease environment and nutritional intake, it goes to the heart of everyday living standards. Actuarial alternatives such as life expectancy are not confounded by adaptation to the paleoclimate since there is no equivalent of Bergmann’s rule for life history variables. Instead variables such as life expectancy capture contemporaneous environmental burdens—epidemiological and thermal—that are indeed of interest to those investigating variation in living standards.

Table 1a. Effective Temperature and Living Standards.
Effective Temperature   PCGDP    Stature (cm)   Life Expectancy
ET < 14                 23,537   174            76
14 < ET < 16            12,526   171            73
ET > 16                 8,439    167            68
Source: Clio Infra, Binford (2001), author’s computations. Population-weighted means for N=99 countries.
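Population-weighted band means of this kind can be reproduced along the following lines. This is only a sketch: the column names (et, pcgdp, stature, life_exp, population) and the countries dataframe are hypothetical placeholders for the underlying Clio Infra data.

```python
import numpy as np
import pandas as pd

def weighted_band_means(df):
    """Population-weighted means of PCGDP, stature, and life expectancy
    within the three ET bands used in Table 1a."""
    bands = pd.cut(df["et"], bins=[-np.inf, 14, 16, np.inf],
                   labels=["ET < 14", "14 < ET < 16", "ET > 16"])

    def wmean(group):
        return pd.Series({col: np.average(group[col], weights=group["population"])
                          for col in ["pcgdp", "stature", "life_exp"]})

    return df.groupby(bands).apply(wmean)

# Hypothetical usage:
# print(weighted_band_means(countries).round(1))
```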

These differences explain why stature (r=-0.736, p<0.001) is a stronger correlate of Effective Temperature (ET) than life expectancy (r=-0.360, p<0.001) or PCGDP (r=-0.378, p<0.001). They also explain why, controlling for income, ET is uncorrelated with life expectancy (t-stat=-1.5) but not with stature (t-stat=-8.0). Whatever causal effect ET has on life expectancy is explained by variation in per capita income. This is not true of stature, presumably because ET is correlated with variation in the paleoclimate, which is causally related to stature and other body size variables via Bergmann’s rule.
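The income-controlled comparison can be sketched as follows, again with hypothetical column names; this is one plausible specification, not necessarily the exact one behind the t-statistics quoted above.

```python
import numpy as np
import statsmodels.formula.api as smf

def et_tstat(df, outcome):
    """t-statistic on ET after controlling for log per capita income."""
    fit = smf.ols(f"{outcome} ~ et + np.log(pcgdp)", data=df).fit()
    return fit.tvalues["et"]

# Hypothetical usage:
# print(et_tstat(countries, "life_exp"))  # expected to be statistically weak
# print(et_tstat(countries, "stature"))   # expected to remain strongly negative
```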

Screen Shot 2018-11-10 at 12.50.06 AM.png

Parenthetically, we note that if we use Binford’s thresholds for storage (ET=15.25) and terrestrial plant dependence (ET=12.75), then we obtain a version of Table 1a that is less effective at partitioning modern societies by living standards. See Table 1b below. The map above displays Binford’s thresholds.

Table 1b. Effective Temperature and Living Standards, using Binford’s thresholds.
Effective Temperature    PCGDP    Stature (cm)   Life Expectancy
ET < 12.75               22,012   174            75
12.75 < ET < 15.25       24,164   174            78
ET > 15.25               8,781    167            68
Source: Clio Infra, Binford (2001), author’s computations.

ET is very nearly a linear function of absolute latitude (r=-0.944, p<0.001). It is meant to capture the basic thermal parameter of the macroclimate. Together, temperature, precipitation and topography (elevation, terrain, soil, drainage) structure the ecology of situated populations in the ethnographic present just as they did in prehistory. Economic history, prehistory, and anthropology are not as far from each other as they seem. But we have digressed far enough. Let us return to living standards in Britain.

precipitation_g.jpg

If you accept my argument that life expectancy is the best measure of everyday living standards we have, then the transformation of British living standards can be dated quite precisely. The essence of the Malthusian Trap was that real gains in living standards could not be sustained. Given the energetic constraints of preindustrial economies, population growth wiped them out. Thus we find that forty was a rough upper bound on British life expectancy under the Malthusian Trap. The British Industrial Revolution, 1760-1830, had no discernible impact on British life expectancy. It is only in 1870 that British life expectancy begins to pull away from forty. Fifty was only breached in 1907; sixty in 1930; seventy in 1950; and eighty in 2000. Britons could expect to live twice as long at the end of the 20th century as in 1870 or 1550. Of the 40 years of life expectancy gained over the past 150 years, 20 were gained in the 40-year period 1910-1950; 10 have been gained in the 68 years since 1950; and 10 were gained in the first 40 years of the secondary revolution, 1870-1910. 1910-1950 is the hockey-stick that takes you from the turn-of-the-century classical to the mid-century modern.

GreatDivergence

The evidence from stature is also consistent with this periodization. The problem with using body size variables like stature is that, unlike life expectancy, we don’t have a Malthusian ballpark against which to judge modern morphology. As I have explained, European body size over the very long run reflects population history. European gracilization (shrinking bodies) and decephalization (shrinking brains) since the medieval period is an active area of investigation, although still poorly understood.

body_size

However, time-variation of stature in the ethnographic present can be interpreted as measuring time-variation in everyday living standards. That is all we really need to date the departure. And that too points to the last quarter of the nineteenth century as the beginning of the divergence. Most of the gains in stature were concentrated in the period 1920-1960, corroborating the finding from British life expectancy. The hockey-stick is a story of the early twentieth century.

Stature

The empirical evidence from both anthropometric and actuarial metrics suggests that it is time to cut the British Industrial Revolution down to size. It is time to recognize it for what it was: a “revolution” largely confined to cotton textile manufacturing that pointedly failed to transform everyday living standards in Britain. The real departure came with the secondary industrial revolution, 1870-1970, which was not confined to Britain but was rather a transatlantic affair. It witnessed the generalized application of machinery powered by fossil fuels to perform work everywhere from farms to factories. More generally, it was characterized above all by the increasingly ubiquitous application of science and technology to concrete problems.

But there was much more at play than technology and knowhow. For it involved a massive integration of the globe that, as Geyer and Bright put it, destroyed the capacity of the world’s macroregions to sustain autonomous histories. The onset of their ‘global condition’ takes place in the middle decades of the nineteenth century. The key to this transformation was rail. Sail was competitive with steam on the open ocean through the nineteenth century. The topology of the world economy thus couldn’t have been transformed by cheap and efficient transport by steamship, because sailing ships were already cheap and efficient.

The disconnectedness of the world economy was not a function of weak connections between macroregions. Instead it was local; defined by the tyranny of distance in the interiors of the great landmasses of worlds old and new. Until the advent of rail, transport over land was prohibitively expensive; condemning lands far from waterways to insulation. The sea-borne world economy was correspondingly limited to the maritime world. A larger, more integrated and more intrusive world economy emerged with rail that allowed the bounty of the interior to be sold on the world market. The international division of labor that emerged on this iron frame had much more bite than the one that characterized the world economy confined to the maritime world.

Ghost acres had little bearing on the British kitchen table until the late nineteenth century. To be sure, Britons had been addicted to imported drug foods (sugar, tea, coffee, tobacco) from slave plantations for centuries. But as late as 1870, only 10 percent of British meat was imported. By 1910, Britain was importing 40 percent; largely beef from Argentina and lamb from New Zealand. The ghost acres finally increased the proportion of high quality foods in the British diet. Recall that beef is extraordinarily land-intensive. In the present-day US, according to a recent study, producing one megacalorie (Mcal, i.e. 1,000 kcal) of beef requires 147 square meters of land, compared to just 5 square meters for chicken and pork. Since land productivity was considerably lower then, beef must have been even more land-intensive than it is today. The ghost acres were thus absolutely necessary for the transformation of British diets and therefore British living standards.
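To make the land-intensity gap concrete, the figures just quoted imply roughly

\[
\frac{147\ \mathrm{m^2\ per\ Mcal\ (beef)}}{5\ \mathrm{m^2\ per\ Mcal\ (chicken\ or\ pork)}} \approx 29,
\]

that is, close to thirty times as much land per calorie of beef as per calorie of chicken or pork, before even accounting for the lower land productivity of the nineteenth century.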

So Pomeranz is right about the ghost acres but wrong about the timing. Ghost acres did not transform British diets until the last quarter of the nineteenth century. As I suggested in the great British meat trade, the transformation of British living standards required not only the opening of the American interior but also one of the definite technical solutions that make up the secondary (that is, the real) industrial revolution: in this case, solving the problem of transoceanic mechanical refrigeration. Chicago could not monopolize the British beef trade in the 1880s, and Argentina could not replace the US as a supplier in the 1900s, without the chilling solution. So I am not saying that rail was sufficient. What I am saying is that rail was necessary. Moreover, the British beef trade was ultimately based on the harvesting of great pastures in the interior of the New World. This required rail not only in the Anglo newlands but also in Argentina.

The opening of the interiors also required great migrations from the two Anglo oldlands. It also required the expulsion of native populations with great violence. In the American West, not only was there great military resistance by the horse cultures of the Great Plains Indians; during the mid-nineteenth century, the Sioux acted as a great power equal to the United States in Great Plains diplomacy and warfare. As Richard White notes,

In a sense, the Fort Laramie Treaty marked the height of Sioux political power. … With the Sioux and their allies so thoroughly dominating the conference, the treaty itself amounted to both a recognition of Sioux power and an attempt to curb it. But when American negotiators tried to restrict the Sioux to an area north of the Platte, Black Hawk, an Oglala, protested that they held the lands to the south by the same right the Americans held their lands, the right of conquest: “These lands once belonged to the Kiowas and the Crows, but we whipped those nations out of them, and in this we did what the white men do when they want the lands of the Indians.”

The warfare between the northern plains tribes and the United States that followed the Fort Laramie Treaty of 1851 was not the armed resistance of a people driven to the wall by American expansion. In reality these wars arose from the clash of two expanding powers–the United States, and the Sioux and their allies. If, from a distance, it appears that the vast preponderance of strength rested with the whites, it should be remembered that the ability of the United States to bring this power to bear was limited. The series of defeats the Sioux inflicted on American troops during these years reveals how real the power of the Tetons was.

Sioux power, like that of the other Great Plains Indians, was based on the bountiful but precarious foundations of the horse trade and bison herds in the middle decades of the nineteenth century. The last of the bison herds were wiped out by the locust-like swarm of white hunters looking for hides in 1871-1875. But the decline of Sioux power was slow; they still managed to wage pitched battles against the US army into the last decade of the nineteenth century. So the expulsion of native populations was very far from an automatic process.

But even after native resistance was overcome, settlers had to clear the land. And so on … the point being that a whole lot more was ultimately involved in the transformation of British living standards, much of which was not in place until the last quarter of the nineteenth century. Indeed, it only came together by the turn of the century. That’s why the hockey-stick is a story of the early twentieth century.

Markets

The British Refrigerated Meat Trade, 1880-1930

There were 8.2m city dwellers in Britain in 1850, dwarfing the 2.6m in the United States, and the 1.6m in Canada, Australia, New Zealand, Argentina, Ireland and Denmark combined. At the very peak of British self-confidence, when everything was going for Britain, the London carnivore was deeply unhappy. He had heard too much already about British innovation, about the so-called industrial revolution going on up north, and about the promised bounty of ghost acres. He just didn’t see it. What he really wanted was prime beef and the choicest lamb. No more animals could be fattened on British soil, even on imported grain. European lands were running out of surplus to ship to Britain on account of the growth of their own appetite. Denmark and Ireland were still reliable but both were as close to carrying capacity as the home counties. So … the ghost acres. The Londoner’s problem at mid-century was that livestock shipped 3000 miles from New York suffered significant erosion of quality and weight loss. Put bluntly, it was shit. It did even worse coming 16000 miles from the antipodes. Even the choicest cuts from imported livestock always sold at a significant negative premium against British prime. In any case, the settlement of the Anglo newlands had only just gotten underway.

Over the next half century, the human and slaughter-animal (cattle, sheep and pigs) populations of Belich’s Anglo newlands (the American West, Canada, Australia, and New Zealand) would triple, cleared cropland there would quadruple, and pasture would expand by a factor of seven. But most of the meat bounty was destined not for the plate of our insatiable Londoner; urban populations in the Anglo newlands also expanded by a factor of seven (pointedly more in Oceania than Canada). New York in particular developed a voracious appetite for Mid-Western meat, which would soon leave little over for the mother trade. Deliverance for our hungry Londoner would come in the form of chilled prime beef from Argentina. But I am getting ahead of the story.

Vitals of the British Meat Trade.
(Columns: Britain | US | CAN | AID)
Urban population (million)
1850   8.2    2.6    0.4    1.1
1900   17.4   17.5   2.6    3.4
Population (million)
1850   27.2   23.6   3.2    9.5
1900   41.2   76.4   10.0   11.7
Pasture (thousands of square km)
1850   10.8   26.7   0.7    12.8
1900   14.2   138.3  53.2   33.9
Cattle (million)
1850   2.9    19.0   3.2    22.9
1900   7.5    59.7   13.7   30.0
Pigs (million)
1850   2.1    20.6   0.9    1.2
1900   2.9    51.1   3.4    3.4
Sheep (million)
1850   21.6   6.8    0.5    3.7
1900   21.5   7.6    20.5   4.1
Cropland (thousands of square km)
1850   5.1    23.8   6.3    3.8
1900   7.0    75.6   22.9   6.4
Source: Clio Infra. CAN=Canada, Australia and New Zealand; AID=Argentina, Ireland and Denmark.

Before the meat could be shipped, the slaughter animal had to be fattened. Above all, this required the opening of Belich’s Anglo newlands to dense settlement. That in turn required tens of millions of migrants from the Anglo oldlands. But clearing the land was not enough. Until midcentury, the American interior could not be densely settled on transport networks confined to water. Cincinnati’s water-borne pork hegemony was thus precarious.

Packing MidWest 1840s

It was rail that opened up the American interior to dense settlement. It was rail that created Chicago. It was rail that solved the concrete problem of feeding the Chicago-New York-London pipeline. This central feeder belt of the British meat trade in the 1880s was dependent on rapid transport further inland. The American railway system was financed by London bondholders. British bond finance was also critical in the Dominions proper and, indeed, in Argentina. Some £20m of British capital was invested directly in the Argentine meat-packing companies.

But population history and the rail network weren’t enough. Even if the capacity to produce that much meat was assured, the technical problem of mechanical refrigeration on transoceanic ships still had to be mastered. Straight-up freezing worked for mutton and lamb. So frozen New Zealand lamb was accepted as prime by our London carnivore when it arrived in the 1880s. After the turn of the century, New Zealand shipped more than one hundred thousand metric tons of frozen lamb every year to Great Britain. Argentinian and Australian lamb provided an additional one hundred thousand. New Zealand would ship an extraordinary one hundred and fifty thousand tons in 1922. The years after the world war were marked by the violence of British bloodletting in a bid to return to the Gold Standard. But at least our hungry Londoner could score some prime New Zealand lamb. Even the lamb from Argentina and Australia could be top-notch.

British_lamb_trade.png

The unit on the Y axis is metric tons.

But beef did not take well to freezing. For as Perron (1971) explained:

Frozen meat is kept at a temperature of between 14ºF and 18ºF, but between the temperatures of 31ºF and 25ºF large ice crystals form between the muscle fibres of the meat and this process ruptures some of the small vessels of the flesh. When the meat is thawed this gives it a sweaty, discoloured appearance and it loses a certain amount of moisture, making it less juicy when cooked. This effect is more noticeable in large carcases like beef; having a greater bulk than mutton and lamb they take longer to pass through the critical range of temperature where the large ice crystals are formed and the damage done. But meat can also be chilled, that is, kept at a temperature of 30ºF which is just above its freezing point and this means that the ice crystals do not form in the carcase.

The chilling solution (obviously) was articulated by the American meat-packing giants. The big four American meat-packers dominated the British chilled beef trade in the 1880s. Meanwhile, the British lamb trade was a definite Kiwi monopoly. The great sucking sound of the London market (London relied disproportionately among British cities on imported meat) had reoriented Belich’s Anglo newlands. The first big suppliers were Belich’s American northwest and New Zealand.

Northwest

There was a major epidemiological panic arising from the discovery of diseased frozen shipments at the turn of the century (that’s the crash in the graph of beef imports in 1901). This would prove to be a hiccup in the real story: the rise of chilled beef from Argentina that would more than replace the Americans in the British beef trade. By the end of the decade, Argentina’s position in the British beef trade rivaled New Zealand’s in the lamb trade. Of course, British lamb and mutton predominated in the national market. But by 1914, imported meat accounted for 40 percent of British consumption. More than any other great power in history, Great Britain came to rely on ghost acres for its meat.

MeatTrade.png

In the overall scheme of things, our hungry Londoner was finally satiated at the turn of the century when British imports of refrigerated beef, lamb and mutton stabilized in the ballpark of one billion pounds a year. In 1922, the British refrigerated meat trade as a whole peaked (at least locally) at more than a billion pounds (around half a million short tons).

British_meat_trade.png

In the 1890s, Argentina emerged as a major player in the British meat trade. Argentina was the solution to the problem posed by New York’s growing appetite for Chicago beef (increasingly joined by other American cities). In the 1900s, Argentina displaced the United States in the chilled beef trade and emerged as a near-peer of New Zealand in the lamb trade. Already by 1903, Argentina was supplying more refrigerated meat to Britain than any other nation.

British_meat_trade.png

Selected exporters only.

Britain’s refrigerated meat trade could survive the rise of the American carnivore and the reorientation of the American West to point to New York. But this was far from an automatic process. In the six principal suppliers besides the United States, during the second half of the nineteenth century, some 30 million additional people would help clear 270 thousand additional square kilometers of land for pasture and 89 thousand square kilometers more of cropland on three continents, allowing them to raise 30 million more sheep and 45 million more head of cattle a year. A vast portion of world ecology was thus transformed to suit the taste of the British carnivore. Indeed, New Zealanders replaced their sheep with breeds more attractive to the London palate. Argentinians did the same with cattle. As did Australia and Canada; even old Ireland and Denmark had to keep adapting to Metropolitan tastes. Only the American West served the other pole of the Angloworld. Everyone else served London.

The British refrigerated meat trade began in 1875. By the turn of the century, the supply of meat to Britain had expanded and diversified well beyond the American North-West. It came into its own and lasted well into the twentieth century. It was only in the 1950s that the British share of New Zealand lamb exports fell below 50 percent. The timing of the core phase of expansion of the refrigerated meat trade, 1880-1910, suggests that we must file this under the secondary industrial revolution. Britain’s ghost acres were finally brought to bear in the last quarter of the nineteenth century. The increased availability of prime meat may be directly responsible for the vanishing of the settler premium in Anglo-Saxon stature in the early twentieth century.

Stature (cm)
Britain US Canada Australia
1810 169.7 171.5
1820 169.1 172.2 171.5
1830 166.7 173.5 171.5
1840 166.5 172.2 170.4
1850 165.6 171.1 172.5 170.0
1860 166.6 170.6 172.0 170.6
1870 167.2 171.1 171.2 170.1
1880 168.0 169.5 171.2 171.1
1890 167.4 169.1 170.7 171.3
1900 169.4 170.0 169.9 172.3
1910 170.9 172.1 171.5 172.7
1920 171.0 173.1 173.0 172.8
1930 173.9 173.4 172.7
Source: Clio Infra.

This interpretation would be consistent with the evidence from life expectancy. British life expectancy was falling as late as 1870. And it is only after 1900 that it really picks up. No doubt indoor plumbing, penicillin, urban sanitation, and personal hygiene were all implicated in the transformation of everyday living standards recorded in stature and mortality data. But the growth in per capita meat consumption from 91 lbs in 1880 to 131 lbs in 1909-1913 definitely played its part. What this meant in practice was that compared to 1870 our London carnivore was eating meat twice as often on the eve of the world struggle. And not only was the quantity greater, the quality of chilled beef from Argentina and frozen lamb from New Zealand was finally up to the demanding standards of our discerning Londoner.

life_expectancy.png

 

 
