
Who Gave Congress the Authority to Forbid Journals from Publishing the Work of Iranian Scholars?

The United States Congress has imposed sanctions on Iran that include forbidding academic journals from publishing the work of Iranian scholars. The international scientific publishing group Taylor and Francis has rejected papers, citing American sanctions as the reason. Indeed, after receiving an acceptance letter for his paper, an Iranian scientist received a second communication from Taylor and Francis reversing the decision:

Thank you for your submission to Dynamical Systems.

As a result of our compliance with laws and regulations applied by the UK, US, European Union, and United Nations jurisdictions with respect to countries subject to trade restrictions, it is not possible for us to publish any manuscript authored by researchers based in a country subject to sanction (in this case Iran) in certain cases where restrictions are applied. Following internal sanctions process checks the above referenced manuscript has been identified as falling into this category. Therefore, due to mandatory compliance and regulation instructions, we regret that we are unable to proceed with the processing of your paper.

We sincerely apologise for the inconvenience caused.

Should you wish to submit your work to another publication you are free to do so and we wish you every success.

Yours faithfully,

Justin

Sent on behalf of Dynamical Systems

Justin Robinson

Managing Editor | Taylor & Francis | Routledge Journals

Mathematics | Statistics | History of Science | Science, Technology & Society

Obviously this is shitty from the perspective of the scholar. But it is also very concerning if you believe, as I do, that the sanctions are inconsistent with the autonomy of our scholarly institutions. An open society does not reproduce itself automatically. Its reproduction is premised on our willingness to defend the elementary norms and practices of free enquiry.

Peer review at scholarly journals is expected to be double blind: the identities of the author and the referees are not known to each other. The purpose of this norm/institution/behavioral pattern is to make sure that the merits of the argument trump every other consideration. By discriminating on the basis of the birth lottery, the sanctions throw a wrench into this system in a manner that undermines the legitimacy and autonomy of scholarly institutions. This is a serious erosion of the norms that underwrite free enquiry.

In short, Congress fucked up. Big time.

So what can we do about it?

Strategically speaking, I think that if a major scientific institution were to take up the cause, it could challenge the legitimacy, and indeed the legality, of such sanctions. Moreover, it has to be an American institution, since the whole business is driven by Congressional foreign policy. I think the most appropriate such institution is the American Association for the Advancement of Science (AAAS). I have therefore launched an online campaign to petition AAAS to challenge such sanctions on the grounds mentioned above.

Please consider signing the petition and sharing it with your network. This is kinda important. Here’s the link: https://www.change.org/p/american-association-for-the-advancement-of-science-get-aaas-to-challenge-us-sanctions-that-violate-basic-principles-of-free-enquiry. You should also ask your Congresswoman about it.

 

 


The Recent Mutation Hypothesis of Modern Behavior is Wrong

Klein (1989, 1994, 1999, 2009), Stringer and Gamble (1993), Mellars (1996), Berwick and Chomsky (2016), Tattersall (2017), and others have speculated that a recent mutation related to neural circuitry was responsible for the emergence of modern behavior. (Although Gamble no longer subscribes to the theory.) The mutation is posited to be recent in the sense that it is assumed to have been under selection in African H. sapiens populations after our last common ancestor with other hominin, in particular Neanderthals and Denisovans. The phenotypic trait under selection is assumed to relate to the wiring of the brain since it cannot be identified morphologically. In what follows we shall argue that the recent mutation hypothesis cannot be reconciled with the available body of evidence.

Chomsky’s fundamental insight was that the language capacity was confined to our species and uniform across it. To wit, if you raise a Japanese child in Bangladesh, she will speak Bengali like a native. This implied that under all the massive synchronic (cross-sectional) and diachronic (temporal) variation in languages lies a simple structure for which Chomsky coined the term Universal Grammar. Building on that foundation and a deeper appreciation of the role of stochastic factors in evolution, Berwick and Chomsky (2016) argue that the posited mutation endowed the possessor with an innate capacity for generative grammar (2016, p. 87):

At some time in the very recent past, apparently some time before 80,000 years ago if we can judge from associated symbolic proxies, individuals in a small group of hominids in East Africa underwent a minor biological change that provided [the basic structure of generative grammar] … The innovation had obvious advantages and took over the small group. … In the course of these events, the human capacity took shape, yielding a good part of our “moral and intellectual nature,” in Wallace’s phrase. The outcomes appear to be highly diverse, but they have an essential unity, reflecting the fact that humans are in fundamental respects identical, just as the hypothetical extraterrestrial scientist we conjured up earlier might conclude that there is only one language with minor dialectal variations, primarily—perhaps entirely—in mode of externalization.

The precise mechanism posited by Berwick and Chomsky (2016) sounds plausible. But, as we shall show, it cannot be sustained in light of the entire body of evidence. Even if a mutation was responsible for the human capacity for language, we will argue, it cannot explain the emergence of modern behavior. The weight of the evidence suggests that, if such a mutation indeed occurred, it did so much earlier and was demonstrably shared by the Neanderthals and presumably other big-brained hominin during the Upper Pleistocene, 200-30 Ka.

There are three kinds of evidence of relevance to the question at hand: fossil remains, archaeology, and DNA. Briefly, fossil remains in the form of bone fragments, teeth, and postcranial skeletons allow us to reconstruct the morphology of Pleistocene hominins. Hominin taxa (human “species”) are defined with reference to standard taxonomic rules on the basis of morphological criteria (more precisely, autapomorphies). The precise details need not concern us beyond the fact that hominin taxa, H. sapiens, H. neanderthalensis, and so on, are defined by morphological traits, not DNA.

The archaeological evidence consists of artifact assemblages that capture the material culture of Pleistocene hominin populations. Modern behavior, in as much as it can be operationalized, has to be pitted against this evidence. The clearest evidence of modern behavior in Pleistocene assemblages is symbolic storage—personal ornaments, decorated tools, engraved bones and stones, grave goods, formal use of space, and perhaps style in lithics. We’ll return to that evidence presently. But note that this is a micro-local definition. Modern behavior can also be recognized in the overall dynamism suggested by rapid cultural turnover in artifact assemblages. Indeed, the very idea of modern behavior comes from the contrast between later dynamism and the extraordinary monotony of Oldowan and Acheulean lithic assemblages during the first two million years after the rise of the genus Homo. Oldowan stone tools from c. 2.5 Ma show very little variation over a million years even as an adaptive radiation sends H. erectus to expand into Asia (‘Out of Africa I’). Virtually identical Acheulean handaxes were manufactured for another million years, 1.5-0.5 Ma, throughout the western Old World. (It is speculated that bamboo tools replaced handaxes east of the Movius Line.)

Movius Line

All of this begins to change in the Upper Pleistocene, 200-30 Ka (we prefer the neutral climatic label ‘Pleistocene’ modified by the stratigraphic term ‘Upper’ rather than ‘Late’, which serves to clarify the archaeological criteria being used to define the period). Lithic assemblages suddenly begin to display turnover in style and regional variation. The complexity of artifactual assemblages starts to accelerate. With the Upper Paleolithic in western Eurasia at the end of the Upper Pleistocene, we get the “full package” of modern behavior described by Diamond as “the Great Leap Forward.” It is this (eurocentric) body of evidence that suggests a decisive structural break in behavioral patterns that the neural mutation hypothesis is mobilized to explain. If true, the hypothesis would indeed explain this broad diachronic pattern in Pleistocene assemblages in the western Old World. But as we shall see, the hypothesis runs into insurmountable difficulties when broader diachronic and synchronic patterns are examined with a finer brush.

Great hopes were once placed on DNA. It was assumed that the problem of identifying the molecular basis of many if not most phenotypic traits would be solved sooner or later. All such hopes have since been dashed. The problem has turned out to be much more complex than previously imagined. What DNA has turned out to be good for is identification (DNA as molecular fingerprinting), uncovering phylogenetic relationships (who is closer to whom ancestrally), and importantly for us—with DNA recovered from ancient fossils—population history.

It is extremely difficult to make kosher inferences about Pleistocene history from the DNA of contemporary populations because of what Lahr (2016) calls ‘the Holocene Filter’, 15-0 Ka. The human population exploded a thousandfold from a few million c. 15 Ka to a few billion today. Genetic evolution accelerated to 100 times the rate that had prevailed prior to the Neolithic. And massive ‘population pulses’ (such as the massive migrations that followed the Neolithic c. 9 Ka and the Secondary Products Revolution c. 5 Ka) transformed the population structure of macroregions, replacing ancient paleo-demes of Pleistocene hunter-gatherers with mixed populations, with later arrivals as the predominant element (eg, the Neolithic and Yamnaya pulses in western Eurasia and the Bantu expansion in southern Africa). For all these reasons, modern populations cannot be identified with ancient populations; not locally, not regionally, and not globally. Pleistocene hominin populations, including anatomically modern humans, were very different from contemporary populations in both phenotypic traits and DNA. This means in particular that the frequency of modern-archaic admixture cannot be reliably ascertained from the DNA of living populations. We must rely instead on ancient DNA.

We cannot be sure if other taxa in the genus Homo had the capacity for generative grammar. But if we are to run a horse race in behavioral traits between “moderns” and “archaics”, we must have a level playing field. The period of extraordinary monotony in lithic assemblages is one of exponential (“hockey-stick”) encephalization. Aiello and Wheeler (1995) have shown that the brain and the gut are the heaviest consumers of metabolic energy; so that for brains to grow, guts had to shrink. This required a shift to higher quality foods; in particular, carnivory (first through scavenging and later through hunting; although true hunting did not emerge until much later). Moreover, it was accompanied by bigger bodies with greater metabolic demand. Bigger brains and bodies in turn required more complex (and need we say, successful) hunting and foraging behavior.

The gut-brain cycle

Source: Aiello and Wheeler (1995)

Dunbar’s (1998) social brain hypothesis tied the neocortex ratio of mammals to the size of their social network, suggesting that it was the computational demands of social life that selected for encephalization. At any rate, the powerful feedback loop generated runaway encephalization, culminating in the Upper Pleistocene with big-brained (~1400 cc), big-bodied hominin like ourselves.

Hominin cranial capacity

Source: Spocter (2007). Note the log scale of the time axis; without it we have the familiar pattern of the hockey-stick.

So the genus Homo is too big for the purpose. It includes taxa with brains not much larger than those of chimps as well as taxa with brains bigger than ours. Clearly, we must compare H. sapiens with other encephalized hominin. We therefore introduce an Encephalization Filter that rules out all but the late archaic hominin. See the next figure.

The Encephalization Filter

What all this means is that in order to compare apples to apples, we must restrict the test universe to Upper Pleistocene Homo. Otherwise we might as well compare modern day New Yorkers to Neanderthals and conclude that the latter are nonmodern in their behavior (and, by implication, subhuman tout court). The acid test of the recent mutation hypothesis is whether we exhibited more modern behavior than the others in our period of overlap. In sum, the relevant test universe consists of clusters of paleo-demes (prehistoric geographically-situated populations) from multiple taxa in the genus Homo during the Upper Pleistocene, 200-30 Ka.

Ultimately, whether we call these taxa species or subspecies is irrelevant—they are defined by their autapomorphies. Still, Wolpoff and Caspari wonder if the Upper Pleistocene taxa now called “species” don’t satisfy the textbook definition of subspecies. Indeed Mayr has shown that the average splitting time for speciation is in the millions of years. According to Henneberg (1995), the doyen of hominin taxonomy, ‘until thoroughly falsified, the Single Hominid Lineage Hypothesis provides less complicated description of hominid evolution … It is also compatible with the uniformitarian postulate — the recent human evolution occurred, and occurs, within one lineage consisting of widely dispersed but interacting populations.’ These frames are quite consistent with each other.

Within the single human lineage that is called the genus Homo there has always been a bushy tree with population structure. There is no reason to believe that paleo-demes were completely genetically isolated from each other. Demes exchange genes at the margins of ecoregions, which is how gene flow is maintained within a geographically dispersed species. The point is that under Pleistocene levels of population density and the attendant isolation-by-distance, there obtained intercontinental population structure (ie continental races or discrete subspecific variants) of the sort that has no modern counterpart. Simply put, continental populations during the Upper Pleistocene were considerably further away from each other in neutral phenotypic and genotypic distance than they are today or were in the ethnographic present. Wolpoff and Caspari lost their big battle over the model for the ethnographic present to the ‘(Mostly) Out of Africa’ model. But their multiregional model works nicely for Upper Pleistocene population structure. As Zilhão notes in the European context, ‘it is highly unlikely that the Neandertal-sapiens split involved differentiation at the biospecies level.’

Hominin taxa now called “species” are better conceptualized as geographic clusters of paleo-demes situated in macroregions under relatively severe isolation and therefore derivation; maybe not enough derivation for speciation but perhaps enough for raciation (400-800 Ka, Cf. the San splitting time from the rest of the extant human race ~280-160 Ka).

Reich

Source: Reich (2018)

During the Upper Pleistocene, Neanderthals were endemic in Europe and the steppe as far as the Altai, extending their occupation for long periods into southwest Asia. Denisovans are thought to have been endemic to the steppe and Sunda. A hominin population descended from H. erectus was endemic in Asia and Sunda. Further out, H. floresiensis (“the Hobbit”) survived in insular isolation on Flores. Reich reports that there was at least one other “ghost population” in Eurasia whose fossils have yet to be identified but whose genetic signal is evident in ancient DNA from admixture. H. sapiens was endemic in Africa, where it shared the continent with yet other ghost populations. (On population structure in prehistoric Africa see Scerri et al. 2018.)

So what is the evidence for modern behavior for these taxa during the Upper Pleistocene? So far we only have reliable archaeological evidence for Neanderthals and anatomically modern humans.

Gamble (2013) characterizes the Neanderthal dispersal in western Eurasia as ‘an adaptive radiation based on projectile technology and enhanced carnivory with prime-age prey the target’ c. 300-200 Ka. Not only did Neanderthals invent hafting; Upper Pleistocene assemblages associated with Neanderthals also exhibit disproportionately higher frequencies of prime-age prey compared to those associated with anatomically modern humans, suggesting that they were the more competent hunters. In the same assemblages we find the first evidence of managed fire. In a separate analysis, Zilhão (2007) notes that

Chemical analysis of two fragments of birch bark pitch used for the hafting of stone knives and directly dated to >44 ka 14C BP showed that the pitch had been produced through a several-hour-long smoldering process requiring a strict manufacture protocol, i.e., under exclusion of oxygen and at tightly controlled temperatures (between 340 and 400ºC) (Koller et al., 2001). The Königsaue pitch is the first artificial raw material in the history of humankind, and this unique example of Pleistocene high-tech clearly could not have been developed, transmitted, and maintained in the absence of abstract thinking and language as we know them; it certainly requires the enhanced working memory whose acquisition, according to Coolidge and Wynn (2005), is the hallmark of modern cognition.

Gowlett (2010) suggests that Neanderthal hearths were a game changer. Not only did fire act as an external stomach, partially breaking down meat before ingestion and thus reducing gut loads; it also extended the day for social interactions. More compelling evidence on Neanderthal social life comes from ‘super-sites’, recognized from assemblages with massive deposits of debris accumulated over very long periods of time. ‘They marked,’ Gamble (2013) notes,

… a shift in hominin imagination from conceiving the world in a horizontal manner to stacking it vertically – a representation of accumulated time. … Once established, the super-site niche set the stage for thinking differently about places. They now formed part of the hominins’ distributed cognition.

The super-sites suggest that Neanderthals had indeed ‘discovered society’; surely an important consideration in the Neanderthal question—whether they were biologically capable of modern behavior. If they had complex social lives, it becomes hard to classify them as subhuman.

However, the litmus test for artifact assemblages is evidence of symbolic storage. Shea (2011) seconds João Zilhão and Francesco d’Errico in arguing that ‘finds of mineral pigments, perforated beads, burials and artifact-style variation associated with Neanderthals challenge the hypothesis that symbol use, or anything else for that matter, was responsible for a quality of behavioral modernity unique to Homo sapiens.’ Caspari et al. (2017) note that,

Neanderthals were capable of complex behaviors reflecting symbolic thought, including the use of pigments, jewelry, and feathers and raptor claws as ornaments. All of this is probably evidence of symbolic social signaling, complementing the evidence of complexity of thought demonstrated by the multi-stage production of Levallois tools. Neanderthals were human.

But problems for the recent mutation hypothesis don’t end with behaviorally modern Neanderthals. Shea (2011) explains how any joint examination of the diachronic and synchronic pattern of the actual behavior of anatomically modern humans as they dispersed from Africa undermines the recency hypothesis:

Evidence is clear that early [anatomically modern] humans dispersed out of Africa to southern Asia before 40,000 years ago. Similar modern-looking human fossils found in the Skhul and Qafzeh caves in Israel date to 80,000 to 120,000 years ago. Homo sapiens fossils dating to 100,000 years ago have been recovered from Zhiren Cave in China. In Australia, evidence for a human presence dates to at least 42,000 years ago. Nothing like a human revolution precedes Homo sapiens’ first appearances in any of these regions. And all these Homo sapiens fossils were found with either Lower or Middle Paleolithic stone tool industries.

Shea (2011) appreciates the evidence from Howiesons Poort c. 90-70 Ka and other sites of early ‘efflorescences’. But that deepens the paradox. For

… [if] behavioral modernity were both a derived condition and a landmark development in the course of human history, one would hardly expect it to disappear for prolonged periods in our species’ evolutionary history.

Habgood (2007) examined the evidence for modern behavior in Pleistocene Sahul and concluded:

The proposal by McBrearty and Brooks (2000) and Mellars (2006) that the complete package of modern human behaviour was exported from Africa to other regions of the Old World and ultimately into Greater Australia between 40–60ka BP is not supported by the late Pleistocene archaeological record from Sahul….

O’Connell and Allen (2007) put things more bluntly. Sahul’s archaeological record, in their opinion, ‘does not appear to be the product of modern human behaviour as such products are conventionally defined.’ Equally sensitive observations may be made for southern Africa, southern Eurasia, eastern Eurasia, or Sunda, where modernity is not evident until 30-10 Ka, ie tens of thousands of years after the arrival of “moderns” (if not later).

The greatest problem with the recent mutation hypothesis might very well be that it necessarily implies a modern-nonmodern dichotomy, since you either have the trait for generative grammar or you don’t. In that sense it is very much like Krugman’s It theory of global polarization. This dichotomy pronounces not only other archaic hominin but also many anatomically modern humans to be nonmodern and hence not fully human well into the ethnographic present. Indeed, it has proven exceedingly difficult to quarantine Neanderthals from anatomically modern humans, for there is no hyperplane in the space of behavioral traits that includes all hunter-gatherer societies of the ethnographic present while excluding Neanderthals. As Zilhão (2011) notes,

Many have attempted to define a specifically “modern human behavior” as opposed to a specifically “Neandertal behavior,” and all have met with a similar result: No such definition exists that does not end up defining some modern humans as behaviorally Neandertal and some Neandertal groups as behaviorally modern. [Emphasis added.]

In a comment on Henshilwood and Marean (2003), Zilhão captures the essence of the conundrum induced by the implied dichotomy:

… the real problem is that (1) archeologically visible behavioral criteria designed to include under the umbrella of “modernity” all human societies of the historical and ethnographic present are also shared by some societies of anatomically nonmodern people and (2) archeologically visible behavioral criteria designed to exclude from “modernity” all known societies of anatomically nonmodern people also exclude some societies of the historical and ethnographic present.

We can think of this as “merely” a political issue and tell ourselves that we should “just do the science; let the chips fall where they may.” Or we may ask ourselves if such essentialist schema are doing as much explanatory work as they are supposed to.

At this point we can anticipate the following rebuttal:

Why is it such a big deal if the efflorescences get extinguished or paleo-demes fail to exhibit modern behavior for tens of thousands of years after the posited mutation? Since the capacity for generative grammar is uniform across the species, no denial of modernity is implied since any evidence of modern behavior by one paleo-deme in the species is enough to confirm the potentiality for modern behavior for all demes in the species. These may appear in a staggered manner. But that can be easily explained by adding that the capacity for generative language may only be a necessary condition for modern behavior and that other things would also have to fall into place before any ‘full flowering’ obtains.

This is a fair response. And we speculate that something like this was indeed the case. But notice that in light of the evidence relayed above, the “one-strike-in” rule for taxa immediately implies that Neanderthals were behaviorally modern as well. This is indeed what we have been arguing all along. And if that is the case then the mutation had to have occurred way, way earlier—at least hundreds of thousands of years before the 80 Ka posited by Berwick and Chomsky (2016). This interpretation is in line with the conclusions offered in a recent volume edited by Brantingham et al. (2004) published under the title The Early Upper Paleolithic beyond Western Europe:

If there is a common evolutionary cause, phylogentic [sic] or otherwise, it is rooted much deeper in evolutionary time and is largely independent of the events tracked in the Middle-Upper Paleolithic transition. [Emphasis added.]

Zilhão (2007) concurs with the last assessment, offering 400 Ka as a plausible date of arrival of the biological wetware for modern behavior. But if that is true, then it raises the daunting question of why said mutation did not generate an adaptive radiation. Perhaps it corresponds to the adaptive radiation associated with the dispersal of H. heidelbergensis 600-400 Ka (‘Out of Africa II’) that Gamble (2013) ties to the mastery of controlled fire. But that possibility is unlikely to satisfy those who seek to explain ‘the revolution that wasn’t’ (McBrearty and Brooks 2000; the reference is to “the Human Revolution”), for a genetic mutation c. 400 Ka cannot then explain the adaptive radiation evident c. 50 Ka.

Bar-Yosef (2002) suggests that a better analogy for modern behavior would be the Neolithic Revolution, with its staggered appearance. Indeed, what is manifest is that global polarization made its first appearance not in the Neolithic but during the Upper Pleistocene. The diachronic and synchronic patterns revealed by archaeology are consistent with the framing of modern behavior during the Pleistocene as an emergent form of sociocultural complexity that makes a staggered appearance and therefore necessarily generates global polarization. It was the First Great Divergence, whose roots may perhaps be sought in biogeography.

ET

Source: Gamble (2013)

The weight of biogeography is well captured by Bailey’s (1960) Effective Temperature (ET), which measures both the basic thermal parameter of the ecoregion and the length of the growing season. Binford (2001) shows how ET structures the lifeworld of hunter-gatherers, opening some doors while closing others, rigging the dice in favor of some and against others. For as Gamble (2013) notes mercilessly, ‘the low latitudes were good areas to disperse away from …’. Proctor (2003) acidly calls this discourse “Out of Africa! Thank God!” But is there any doubt that escaping the tyranny of the isotherms was an unambiguous advantage, as it has been ever since? Our position is that the antiracist suspicion of biogeography is completely mistaken. Biogeography is not racist but rather an alternative to racialism as an explanatory schema for global polarization.

What of the fate of the Neanderthals? The weight of the evidence suggests that they were not wiped out but rather absorbed into the much larger incoming populations of anatomically modern humans. They may have been as dynamic as H. sapiens. But there was a decisive difference. Our race had much longer life histories. This suggests that the adaptive radiation witnessed c. 50 Ka may be related to runaway K-selection in anatomically modern humans. With grandparents around to pass on know-how and know-where, the fidelity of intergenerational transmission of acquired innovations must have been higher. And larger populations could be expected to generate more innovations. It may be demographic and life history variables that explain the diachronic and synchronic patterns of our deep history.


Source: Caspari and Wolpoff.

 

 

 

 


Population History, Climatic Adaptation, and Cranial Morphology

Physical anthropology in general and craniology in particular have quite a sordid history. The size of the skull was of great interest to scientific racialists, who seized on minor differences in averages as evidence of differential capacity of the “races” for civilization. As we have shown before and shall see again in what follows, race gives us a very poor handle on cranial morphology, and human morphology more generally. This does not mean that there is little systematic variation; that all variation is between individuals within populations. To the contrary, human morphology is geographically patterned. Situated local populations or demes differ systematically and significantly from each other, generating smooth geographic clines. In particular, as the next figure shows, our species obeys Bergmann’s rule—bigger variants are found in colder climes.

Ruff(1994)

Source: Ruff (1994)

Of course, skull size and body size scale together. We can read this off the US military anthropometric database. So we should expect the bigger people of colder climes to have bigger skulls as well. Indeed, if Ruff’s thermoregulatory theory is right, particular features of skull morphology should be adapted to the macroclimate even more than the postcranial skeleton (the bit below the head).  

It is easy enough to recover the monotonic relationship between climatic variables and cranial measurements. But there is a very serious problem. Suppose that we can rule out nutritional and other environmental influence, say because we know the parameter is very slow-moving. That does not mean that all systematic variation in that parameter is then due to the bioclimate. For it may instead be due to random drift. Indeed, we know that founder effects, isolation-by-distance, and genetic drift can generate geographic clines that confound the bioclimatic signal. There is mounting evidence that human morphology, just like the human genome and linguistic diversity, contains a strong population history signal. 

Population history signals in DNA, RBC polymorphisms, and craniometrics are congruent.

Segments of DNA under selection do not preserve the signal from population history well; the bit not under selection—”junk DNA”—does. Similarly, information on population history can be recovered from traits of cranial morphology that are neutral, ie not under selection. Scholars have used genetic distance to control for population history in order to recover information on bioclimatic adaptation from morphology. In what follows, I will show how we can use cranial morphology itself to the same effect.

The Howells Craniometric Dataset contains 82 linear measurements on 2,524 skulls from 30 populations located on five continents. We will be working exclusively with that data. Let us begin with skull size. The five continental “races” that Wade thinks are real—one each for Europe, Africa, Asia, Australia, and the Americas—explain 6 percent of the variation in skull size. Moreover, not even one of the mean differences between said “races” is statistically significant. See Table 1.

Table 1. Mean cranial capacity by continental “race”.
Continent    Male mean    t-Stat    Std
Europe           1,353     0.275     73
Pacific          1,348     0.220     97
Africa           1,281    -0.458     95
Asia             1,334     0.074    101
America          1,322    -0.045     91
Source: Howells Craniometric Dataset. Estimates in Italics are insignificantly different from the global mean at the 5 percent level. 
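
For concreteness, here is a minimal sketch of the kind of computation behind Table 1, assuming a pandas dataframe df with one row per male skull and hypothetical column names capacity and continent (the Howells data does not ship in this form; the one-sample t-test against the global mean and the dummy-variable R-squared are my reading of the table, not a transcript of the original analysis).

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Assumed layout: one row per male skull with columns
#   'capacity'  - cranial capacity in cc
#   'continent' - Europe, Pacific, Africa, Asia, or America

def continental_summary(df: pd.DataFrame) -> pd.DataFrame:
    """Mean capacity by continent with a t-test against the global mean."""
    global_mean = df['capacity'].mean()
    rows = []
    for cont, grp in df.groupby('continent'):
        t, p = stats.ttest_1samp(grp['capacity'], popmean=global_mean)
        rows.append({'continent': cont, 'mean': grp['capacity'].mean(),
                     't_stat': t, 'std': grp['capacity'].std(), 'p': p})
    return pd.DataFrame(rows)

def share_explained_by_race(df: pd.DataFrame) -> float:
    """R-squared of cranial capacity on continental dummies (~0.06 per the text)."""
    return smf.ols('capacity ~ C(continent)', data=df).fit().rsquared
```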

While demes are lumpy, they can’t be called races; there are tens of thousands of them. And though races are useless fictions, demes or situated local populations are very useful fictions. Put another way, demes are to anthropology what particles are to physicists and representative rational agents are to economists. Recall the tyranny of distance over land before the late-nineteenth century. Under such conditions as prevailed well into the ethnographic present, populations were situated locally and isolated from nearby and far away demes; the latter more than the former and smoothly so. Dummies for the demes alone explain 44 percent of the variation. What needs to be explained is this systematic component. 

In order to understand variation in skull size we first note that Binford’s Effective Temperature (ET, estimated as a linear function of latitude) is a strong correlate of skull size (r=-0.547 for men, r=-0.472 for women) and orbital size, ie the size of the eye socket (r=-0.464 for men, r=-0.522 for women). See the top panel of the next figure. Moreover, as you can see in the bottom-left graph, orbital size is a strong predictor of skull size (r=0.506), suggesting an intriguing pathway of selective pressure that ties cranial variation not to thermal parameters but to variation in light conditions. Still, the bottom-right graph shows that ET is correlated with skull size even after controlling for orbital size.

If we consider only the systematic component, orbital size alone explains half the variation in skull size. So we should try to understand variation in this mediating variable as well. 

A straightforward way to extract the population history signal is to isolate morphological parameters that are uncorrelated with climatic variables and use these to construct a neutral phenotypic distance measure. We identify 14 linear measurements in the dataset that are uncorrelated or very weakly correlated with ET. Assuming that these are neutral traits not under selection, we use them to compute phenotypic distance from the San. (We use Pearson’s correlation coefficient between standardized 14-vectors for the San and each of the 30 populations as our measure of phenotypic distance.) Basically we are using the fact that the San are known to have been the first to diverge from the rest of us, so that the degree of correlation in these neutral measures contains information on the genetic distance between the two populations. We also know that genetic distance is proportional to geographic distance from sub-Saharan Africa due to our specific population history. So if our measure is capturing population history, it should be correlated with geographic distance. The next figure shows that this is indeed the case.
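
A minimal sketch of the distance construction just described, assuming a dataframe df with a population column and fourteen placeholder columns for the presumptively neutral measurements; whether one standardizes population means or individual measurements, and whether the distance is taken as r or 1 − r, are assumptions on my part.

```python
import numpy as np
import pandas as pd

# Fourteen placeholder names for the measurements judged uncorrelated with ET.
NEUTRAL_TRAITS = [f'neutral_{i}' for i in range(1, 15)]

def distance_from_san(df: pd.DataFrame, san_label: str = 'San') -> pd.Series:
    """Correlation-based phenotypic distance of each population from the San.

    Standardize the population means of the neutral traits across the 30
    populations, correlate each population's 14-vector with the San vector,
    and take 1 - r as a distance-like measure (the San score 0 by construction).
    """
    pop_means = df.groupby('population')[NEUTRAL_TRAITS].mean()
    z = (pop_means - pop_means.mean()) / pop_means.std()
    san = z.loc[san_label]
    r = z.apply(lambda row: np.corrcoef(row, san)[0, 1], axis=1)
    return 1.0 - r
```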

If we are right about our reasoning, we are now in a position to decompose morphological variation into neutral, bioclimatic and non-systematic factors. We begin with our estimates of pairwise correlation between our neutral factor (phenotypic distance from San) and ET on the one hand and selected morphological variables on the other. 

Table 2. Spearman’s correlation coefficients.
                Skull size    Orbital size    Cranial Index    Nasal Index
Men    Neutral     -0.488          -0.575            0.215          0.257
       ET          -0.547          -0.464           -0.061          0.583
Women  Neutral     -0.437          -0.570            0.117          0.301
       ET          -0.472          -0.522           -0.025          0.543
Source: Howells Craniometric Dataset. Estimates in Bold are significant at the 5 percent level. 

We see that Cranial Index (head breadth/head length) is uncorrelated with both ET and Neutral, while orbital size and skull size are strongly correlated with both. Interestingly, the Nasal Index (nasal breadth/nasal length) is a strong correlate of ET but not our neutral factor, implying that nasal morphology contains a strong bioclimatic signal and a weak population history signal. These results are only suggestive, however. In order to nail down the bioclimatic signal we must control for population history, and vice versa.

Table 3. Spearman’s partial correlation coefficients.
                                      Skull size    Orbital size    Cranial Index    Nasal Index
Neutral controlling for ET    Men        -0.533          -0.609            0.221          0.260
                              Women      -0.488          -0.659            0.118          0.348
ET controlling for neutral    Men        -0.584          -0.513           -0.080          0.584
                              Women      -0.517          -0.625           -0.027          0.564
Source: Howells Craniometric Dataset. Estimates in Bold are significant at the 5 percent level.
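
The partial correlations in Table 3 can be computed with the standard rank-then-residualize recipe for Spearman partial correlation; the sketch below is one such construction, not necessarily the exact procedure used here, and the column names are placeholders.

```python
import numpy as np
from scipy import stats

def spearman_partial(x, y, z) -> float:
    """Spearman correlation of x and y controlling for z.

    Rank-transform all three series, residualize the ranks of x and y on the
    ranks of z with a simple linear fit, and correlate the residuals.
    """
    rx, ry, rz = (stats.rankdata(v) for v in (x, y, z))
    def residual(a, b):
        slope, intercept = np.polyfit(b, a, 1)
        return a - (slope * b + intercept)
    return stats.pearsonr(residual(rx, rz), residual(ry, rz))[0]

# e.g. skull size vs neutral distance among men, controlling for ET:
# spearman_partial(men['skull_size'], men['neutral'], men['ET'])
```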

We see that both population history and bioclimatic signals are present in skull size and orbital size; neither is present in the Cranial Index; and only the bioclimatic signal is present in the Nasal Index. This is consistent with known results in the field. Not just the Nasal Index but a bunch of other traits in facial morphology exhibit a strong bioclimatic signal, suggesting strong selective pressure on the only part of the human body exposed to the elements even in winter gear and even in the circumpolar region. 

Table 4 shows the percentage of variation explained by phenotypic distance from the San and ET. We see that Neutral and ET each explain roughly 11 percent of the variation in skull size and orbital size; neither explains CI; and ET explains 15.6 percent of the variation in the Nasal Index. More than three-fourths of the variation in these variables is not explained by either.

Table 4. Apportionment of individual craniometric variation.
           Skull size    Orbital size    Cranial Index    Nasal Index
Neutral         11.1%           11.7%             1.5%           4.5%
ET              11.8%           10.2%             1.2%          15.6%
Error           77.1%           78.1%            97.3%          80.0%
Source: Howells Craniometric Dataset. OLS-ANOVA estimates after controlling for sex. 
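
A sketch of how the apportionment in Table 4 might be produced, assuming individual-level columns for the trait, sex, neutral (distance from the San), and ET; type-II sums of squares from statsmodels is one reasonable choice, and the treatment of the sex term is an assumption rather than a description of the original analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def apportion_variation(df: pd.DataFrame, trait: str) -> pd.Series:
    """Share of variation in `trait` attributed to sex, the neutral factor,
    ET, and the residual, using type-II sums of squares."""
    fit = smf.ols(f'{trait} ~ sex + neutral + ET', data=df).fit()
    table = anova_lm(fit, typ=2)
    return table['sum_sq'] / table['sum_sq'].sum()

# e.g. apportion_variation(df, 'skull_size')
```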

Table 5 displays the portion of systematic (interdeme or interpopulation) variation explained by population history and ET. Interestingly, the population history signal is stronger than the bioclimatic signal for systematic variation in skull size and especially orbital size. Neutral phenotypic distance from the San, our population history variable, explains 35 percent of the systematic variation in orbital size and 28 percent in skull size. ET explains 22 percent in both. Population history and ET explain more than half the systematic variation in both size variables. ET explains 27 percent of the systematic variation in the Nasal Index likely reflecting morphological adaptation to the macroclimate. The less said about the Cranial Index the better. 

Table 5. Apportionment of systematic craniometric variation.
           Skull size    Orbital size    Cranial Index    Nasal Index
ET              24.0%           22.1%             1.5%          28.5%
Neutral         26.9%           36.8%             4.2%          10.0%
Error           49.1%           41.1%            94.4%          61.5%
Source: Howells Craniometric Dataset. OLS estimates adjusting for sex-ratio.

Remarkably, the average shares of ET and population history are each more than a sixth but shy of a fifth, adding up to 37 percent of the systematic variation across the four variables. If we drop the Cranial Index and average the other three morphological variables, each factor explains roughly 24 percent of the variation. In the horse race between population history and ET, we have a draw. The balance is more uneven in the cranial size variables, where population history has the upper hand. What happens if we introduce race dummies?

Table 6. Apportionment of systematic craniometric variation.
                 Skull size    Orbital size    Cranial Index    Nasal Index
ET                    23.8%           31.2%             1.1%          16.3%
Neutral               24.4%           19.5%             1.2%           5.1%
Europe dummy           4.1%            0.0%             0.7%           7.0%
Asia dummy             0.0%            0.0%            16.5%           6.1%
America dummy          0.0%            3.4%             0.2%           3.9%
Pacific dummy          0.3%            0.9%             0.9%           0.5%
Error                 47.4%           45.0%            79.5%          61.3%
Source: Howells Craniometric Dataset. Estimates in bold are significant and those in italics are insignificant at the 5 percent level. OLS estimates adjusting for sex-ratio.

We see that race is pretty much a useless fiction. It gives us no handle at all on craniometric variation. The best we can say is that Asian heads are more globular. Interestingly, ET and population history exchange rankings in explaining orbital size after controlling for race. But the overall picture is unchanged. 

In order to be sure that we are not picking up spurious correlations, we fit linear mixed-effects models. We allow for random effects by deme and admit fixed effects for sex and race. We report the number of continental race dummies (out of four) that are significant in each regression.

Table 7. Linear mixed-effects model estimates.
                                      Skull size    Orbital size    Cranial Index    Nasal Index
Intercept                                    Yes             Yes              Yes            Yes
Sex dummy                                    Yes             Yes              Yes            Yes
Deme random effect                           Yes             Yes              Yes            Yes
Neutral                                   -7.153          -1.208            0.972          2.620
ET                                        -0.895          -0.209           -0.155          0.590
Number of race dummies significant             0               0                1              2
Source: Howells Craniometric Dataset. Estimates in bold are significant at the 5 percent level.
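
A sketch of the mixed-effects specification just described, using statsmodels’ MixedLM with a random intercept for each deme and fixed effects for sex, continental race, neutral distance, and ET; the formula and column names are assumptions, not the original script.

```python
import statsmodels.formula.api as smf

def fit_mixed(df, trait: str):
    """Linear mixed-effects model: random intercept by deme (population),
    fixed effects for sex, continental race, neutral distance, and ET."""
    model = smf.mixedlm(f'{trait} ~ sex + C(race) + neutral + ET',
                        data=df, groups=df['population'])
    return model.fit()

# e.g. print(fit_mixed(df, 'skull_size').summary())
```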

Our main results are robust to the inclusion of random effects for demes. The Cranial Index is bunk. The Nasal Index contains a strong bioclimatic signal but an insignificant population history signal. The gradients of ET and phenotypic distance from the San are significant for skull size and orbital size. Note that the sex dummy is always significant due to dimorphism—the dimorphism index for skull size in the dataset is around 1.15. But race dummies are rarely significant. Indeed, of the 16 dummies for race in the above regressions only 3 were significant. And these had mostly to do with “the wrong latitude problem”: New World population morphology can be expected to be adapted to the paleoclimate of Siberia, so it is not surprising that the coefficient on the dummy would absorb that systematic error.

The results presented above are congruent with known results from dental, cranial, and postcranial morphology. The basic picture that is emerging suggests that some skeletal traits are developmentally-plastic so that they reflect health status (eg stature, femur length); some are selectively neutral (eg temporal bone, basicranium, molars) so that they can be used to track population history; and finally, some have been under selection and likely reflect bioclimatic adaptation (eg, nasal shape, orbital size, skull size, pelvic bone width). 

In the 1990s and the early 2000s there was a sort of panic in physical anthropology related to genetics. The genomic revolution threatened to put people out of business. But it has become increasingly clear that the genomic revolution has turned out to be a dud. Most efforts to tie phenotypic variation to genomic variation have failed utterly. So far the best use of DNA for understanding human variation has turned out to be as a fancy version of fingerprinting. So if you have ancient DNA samples, you can track population history. It has since been shown that morphological variation itself can be used to track population history just as effectively as DNA markers. With the advent of new techniques such as geometric morphometrics, the resurgence of interest in understanding morphological variation, and the manifest failure of DNA as the key to understanding variation in human morphology, we are truly in the midst of an unannounced golden age in physical anthropology.


In lieu of references: See the splendid work by, among others, Brace (1980), Beals (1983, 1984), Ruff (1994), Relethford (2004, 2010, 2017), Roseman (2004), Harvati and Weaver (2006), von Cramon-Taubadel (2014), and Betti et al. (2010).

 


When Was the Industrial Revolution?

Metrics of everyday living standards are problematic. Commonly used measures like real median income, real median household consumption, and real per capita income rely on fallible national economic statistics. Above all, National Income Accounting may be blind to integral aspects of the standard of living. Accounts may be fudged by governments in countries with weak independent institutions. Finally, such statistics rely on judgements encoded in adjustments for representative consumption bundles, purchasing power, and effective exchange rates. Of course, the entire enterprise relies quite heavily on assumptions about the plausibility of reducing human well-being to consumption bundles.

Anthropometric alternatives such as stature and BMI are confounded by morphological adaptation to the paleoclimate. Bigger bodies generate more heat so that situated populations adapted to warmer climes tend to be smaller than those adapted to colder climes in accordance with Bergmann’s rule. This means that the cross-sectional variation of stature and BMI cannot be interpreted straightforwardly as reflecting differences in everyday living standards. However, time-variation in anthropometric measures (and the cross-section of dynamic quantities) can be usefully interpreted as measuring changes in living standards. To wit, the Dutch-Indian difference in contemporary stature is less reliable than the Dutch-Indian difference in gains in stature (say, over the past century).

Actuarial alternatives are more promising. Mortality and morbidity data capture health insults that are directly indicative of net nutritional status. Since the latter is an irreducibly joint function of disease environment and nutritional intake, it goes to the heart of everyday living standards. Actuarial alternatives such as life expectancy are not confounded by adaptation to the paleoclimate since there is no equivalent of Bergmann’s rule for life history variables. Instead variables such as life expectancy capture contemporaneous environmental burdens—epidemiological and thermal—that are indeed of interest to those investigating variation in living standards.

Table 1a. Effective Temperature and Living Standards.
Effective Temperature PCGDP Stature (cm) Life Expectancy
ET < 14 23,537 174 76
14 < ET < 16 12,526 171 73
ET > 16 8,439 167 68
Source: Clio Infra, Binford (2001), author’s computations. Population-weighted means for N=99 countries. 

The above differences in the variables explain why Stature (r=-0.736, p<0.001) is a stronger correlate of Effective Temperature (ET) than Life Expectancy (r=-0.360, p<0.001) and PCGDP (r=-0.378, p<0.001). They also explain why, controlling for income, ET is uncorrelated with Life Expectancy (t-Stat=-1.5) but not with stature (t-Stat=-8.0). Whatever causal effect ET has on Life Expectancy is explained by variation in per capita income. This is not true of stature, presumably because ET is correlated with variation in the paleoclimate, which is causally related to stature and other body size variables via Bergmann’s rule.
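
A sketch of the country-level correlations and the income control reported above, assuming a dataframe with columns ET, pcgdp, stature, and life_expectancy assembled from Clio Infra and Binford (2001); population weighting is omitted for brevity, so these are illustrative, not a reproduction of the reported figures.

```python
from scipy import stats
import statsmodels.formula.api as smf

def et_correlations(df):
    """Pearson r (and p-value) of ET with each living-standards measure."""
    return {col: stats.pearsonr(df['ET'], df[col])
            for col in ('pcgdp', 'stature', 'life_expectancy')}

def et_tstat_controlling_for_income(df, outcome: str) -> float:
    """t-statistic on ET after controlling for per capita income."""
    return smf.ols(f'{outcome} ~ ET + pcgdp', data=df).fit().tvalues['ET']

# Per the text: roughly -1.5 for 'life_expectancy' and -8.0 for 'stature'.
```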

Map of Binford’s Effective Temperature thresholds

Parenthetically, we note that if we use Binford’s thresholds for storage (ET=15.25) and terrestrial plant dependence (ET=12.75), then we obtain a version of Table 1a that is less effective at partitioning modern societies by living standards. See Table 1b below. The map above displays Binford’s thresholds.

Table 1b. Effective Temperature and Living Standards.
Effective Temperature PCGDP Stature (cm) Life Expectancy
ET < 12.75 22,012 174 75
12.75 < ET < 15.25 24,164 174 78
ET > 15.25 8,781 167 68
Source: Clio Infra, Binford (2001), author’s computations. 

ET is a linear function of absolute latitude (r=-0.944, p<0.001). ET is meant to capture the basic thermal parameter of the macroclimate. Together temperature, precipitation and topography (elevation, terrain, soil, drainage) structure the ecology of situated populations in the ethnographic present just as they did in prehistory. Economic history, prehistory, and anthropology are not as far from each other as they seem. But we have digressed far enough. Let us return to living standards in Britain.

Global precipitation

If you accept my argument that life expectancy is the best measure of everyday living standards we have, then the transformation of British living standards can be dated quite precisely. The essence of the Malthusian Trap was that real gains in living standards could not be sustained. Given the energetic constraints of preindustrial economies, population growth wiped them out. Thus we find that forty was a sort of rough upper bound on British life expectancy under the Malthusian Trap. The British Industrial Revolution, 1760-1830, had no discernible impact on British life expectancy. It is only in 1870 that British life expectancy begins to pull away from forty. Fifty was only breached in 1907; sixty in 1930; seventy in 1950; and eighty in 2000. Britons could expect to live twice as long at the end of the 20th century as in 1870 or 1550. 20 of the 40 years of life expectancy gained over the past 150 years were gained in the 40-year period 1910-1950; 10 have been gained in the 68 years since 1950; and 10 were gained in the first 40 years of the secondary revolution, 1870-1910. 1910-1950 is the hockey-stick that takes you from the turn-of-the-century classical to the mid-century modern.

The Great Divergence

The evidence from stature is also consistent with this periodization. The problem with using body size variables like stature is that, unlike life expectancy, we don’t have a Malthusian ballpark against which to judge modern morphology. As I explained, European body size over the very long run is explained by population history. European gracilization (shrinking bodies) and decephalization (shrinking brains) since the medieval period is an active area of investigation, although still poorly understood.

Body size

However, time-variation of stature in the ethnographic present can be interpreted as measuring time-variation in everyday living standards. That is all we really need to date the departure. And that too points to the last quarter of the nineteenth century as the beginning of the divergence. Most of the gains in stature were concentrated in the period 1920-1960, corroborating the finding from British life expectancy. The hockey-stick is a story of the early-twentieth century.

Stature

The empirical evidence from both anthropometric and actuarial metrics suggests that it is time to cut the British Industrial Revolution down to size. It is time to recognize it for what it was: a “revolution” largely confined to cotton textile manufacturing that pointedly failed to transform everyday living standards in Britain. The real departure came with the secondary industrial revolution, 1870-1970, which was not confined to Britain but was rather a transatlantic affair. It witnessed the generalized application of machinery powered by fossil fuels to perform work everywhere from farms to factories. More generally, it was characterized above all by the increasingly ubiquitous application of science and technology to concrete problems.

But there was much more at play than technology and knowhow. For it involved a massive integration of the globe that, as Geyer and Bright put it, destroyed the capacity of the world’s macroregions to sustain autonomous histories. The onset of their ‘global condition’ takes place in the middle decades of the nineteenth century. The key to this transformation was rail. Sail was competitive with steam on the open ocean through the nineteenth century. The topology of the world economy thus couldn’t have been transformed by cheap and efficient transport by steamships, because sailing ships were already cheap and efficient.

The disconnectedness of the world economy was not a function of weak connections between macroregions. Instead it was local, defined by the tyranny of distance in the interiors of the great landmasses of worlds old and new. Until the advent of rail, transport over land was prohibitively expensive, condemning lands far from waterways to isolation. The sea-borne world economy was correspondingly limited to the maritime world. A larger, more integrated and more intrusive world economy emerged with rail, which allowed the bounty of the interior to be sold on the world market. The international division of labor that emerged on this iron frame had much more bite than the one that characterized the world economy confined to the maritime world.

Ghost acres had little bearing on the British kitchen table until the late-nineteenth century. To be sure, Britons had been addicted to imported drug foods (sugar, tea, coffee, tobacco) from slave plantations for centuries. But as late as 1870, only 10 percent of British meat was imported. By 1910, Britain was importing 40 percent, largely beef from Argentina and lamb from New Zealand. The ghost acres finally increased the proportion of high quality foods in the British diet. Recall that beef is extraordinarily land-intensive. In the present-day US, according to a recent study, producing one megacalorie (Mcal, 1,000 kcal) of beef requires 147 square meters of land, compared to just 5 square meters for chicken and pork. Since land productivity was considerably lower then, beef must have been even more land-intensive than it is today. The ghost acres were thus absolutely necessary for the transformation of British diets and therefore British living standards.

So Pomeranz is right about the ghost acres but wrong about the timing. Ghost acres did not transform British diets until the last quarter of the nineteenth century. As I suggested in the great British meat trade, the transformation of British living standards required not only the opening of the American interior but also one of the definite technical solutions that make up the secondary (real) industrial revolution: in this case, solving the problem of transoceanic mechanical refrigeration. Without the chilling solution, Chicago could not have monopolized the British beef trade in the 1880s, nor could Argentina have replaced the US as a supplier in the 1900s. So I am not saying that rail was sufficient. What I am saying is that rail was necessary. Moreover, the British beef trade was ultimately based on the harvesting of great pastures in the interior of the New World. This required rail not only in the Anglo newlands but also in Argentina.

The opening of the interiors also required great migrations from the two Anglo oldlands. It also required the expulsion of native populations with great violence. In the American West, not only was there great military resistance by the horse cultures of the Great Plains Indians; during the mid-nineteenth century, the Sioux acted as a great power equal to the United States in Great Plains diplomacy and warfare. As Richard White notes,

In a sense, the Fort Laramie Treaty marked the height of Sioux political power. … With the Sioux and their allies so thoroughly dominating the conference, the treaty itself amounted to both a recognition of Sioux power and an attempt to curb it. But when American negotiators tried to restrict the Sioux to an area north of the Platte, Black Hawk, an Oglala, protested that they held the lands to the south by the same right the Americans held their lands, the right of conquest: “These lands once belonged to the Kiowas and the Crows, but we whipped those nations out of them, and in this we did what the white men do when they want the lands of the Indians.”

The warfare between the northern plains tribes and the United States that followed the Fort Laramie Treaty of 1851 was not the armed resistance of a people driven to the wall by American expansion. In reality these wars arose from the clash of two expanding powers–the United States, and the Sioux and their allies. If, from a distance, it appears that the vast preponderance of strength rested with the whites, it should be remembered that the ability of the United States to bring this power to bear was limited. The series of defeats the Sioux inflicted on American troops during these years reveals how real the power of the Tetons was.

Sioux power, like that of the other Great Plains Indians, was based on the bountiful but precarious foundations of the horse trade and the bison herds in the middle decades of the nineteenth century. The last of the bison herds were wiped out by the locust of white hunters looking for hides in 1871-1875. But the decline of Sioux power was slow; they still managed to wage pitched battles against the US army into the last decade of the nineteenth century. So the expulsion of native populations was very far from an automatic process.

But even after native resistance was overcome, settlers had to clear the land. And so on … the point being that a whole lot more was ultimately involved in the transformation of British living standards that was not in place until the last quarter of the nineteenth century. Indeed, it only came together by the turn of the century. That’s why the hockey-stick is a story of the early twentieth century.


The British Refrigerated Meat Trade, 1880-1930

There were 8.2m city dwellers in Britain in 1850, dwarfing the 2.6m in the United States and the 1.6m in Canada, Australia, New Zealand, Argentina, Ireland and Denmark combined. At the very peak of British self-confidence, when everything was going for Britain, the London carnivore was deeply unhappy. He had heard too much already about British innovation, about the so-called industrial revolution going on up north, and about the promised bounty of ghost acres. He just didn’t see it. What he really wanted was prime beef and the choicest lamb. No more animals could be fattened on British soil, even on imported grain. European lands were running out of surplus to ship to Britain on account of the growth of their own appetites. Denmark and Ireland were still reliable, but both were as close to carrying capacity as the home counties. So … the ghost acres. The Londoner’s problem at mid-century was that livestock shipped 3,000 miles from New York suffered significant erosion of quality and weight loss. Put bluntly, it was shit. It did even worse coming 16,000 miles from the antipodes. Even the choicest cuts from imported livestock always sold at a significant discount to British prime. In any case, the settlement of the Anglo newlands had only just gotten underway.

Over the next half century, the human and slaughter-animal (cattle, sheep and pigs) populations of Belich’s Anglo newlands (the American West, Canada, Australia, and New Zealand) would triple, cleared cropland there would more than triple, and pasture would expand by a factor of seven (the factors can be recomputed from the table below; see the sketch that follows it). But most of the meat bounty was destined not for the plate of our insatiable Londoner; urban populations in the Anglo newlands also expanded by a factor of seven (proportionately more in Oceania than in Canada). New York in particular developed a voracious appetite for Mid-Western meat, which would soon leave little over for the mother trade. Deliverance for our hungry Londoner would come in the form of chilled prime beef from Argentina. But I am getting ahead of the story.

Vitals of the British Meat Trade

                                      Britain      US     CAN     AID
Urban population (million), 1850          8.2     2.6     0.4     1.1
Urban population (million), 1900         17.4    17.5     2.6     3.4
Population (million), 1850               27.2    23.6     3.2     9.5
Population (million), 1900               41.2    76.4    10.0    11.7
Pasture (thousand sq km), 1850           10.8    26.7     0.7    12.8
Pasture (thousand sq km), 1900           14.2   138.3    53.2    33.9
Cattle (million), 1850                    2.9    19.0     3.2    22.9
Cattle (million), 1900                    7.5    59.7    13.7    30.0
Pigs (million), 1850                      2.1    20.6     0.9     1.2
Pigs (million), 1900                      2.9    51.1     3.4     3.4
Sheep (million), 1850                    21.6     6.8     0.5     3.7
Sheep (million), 1900                    21.5     7.6    20.5     4.1
Cropland (thousand sq km), 1850           5.1    23.8     6.3     3.8
Cropland (thousand sq km), 1900           7.0    75.6    22.9     6.4

Source: Clio Infra. CAN = Canada, Australia and New Zealand; AID = Argentina, Ireland and Denmark.
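As a rough check, the growth factors quoted above can be recomputed from the US and CAN columns of the table. The sketch below does exactly that; the numbers are copied from the table, and treating US plus CAN as a stand-in for Belich’s Anglo newlands is my own simplification.

```python
# Rough check of the Anglo-newlands growth factors quoted above.
# Values are copied from the table (US and CAN columns); treating
# US + CAN as a stand-in for Belich's Anglo newlands is a simplification.

table = {
    # metric: {year: (US, CAN)}
    "urban population (m)": {1850: (2.6, 0.4), 1900: (17.5, 2.6)},
    "population (m)":       {1850: (23.6, 3.2), 1900: (76.4, 10.0)},
    "pasture (000 sq km)":  {1850: (26.7, 0.7), 1900: (138.3, 53.2)},
    "cropland (000 sq km)": {1850: (23.8, 6.3), 1900: (75.6, 22.9)},
    "cattle (m)":           {1850: (19.0, 3.2), 1900: (59.7, 13.7)},
    "sheep (m)":            {1850: (6.8, 0.5), 1900: (7.6, 20.5)},
    "pigs (m)":             {1850: (20.6, 0.9), 1900: (51.1, 3.4)},
}

for metric, years in table.items():
    start, end = sum(years[1850]), sum(years[1900])
    print(f"{metric:22s} {start:6.1f} -> {end:6.1f}   x{end / start:.1f}")
```

Run as written, this gives roughly a threefold increase in population, cropland and cattle, and close to a sevenfold increase in pasture and urban population, in line with the claims above.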

Before the meat could be shipped, the slaughter animal had to be fattened. Above all, this required the opening of Belich’s Anglo newlands to dense settlement, which in turn required tens of millions of migrants from the Anglo oldlands. But clearing the land was not enough. Until midcentury, the American interior could not be densely settled on transport networks confined to water. Cincinnati’s water-borne pork hegemony was thus precarious.

Packing MidWest 1840s

It was rail that opened up the American interior to dense settlement. It was rail that created Chicago. It was rail that solved the concrete problem of feeding the Chicago-New York-London pipeline, the central feeder belt of the British meat trade in the 1880s, which depended on rapid transport further inland. The American railway system was financed in large part by London bondholders. British bond finance was also critical in the Dominions proper, as indeed in Argentina. Some £20m of British capital was invested directly in the Argentine meat-packing companies.

But the population history and the rail network weren’t enough. Even with the capacity to produce that much meat assured, the technical problem of mechanical refrigeration on transoceanic ships had to be mastered. Straight-up freezing worked for mutton and lamb. So frozen New Zealand lamb was accepted as prime by our London carnivore when it arrived in the 1880s. After the turn of the century, New Zealand shipped more than one hundred thousand metric tons of frozen lamb to Great Britain every year. Argentinian and Australian lamb provided an additional one hundred thousand. New Zealand would ship an extraordinary one hundred and fifty thousand tons in 1922. The years after the world war were marked by the violence of the deflationary bloodletting that accompanied Britain’s bid to return to the Gold Standard. But at least our hungry Londoner could score some prime New Zealand lamb. Even the lamb from Argentina and Australia could be top-notch.

British_lamb_trade.png

The unit on the Y axis is metric tons.

But beef did not take well to freezing. For as Perron (1971) explained:

Frozen meat is kept at a temperature of between 14ºF and 18ºF, but between the temperatures of 31ºF and 25ºF large ice crystals form between the muscle fibres of the meat and this process ruptures some of the small vessels of the flesh. When the meat is thawed this gives it a sweaty, discoloured appearance and it loses a certain amount of moisture, making it less juicy when cooked. This effect is more noticeable in large carcases like beef; having a greater bulk than mutton and lamb they take longer to pass through the critical range of temperature where the large ice crystals are formed and the damage done. But meat can also be chilled, that is, kept at a temperature of 30ºF which is just above its freezing point and this means that the ice crystals do not form in the carcase.

The chilling solution (obviously) was worked out by the American meatpacking giants. The big four American meat-packers dominated the British chilled beef trade in the 1880s. Meanwhile, the British lamb trade was effectively a Kiwi monopoly. The great sucking sound of the London market (London relied on imported meat to a far greater degree than other British cities) had reoriented Belich’s Anglo newlands. The first big suppliers were Belich’s American Northwest and New Zealand.

Northwest

There was a major epidemiological panic arising from the discovery of diseased frozen shipments at the turn of the century (that’s the crash in beef imports in 1901 in the graph). It would prove to be a hiccup in the real story: the rise of chilled beef from Argentina, which would more than replace the Americans in the British beef trade. By the end of the decade, Argentina’s position in the British beef trade outstripped even New Zealand’s in the lamb trade. Of course, British lamb and mutton still predominated in the national market. But by 1914, imported meat accounted for 40 percent of British consumption. More than any other great power in history, Great Britain came to rely on ghost acres for its meat.

MeatTrade.png

In the overall scheme of things, our hungry Londoner was finally sated at the turn of the century, when British imports of refrigerated beef, lamb and mutton stabilized in the ballpark of a billion pounds a year. In 1922, the British refrigerated meat trade as a whole reached a (at least local) peak of more than a billion pounds, around half a million short tons.

British_meat_trade.png

In the 1890s, Argentina emerged as a major player in the British meat trade. Argentina was the solution to the problem posed by New York’s growing appetite for Chicago beef (increasingly joined by other American cities). In the 1900s, Argentina displaced the United States in the chilled beef trade and emerged as a near-peer of New Zealand in the lamb trade. Already by 1903, Argentina was supplying more refrigerated meat to Britain than any other nation.

British_meat_trade.png

Selected exporters only.

Britain’s refrigerated meat trade could survive the rise of the American carnivore and the reorientation of the American West toward New York. But this was far from an automatic process. In the six principal suppliers besides the United States, over the second half of the nineteenth century, some 30 million additional people helped clear 270 thousand additional square kilometers of pasture and 89 thousand more square kilometers of cropland on three continents, allowing them to raise 30 million more sheep and 45 million more head of cattle. A vast portion of world ecology was thus transformed to suit the taste of the British carnivore. Indeed, New Zealanders replaced their sheep with breeds more attractive to the London palate. Argentinians did the same with cattle. As did Australians and Canadians; even old Ireland and Denmark had to keep adapting to metropolitan tastes. Only the American West served the other pole of the Angloworld. Everyone else served London.

The British refrigerated meat trade began in 1875. By the turn of the century, the supply of meat to Britain had expanded and diversified well beyond the American Northwest. The trade came into its own and lasted well into the twentieth century; it was only in the 1950s that the British share of New Zealand lamb exports fell below 50 percent. The timing of the core phase of expansion, 1880-1910, suggests that we must file this under the secondary industrial revolution. Britain’s ghost acres finally came to bear in the last quarter of the nineteenth century. The increased availability of prime meat may have been directly responsible for the vanishing of the settler premium in Anglo-Saxon stature in the early twentieth century.

Stature (cm)
Britain US Canada Australia
1810 169.7 171.5
1820 169.1 172.2 171.5
1830 166.7 173.5 171.5
1840 166.5 172.2 170.4
1850 165.6 171.1 172.5 170.0
1860 166.6 170.6 172.0 170.6
1870 167.2 171.1 171.2 170.1
1880 168.0 169.5 171.2 171.1
1890 167.4 169.1 170.7 171.3
1900 169.4 170.0 169.9 172.3
1910 170.9 172.1 171.5 172.7
1920 171.0 173.1 173.0 172.8
1930 173.9 173.4 172.7
Source: Clio Infra.

This interpretation is consistent with the evidence from life expectancy. British life expectancy was still falling as late as 1870, and it is only after 1900 that it really picks up. No doubt indoor plumbing, urban sanitation, personal hygiene, and advances in medicine were all implicated in the transformation of everyday living standards recorded in the stature and mortality data. But the growth in per capita meat consumption from 91 lbs in 1880 to 131 lbs in 1909-1913 definitely played its part. What this meant in practice was that, on the eve of the world struggle, our London carnivore was eating nearly half again as much meat as he had a generation earlier. And not only was the quantity greater; the quality of chilled beef from Argentina and frozen lamb from New Zealand was finally up to the demanding standards of our discerning Londoner.

life_expectancy.png


Standard
Thinking

Anglo-Saxon Population History and World Power

The German Empire would not be proclaimed until January 1871, but it was forged in Bismarck’s splendid war against France in 1870. That was also the last year in which Germany would be more populous than the United States. Germany was born into relative demographic decline as a result of the settlement of the American West. In 1870, the population of Greater Britain (Britain, Canada, and Australia; we don’t have data for New Zealand and South Africa) was 37.0m, France’s was 38.4m, America’s 40.2m, and Germany’s 40.8m. All four were roughly at 40m; 1870 is really the crossover point of the population scissors. Over the next forty years, while France’s population grew by a mere 7 percent, Greater Britain’s grew by 53 percent and Germany’s by 58 percent. Both were dwarfed by America’s 131 percent. By 1910, France, Greater Britain, Germany, and the US weighed in at 41.2m, 56.5m, 64.6m, and 92.8m respectively (see the short check after the table below).

The stagnation of the French population, the fact that Greater Britain expanded demographically nearly as much as Germany, and German demographic decline relative to the United States all came to weigh heavily on the world question at the turn of the century. This was especially so because all four great powers were roughly at the technological frontier by then. Although the United States had a clear lead, the secondary industrial revolution occurred in all four poles (and beyond). Germany in particular was a major center of innovation in the mechanical arts, leading in many sectors (e.g., industrial chemistry and heavy industry) and catching up in many where the Americans had led (e.g., automobiles).

Population (millions)

        Greater Britain    United States    Germany    France
1870               37.0             40.2       40.8      38.4
1880               41.2             50.5       45.1      39.0
1890               45.5             63.3       49.2      40.0
1900               50.4             76.4       56.0      40.6
1910               56.5             92.8       64.6      41.2
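The growth rates quoted above follow directly from the table; here is a minimal sketch of the arithmetic:

```python
# Population growth of the four powers, 1870-1910, recomputed from the table above.
pop_1870 = {"Greater Britain": 37.0, "United States": 40.2, "Germany": 40.8, "France": 38.4}
pop_1910 = {"Greater Britain": 56.5, "United States": 92.8, "Germany": 64.6, "France": 41.2}

for power, p0 in pop_1870.items():
    p1 = pop_1910[power]
    print(f"{power:16s} {p0:5.1f}m -> {p1:5.1f}m   +{100 * (p1 / p0 - 1):.0f}%")
```

This reproduces the 7, 53, 58, and 131 percent figures quoted above.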

turn_pop.png

Life expectancy is a good measure of everyday living standards. The next figure shows that, by this measure, the improvement in German living standards accelerated around 1890. But there was no relative improvement in Germany’s position, because the Atlantic powers hit the toe of the hockey-stick at the same time. The Anglo-Saxons opened up a gap with France as well. No one was as rich as they were, or lived as long.

turn_life_expectancy.png

The evidence from stature is even more daunting from the perspective of an aspiring world power. Americans still towered over Germans. Greater Britain and Germany remained close throughout, although with a British edge. France, which was closer to the Anglo-Saxons in life expectancy, lagged behind even though it too participated in the onset of the hockey-stick.

turn_stature.png

Note that we have defined the average stature of Greater Britain as the population-weighted average of the British, Canadian, and Australian means. If you unpack Greater Britain, it turns out that Canadians and especially Australians enjoyed a definite settler premium; see the settler premium figure below. American supremacy in stature is therefore not surprising. Britons would become taller than Americans for the first time in 1930. But that is a story for another day. Instead of unpacking Greater Britain, the challenge is to expand it to encompass not just New Zealand and White British subjects in southern Africa, but Greater Britain in a thick sense: the predominance of the British diaspora in the offshore world. Proximately, what was required was to concentrate prime temperate land in Anglo-Saxon hands; and that in turn required the take-off of self-reproducing settler colonies, preferably junior geopolitical allies of Belich’s Anglo oldlands (see the schematic map), that could anchor the world position of the two Anglo-Saxon great powers.
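The weighting itself is just a population-weighted mean. Here is a minimal sketch with 1900 values: the statures are read off the stature table above (taking its columns as Britain, US, Canada, Australia), while the Canadian and Australian population weights are approximate 1900 figures supplied purely for illustration, since the tables in this post only report them combined with New Zealand.

```python
# Population-weighted mean stature for "Greater Britain", illustrated with 1900 values.
# Statures are from the stature table above; the population weights for Canada and
# Australia are approximate 1900 figures used here for illustration only.

stature = {"Britain": 169.4, "Canada": 169.9, "Australia": 172.3}  # cm
weight = {"Britain": 41.2, "Canada": 5.3, "Australia": 3.8}        # million (approximate)

greater_britain = sum(stature[c] * weight[c] for c in stature) / sum(weight.values())
print(f"Greater Britain mean stature, 1900: {greater_britain:.1f} cm")  # ~169.7 cm
```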

Settler_premium.png

Screen Shot 2018-10-31 at 3.24.26 AM

From Belich, Replenishing the Earth.

I still haven’t finished reading Belich’s Replenishing the Earth: The Settler Revolution and the Rise of the Anglo-World, 1783-1939, so I will reserve judgment on the first three-fourths of the book. I will say that his description of the wildcat banking and asset price bubbles of the Anglo-Saxon frontier is excellent. I also agree with him that explosive colonization was a bubble by construction. One went all in when one went to settle a fledgling colony. Things worked themselves out once enough talent showed up. Speculators abounded. Increasingly massive boom-bust cycles whipsawed the frontier. Boom towns expanded at prodigious rates, driven by investment booms, unregulated bank lending, furious speculation, and attendant asset price bubbles amid extraordinarily elevated rates of settler arrivals. At the heart of Settlerism itself was a Ponzi scheme; the Anglo-Saxon folie à famille in two senses: as the shared Anglo-Saxon madness, of course, but also as the self-accelerating aspect of settler success itself. The booms were in a fundamental sense self-igniting. They solved the problem of coordination through faith. Not just faith in God, but faith in the colony. The frontier attracted the believer like a magnet.

In the American West, there were three major medium-term cycles that peaked in 1837, 1857, and 1873 (the later great booms peaked in 1893 and 1907). After each of these busts, Belich argues, the West was re-colonized, reoriented to point towards the metropole, and peripheralized via a vertical division of labor: Cincinnati would no longer produce books and periodicals (88,000 books were published in the town in three months of 1831, Belich reports) but pork and grain; New York would supply the books and periodicals. Fair trade or not, this was the construction process of the Weltwirtschaft.

The topology of the world had been transformed by Anglo-Saxon settlement. Britain’s decline relative to Germany has been overestimated. If we take the product of life expectancy and population as the measure of war potential, instead of GDP, which is the product of per capita GDP and population, then Britain kept pace with Germany all the way. (We know the picture that emerges from income: Britain’s per capita GDP was 25 percent larger than Germany’s in 1900.)
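In symbols, the comparison is between a crude demographic index of war potential and national income,

$$ W = e_0 \times N \qquad \text{versus} \qquad Y = y \times N, $$

where $e_0$ is life expectancy at birth, $N$ is population, and $y$ is per capita GDP.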

turn_le_times_pop.png

France was guaranteed to be a member of any balancing coalition against Germany. The real question was Anglo-German relations. Given the Anglo-Saxon stranglehold on the maritime world-economy, their naval mastery, and their settlement of all available prime temperate land, there was no solution to the problem of wresting world control from Anglo-Saxon hands. Fisher’s ‘five keys that lock up the world’ were Anglo-Saxon property by the time the German Empire was proclaimed. So the German bid to be one of the four world policemen was thwarted by the difficulty of dethroning Great Britain, by the Franco-Russian alliance and the problem of a two-front war, and above all, by the settlement of the American West and the rise of the United States.

Population history is crucial to the Franco-German story, the Anglo-German balance, the rise of the United States to global mastery, and the cul-de-sac of German navalism. But was there no viable path for Germany to become a world power? The missed opportunity of 1905 points in the right direction. Above all, Germany needed to achieve military hegemony on the continent. In 1905, Russia was out of business and France lay exposed. Navalism came back to bite not only in 1914 but also in 1905, when the Kaiser decided to wait for a more favorable naval balance.

Standard
Thinking

How was the German question resolved?

Sovereignty is always shaped from below, and by those who are afraid  — Michel Foucault

Marc Trachtenberg’s A Constructed Peace: The Making of the European Settlement, 1945-1963, won the George Louis Beer Prize as well as the Paul Birdsall Prize.[1] It remains highly regarded in the field. Trachtenberg argues convincingly that the German question was at the heart of postwar international politics and its resolution was the key to the establishment of a stable international system. Since the great powers disagreed so profoundly on what was to be done with Germany and put a great deal of importance on that question, a stable pattern of East-West relations could not obtain until the German question had been settled one way or the other. As long as the German question remained unresolved, the specter of general nuclear war hung over East-West relations. Once it was resolved, the basic parameters of the bipolar world fell into place, East-West relations were stabilized, and the Cold War in effect came to an end.[2]

The entire future of Germany was open for reconsideration when postwar planning began during the war.[3] Was Germany to pay reparations? Was it to be deindustrialized and turned into an agrarian country to reduce its power, as envisioned in the Morgenthau Plan? Was Germany to be broken up into smaller statelets? Into two, three, or four pieces? Under whose sphere of influence were these pieces to fall? Who was going to control the Ruhr? Even after they had been agreed upon, were the occupation zones to be run separately by each occupying power as it wished? What was to be their socio-economic system? Were all non-fascist political parties to be tolerated in all zones? Or were the zones merely temporary, with a unified German state to be resurrected? And if so, was the German army to be reconstituted? And if German power was to be restored, was Germany going to be neutral or an ally of one of the three great powers?

Given the centrality of the German question to his account of the European postwar settlement, it is perplexing to find Trachtenberg begin his narrative at war’s end, some two years after official three-power negotiations began. A number of important decisions on the German question were in fact hashed out during the war; above all, the territorial division of Germany into occupation zones. The first steps in that direction were taken at the Moscow Conference in October 1943.[4] The British circulated a draft agreement on the zones of occupation on January 15, 1944.[5] On February 18, 1944, the Soviets accepted the British proposal for the eastern zone apparently without bargaining.[6] Why Stalin would accept a division that gave him control of the agrarian third of Germany is not clear.

So Trachtenberg is not interested in how the German question was resolved per se. What Trachtenberg does instead is mobilize it to explain why East-West relations took so long to stabilize. Secretary of State James Byrnes, ‘the real maker of American foreign policy during the early Truman period,’ we are told, pressed for ‘a spheres of influence settlement in Europe’ that the Soviets could get behind, whereby ‘each side would have a free hand in the area it dominated, and on that basis the two sides would be able to get along with each other in the future.’[7]

But a settlement of this sort did not come into being, not until 1963 at any rate. Why was it so long in the coming? Why did the division of Europe not lead directly to a stable international order?[8]

Trachtenberg’s answer is that profound disagreements on the German question prevented the emergence of a stable order. The Soviets were implacably opposed to the resurrection of German power, especially a nuclear-armed Germany, particularly one allied to the West. The US did not want an independent Germany. But the defense of Western Europe ultimately required the reconstitution of the German army. In the end, despite the fact that the Soviet Union had almost single-handedly defeated Hitler, the US was able to get its way on the German question. In effect, the US managed to impose its preferred outcome on the Soviet Union. Why?

Although Trachtenberg does not come right out and say it, the short answer is that the US leveraged its nuclear superiority to get its way on the German question. The first great confrontation came with the introduction of a common currency into the three Western zones in 1948, which meant in effect the creation of a West German state. It was this that triggered the Berlin Crisis. At the time the United States enjoyed a nuclear monopoly, and according to Trachtenberg, ‘as long as it was a question of purely one-sided air-atomic war’ the US was ‘sure to win in the end’.[9] The West could thus afford to stand firm in the face of Soviet pressure. And the Soviets backed down once it became clear that the United States was prepared to go all the way to general nuclear war in order to defend the West’s position in West Berlin.

America responded to the loss of nuclear monopoly in 1949 with an enormous buildup of air-atomic forces. By 1952 the Strategic Air Command had emerged as a war-winning first-strike weapon. It was in this context that the resurrection of the German army was put on the table. Stalin responded by sending the famous March 1952 Note suggesting that the Soviets would be willing to accept a unified Germany with free elections and even a capitalist economic system as long as it was guaranteed to be neutral. The West dismissed the offer as a mere ploy. Trachtenberg, following Gaddis, concurs.[10] But many serious scholars of Soviet foreign policy have argued that the offer was in earnest.[11]

A series of increasingly hostile confrontations occurred in 1958-1962 culminating in the Cuban Missile Crisis, when Khrushchev, emboldened by the Soviet acquisition of ICBMs capable of reaching US cities, decided to force a showdown on the question of the introduction of tactical nuclear weapons into the German army. Astonishingly, Trachtenberg devotes less than three pages to this final confrontation over the German question, concluding that the US ‘laid down an ultimatum’ and the USSR ‘acceded’ without explaining why.[12] As Trachtenberg himself had argued elsewhere,

It really does seem that “we had a gun to their head and they didn’t move a muscle”—that their failure to make any preparations for general war was linked to a fear of provoking American preemptive action. … The effect therefore was to tie their hands, to limit their freedom of maneuver, and thus to increase their incentive to settle the crisis quickly.[13]

The picture that emerges thus calls for a major revision of the account presented in the monograph, one that pays attention to the balance of strategic power as it came to weigh on international politics at crucial moments in the resolution of the German question, 1942-1963.

Notes

[1] Trachtenberg, Marc. A Constructed Peace: The Making of the European Settlement, 1945-1963. Princeton University Press, 1999.

[2] Except for a brief revival under Reagan in the early 1980s. See Stephanson, Anders. “Cold War Degree Zero.” Uncertain Empire: American History and the Idea of the Cold War (2012): 19-50.

[3] This paragraph is adapted from an earlier essay that appeared on my blog.

[4] Mosely, Philip E. “The occupation of Germany: New light on how the zones were drawn.” Foreign Affairs 28, no. 4 (1950): 580-604, p. 580.

[5] Ibid, p. 589. Of course, the British awarded the Ruhr to themselves.

[6] Ibid, p. 591.

[7] Trachtenberg, p. 4.

[8] Ibid.

[9] Trachtenberg, p. 89.

[10] Ibid, p. 129.

[11] See Willging, Paul Raymond. “Soviet foreign policy in the German question: 1950-1955.” (1975): 1199-1199, and references therein.

[12] Trachtenberg, pp. 352-355.

[13] Trachtenberg, Marc. History and strategy. Princeton University Press, 1991, p. 259.

Standard