Cognitive Test Scores Measure Net Nutritional Status

At the heart of the racialist imaginary is the notion of a natural hierarchy of the races. Not only are there discrete types of humans, racialism insists, but they are differentially endowed. Turn-of-the-century high racialism construed this racial hierarchy in terms of a racial essence. This racial essence was supposed to control men's character, merit, behavioral propensities, and capacity for refinement and civilization. To be sure, racial essence was thought of as multidimensional. But no educated Westerner at the turn of the century would have begged to disagree with the notion that the races could be arranged in a natural hierarchy.

What explains the hold of high racialism on the turn-of-the-century Western imaginary? Some of it was obviously self-congratulation. But that can't be the whole story. There were some pretty smart people in the transatlantic world at the turn of the century. Why did they all find high racialism so compelling? Because critical thinkers interested in a question sooner or later find themselves sifting through the scientific literature, a large part of what needs explaining is the scientific consensus on racialism. Put another way, we should ask why the best-informed of the day bought into high racialism.

Broadly speaking, I think there were three factors at play. First, in the settler colonies and metropoles of the early modern world, migrant populations from far away found themselves living cheek-by-jowl with others. This created a visual reality of discrete variation out of what were in fact smoothly-varying morphologies. What were geographic clines reflecting morphological adaptation to the macroclimate in the Old World appeared to be races in the New World. In effect, early modern population history created a visual reality that begged to be described as a world of discrete races.

Second, and more important, was the weight of the taxonomic understanding of natural history. The hold of the taxonomic paradigm was so strong that it seemed to be the only way to comprehend the bewildering human variation revealed by the collision of the continents. The existence of specific races and their place in the natural hierarchy might be questioned, but that racial taxonomy was a useful way to understand human variation was simply taken for granted. Unbeknownst to the best-informed of the day, this was a very strong assumption to make about the world.

Third, and most important, was the sheer weight of the explanandum. What made racial taxonomy so compelling was what it was mobilized to explain: the astonishing scale of global polarization. As Westerners contemplated the human condition at the turn of the century, the dominant fact that cried out for explanation was the highly uneven distribution of wealth and power on earth. It did really look like fate had thrust the responsibility of the world on Anglo-Saxon shoulders; that Europe and its offshoots were vastly more advanced, civilized, and powerful than the rest of the world; that Oriental or Russian armies simply couldn't put up a fight against a European great power; that six thousand Englishmen could rule over hundreds of millions of Indians without fear of getting their throats cut. The most compelling explanation was the most straightforward one. To the sharpest knives in the turn of the century drawer, what explained the polarization of the world was the natural hierarchy of the races.

It is this that distinguishes racialism from racism. The former is fundamentally an explanation of global polarization; the latter is a politico-ethical stance on the social and global order. In principle, it is possible to be a racialist without being a racist, but not vice versa. In practice, however, few racialists could sustain politico-ethical neutrality on race relations.

During the nineteenth century, the discourse of Anglo-Saxon self-congratulation morphed from the traditional mode that saw Anglo-Saxons as blessed by Providence to the notion that they were biologically superior to all other races on earth. Driven by settler colonial racialism, the vision of the colorblind empire was definitively shelved by London in favor of a global racial order after the turn of the century. Things came to a head on the South Africa question, where the settlers demanded apartheid. London gave in after a brief struggle. The resolution of the South Africa question in 1906 was a key moment in the articulation of the global color line.

The first real pushback against high racialism came from scholars at Columbia in the 1930s. Franz Boas and his students, most prominently Ruth Benedict, led the charge. They punctured the unchallenged monopoly of high racialism, but the larger edifice survived into the Second World War. The discourse of high racialism collided with reality at the hinge of the twentieth century. As Operation Barbarossa began, Western statesmen and intelligence agencies without exception expected the Soviet Union to collapse under the German onslaught in a matter of weeks. If France capitulated in six weeks, how could the Slav be expected to stand up to the Teuton for much longer? That the Slav could defeat the Teuton was practically unthinkable in the high racialist imaginary. Not only did the Soviet Union not collapse, it went on to single-handedly crush what was regarded as the greatest army the world had ever seen. This was because Stalinism proved to be a superior machine civilization to Hitlerism where it mattered—where it has always mattered to the West—on the battlefield. The Slav could indeed defeat the Teuton. The evidence from the battlefield required an unthinkable revision of the natural hierarchy of the races, one directly antithetical to the core of the racialist imaginary, ie Germanic racial supremacism.

It would seem that Auschwitz, that great trauma of modernity, more than anything else pushed racialism beyond the pale. If so, it took surprisingly long. It was not until the sixties that racial taxonomy became unacceptable in the scientific discourse. Recall that it was in 1962 that Coon was booed and jeered at the annual meeting of the American Association of Physical Anthropologists, an association that had until quite recently been the real home of American scientific racialism. The anti-systemic turn of the sixties opened the floodgates to radical critiques of the mid-century social order and the attendant conceptual baggage, including a still-pervasive racialism.

It took decades before racialism was pushed beyond the boundaries of acceptable discourse. But by the end of the century a definite discipline came to be exercised in Western public spheres. In the Ivory Tower, a consensus had emerged that races did not reflect biological reality but were rather social constructs with all-too-often violent consequences. Whatever systematic differences did exist between populations were considered to be trivial and/or irrelevant to understanding the social order. This consensus continues to hold the center even though it is fraying at the margins.

In fact, one can date the rise of neoracialism quite precisely: to the publication of Herrnstein and Murray's The Bell Curve in 1994. Although most of the book examined intelligence test scores exclusively for non-Hispanic White Americans and explored the implications of relentless cognitive sorting on the social order, critics jumped on the single chapter that replicated known results on racial differences in IQ. (Responding to the hullabaloo, the American Psychological Association came out with a factbook on intelligence that was largely consistent with the main empirical claims of the book.) Herrnstein passed away around the time the book came out. But, ever since then, Murray has been hounded by protestors every time he makes a public appearance. At Middlebury College last year, a mob attacked Murray and his interviewer, Professor Allison Stanger, who suffered a concussion after someone grabbed her hair and twisted her neck. I think we must see this aggressive policing of the Overton Window (the boundary of acceptable discourse) as the defining condition of what I call neoracialism. It is above all a counter-discourse. Those espousing these ideas feel themselves to be under siege; as indeed they are.

Neoracialism retains the taxonomic paradigm of high racialism but it is not simply the reemergence of high racialism. For neoracialism is tied to two hegemonic ideas of the present that were nonexistent back when high racialism had the field to itself.

The first of these is the fetishization of IQ. The test score is not simply seen as a predictor of academic performance, for which there is ample evidence. (For the evidence from the international cross-section, see Figure 1.) It is seen much more expansively as a test of overall merit; as if humans were motor engines and the tests were measuring horsepower. The fetish is near-universal in Western society; right up there with salary, the size of the house, and financial net worth. It is an impoverished view of man, sidelining arguably more important aspects of the human character: passion, curiosity, compassion, integrity, honesty, fair-mindedness, civility, and so on.

fig1_test_score_EA.png
Figure 1. Source: Lynn and Meisenberg (2017).

The second hegemonic idea is the blind acceptance of the reductionist paradigm. Basically, behavior is reduced to biology and biology to genetics. Both are dangerous fallacies. The first reduction is laughable in light of what may be called the first fundamental theorem of paleoanthropology: What defines modern humans is behavioral plasticity, versatility, and dynamism untethered to human biology. In other words, modern humans are modern precisely in as much as their behavior is not predictable by biology.

The reduction of biology to genetics is equally nonsensical in light of what may be called the first fundamental theorem of epigenetics: Phenotypic variation cannot be reduced to genetics, nor even to genetics plus the environment. For even after controlling for both, there is substantial biological variation left unexplained. Not only is there substantial phenotypic variation among monozygotic twins (those who have identical genomes), even genetically-cloned microbes cultured in identical environments display significant phenotypic variation. The only way to make sense of this is to posit that subtle stochastic factors perturb the expression of the blueprint contained in DNA even under identical environmental conditions. This makes mincemeat out of the already philosophically-tenuous paradigm of reductionism.

So neoracialism is a counter-discourse in contemporary history that is rigidly in the grip of the three fallacies: that racial taxonomy gives us a good handle on human variation, that IQ is the master variable of modern society and the prime metric of social worth, and that DNA is the controlling code of the human lifeworld à la Dawkins. Because the last two are much more broadly shared across Western society, including much of the Left, the critique of neoracialism has been relatively ineffective.

But beyond the rigidities of the contemporary discourse, there is a bigger reason for the rise of neoracialism. Simply put, racialism was marginalized without replacement. The explanatory work that racialism was doing in making sense of the world was left undone. No compelling alternative explanation for global polarization was offered. Instead, under the banner of Modernization, population differences were simply assumed to be temporary and expected to vanish in short order under the onslaught of Progress. Indeed, even discussion of global polarization became vaguely racist and therefore unacceptable in polite company. With the nearly-uniform failure of the mid-century dream of Modernization, the door was thus left ajar for the resurrection of essentialist racial taxonomy to do the same explanatory work it had always performed. It is the absence of a scientific consensus on a broad explanatory frame for human polarization that is the key permissive condition for neoracialism.

A scientific consensus more powerful than neoracialism, based on thermoregulatory imperatives, is emerging that ties systematic morphological variation between contemporary populations to the Pleistocene paleoclimate on the one hand, and contemporary everyday living standards (nutrition, disease burdens, thermal burdens) on the other. Disentangling the two has been my obsession for a while. I finally found what those in the know already knew. Basic parameters of the human skeleton are adapted to the paleoclimate.

At the same time as these developments in paleoanthropology and economic history, recent progress in ancient-DNA research has highlighted the importance of population history. I tried to bring the paleoanthropology and population history literatures into conversation by showing how population history explains European skeletal morphology over the past thirty thousand years. My argument is based on known facts about the paleoclimate during the Late Pleistocene and known facts about population history. The paleoclimate is the structure and population history is the dynamic variable. That is what allows us to predict dynamics in Late Pleistocene body size variables. We were of course forced into this explanatory strategy by the brute fact that population history and the paleoclimate are the main explanatory variables available for the Pleistocene.

I do not mean to imply that technology and organization did not causally affect human morphology, eg we have ample evidence of bilateral asymmetry in arm length as an adaptation to the spear-thrower. But all such adaptations are superstructure over the basic structure of the human skeleton, which reflects morphological adaptation to the paleoclimate of the Pleistocene that began 2.6 Ma. In Eurasia in particular, it reflects adaptation to the macroclimate after the dispersal of Anatomically Modern Humans from Africa 130-50 Ka. Because the Late Pleistocene, 130-10 Ka, is so long compared to the length of time since the Secondary Products Revolution 5 Ka, and especially the Secondary Industrial Revolution 0.1 Ka, and despite the possibility that evolution may have accelerated in the historical era, the Late Pleistocene dominates the slowest-moving variables of the human skeleton. Indeed, I have shown that pelvic bone width and femur head diameter reflect adaptation to the paleoclimate of the region where the population spent the Late Pleistocene.


I feel that economic historians have been barking up the wrong tree. The basic problem with almost all narratives of the Great Divergence (as the historians frame it) or the exit from the Malthusian Trap (as the economists would have it) is that the British Industrial Revolution, 1760-1830, did not revolutionize everyday living standards in England. This is easy to demonstrate empirically whether one relies on per capita income, stature, or life expectancy. In general, the economic, anthropometric, and actuarial data is consistent with a very late exit from the Malthusian world; the hockey stick is a story of the 20th century.

The evidence is rather consistent with the hypothesis that the extraordinary polarization of living standards across the globe is a function of the differential spread of the secondary industrial revolution, 1870-1970 (sensu stricto: the generalized application of powered machinery to work on farms, factory floors, construction sites, shipping, and so on; sensu lato: the application of science and technology to the general problem of production and reproduction). So proximately, what needs to be explained is the spread of the secondary industrial revolution. Specifically, the main explanandum is this: Why is there a significant gradient of output per worker (and hence per capita income) along latitude? Why can't tropical nations simply import the machinery necessary to increase their productivity to within the ballpark of temperate industrial nations and thereby corner the bulk of global production? Despite the wage bonus and the 'second unbundling', global production has failed to rebalance to the tropics. Why?

I proposed a simple framework that tied output per worker to the intensity of the work performed on the same machine; and the intensity of work performed to the thermal environment of the farm, factory floor, construction site, dockyard, and so on—in accordance with the human thermal balance equation. This was not very original—the claim is consistent with known results in the physiological and ergonomics literature. What I am saying in effect is that the difference is not so much biology, education, or culture. To put it bluntly, educated and disciplined male, White, Anglo-Saxon workers from the Midwest would not be able to sustain work on the same machine at the same intensity in Bangladesh as in Illinois. Like the Bangladeshis, they would have to take frequent breaks and work less so as not to overheat. This mechanically translates into lower productivity and hence lower per capita income.
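
The mechanism can be made concrete with a toy steady-state sketch (all numbers are hypothetical, chosen purely for illustration): a worker can only sustain a pace whose average heat production the environment allows the body to shed; past that threshold, rest breaks are forced and output falls mechanically.

```python
def sustainable_duty_cycle(work_rate_w, rest_rate_w, dissipation_capacity_w):
    """Largest fraction of time that can be spent working such that average
    metabolic heat production does not exceed what the thermal environment
    lets the body dissipate. All rates in watts; a toy steady-state model."""
    if dissipation_capacity_w >= work_rate_w:
        return 1.0   # the climate never limits the pace of work
    if dissipation_capacity_w <= rest_rate_w:
        return 0.0   # heat is stored even at rest
    return (dissipation_capacity_w - rest_rate_w) / (work_rate_w - rest_rate_w)

# Hypothetical illustrative numbers, not measurements:
heavy_work, rest = 500, 100   # watts of metabolic heat
print(sustainable_duty_cycle(heavy_work, rest, 400))  # cool factory floor: 0.75
print(sustainable_duty_cycle(heavy_work, rest, 250))  # hot, humid floor: 0.375
```

On these made-up numbers, the same worker on the same machine delivers half the output in the hot environment: precisely the mechanical translation from thermal environment to productivity claimed above.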

I appreciate the increasing attention to thermal burdens in light of global warming. Recently, The Upshot had a fascinating report tying gun violence outdoors, but not indoors (!), to temperature spikes. Earlier, in an extraordinary study, Harvard's Goodman tied students' test scores to the thermal burden on the day of the test. That goes some way towards explaining the latitude gradient in the international cross-section of test scores—an uncomfortable empirical fact well outside the Overton Window that neoracialists insistently point to as empirical "proof" of the relevance of racial taxonomy to understanding the global order. We'll return to the empirical evidence from the correlates of test scores presently.


Following in the footsteps of Murray and Herrnstein, Richard Lynn published The Global Bell Curve in 2008. It went to the heart of the matter. Here, global polarization is tied precisely to test scores. Some populations are rich and powerful, and others are poor and weak because, we are told, the former are cognitively more endowed than the latter. That's the master narrative offered here. One finds different versions in other neoracialist accounts. Rushton claimed racial differences in cranial capacity, which we debunked. Wade finds racial taxonomy more persuasive than the geographic clines favored by geneticists. In what he calls his more speculative chapters, Wade does the full double reduction: differences in behavioral patterns are mobilized to explain the world order, and DNA is mobilized to explain behavioral patterns. Gene-culture coevolution and other speculations are thrown around to explain global polarization.

The question at the heart of neoracialism isn't, What is the controlling variable for human variation per se? It is, What is the controlling variable for human variation that is relevant to the social order, the global order, the manifest and multiple hierarchies of our lifeworld? A presumed innate hierarchy of the races in general ability is doing all the work in neoracialism, for it is mobilized to explain all of global polarization in one fell swoop. Neoracialism looks for a master variable that explains the presumed rank ordering of human societies. Whence the fetishization of IQ (thought to be ultimately controlled by DNA, although all efforts to explain test scores by DNA have been frustrated). In the minds of neoracialists and those who are tempted to join them, it is test scores that explain the cross-section of per capita income. A lot is thus at stake in that equation. That's the context of Lynn's The Global Bell Curve.

The rigidities of the liberal discourse have meant that a very fruitful way of thinking about systematic variation in the test scores of human populations has been overlooked. We argue that test scores contain information on everyday living standards. Put simply, they are a substitute for per capita income, stature, or life expectancy. They measure net nutritional status, which is a function of nutritional intake and expenditure on thermoregulation, work, and fighting disease. (Net nutritional status is just jargon for the vicious feedback loop between nutrition and disease; they must be considered jointly.) We demonstrate this by showing that the best predictors of test scores are the Infant Mortality Rate and animal protein (dairy, eggs, and meat) intake. More generally, we show that all metrics of net nutritional status are strong predictors of test scores.

While it may be conceivable that variation in cognitive ability explains variation in per capita income, given the universal availability of modern medicine, the claim that variation in cognitive ability explains variation in the Infant Mortality Rate is really tenuous. Given the empirical correlation we document below, it is much more plausible that tropical disease burdens suppress test scores than vice versa. In other words, it makes no sense to infer that the racial hierarchy supposedly revealed by test scores explains disease burdens, but it makes ample sense to infer that disease burdens explain test scores. This is the crucial wedge of our intervention.


We begin our empirical analysis by noting the Heliocentric pattern of test scores. Table 1 displays Spearman's rank correlation coefficients for test scores on the one hand and absolute latitude and Effective Temperatures on the other. Spearman's coefficient is a distribution-free, robust estimator of the population correlation coefficient (r) and more powerful than Pearson's coefficient. Effective Temperature is computed from the warmest and coldest monthly mean temperatures (in Celsius) via the formula in Binford (2001): ET = (18*max - 10*min)/(max - min + 8). ET is meant to capture the basic thermal parameter of the macroclimate.
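
The Binford formula can be sketched in a few lines of Python; the two climates below are hypothetical illustrations, not observations from the dataset:

```python
def effective_temperature(t_max, t_min):
    """Binford (2001) Effective Temperature from the mean temperatures of
    the warmest (t_max) and coldest (t_min) months, in degrees Celsius."""
    return (18 * t_max - 10 * t_min) / (t_max - t_min + 8)

# Hypothetical climates for illustration only:
print(effective_temperature(22, -3))   # strongly seasonal temperate climate
print(effective_temperature(27, 25))   # equatorial climate: higher ET
```

Higher ET near the equator and lower ET at high latitudes is what generates the tight coupling between ET and absolute latitude, and hence the opposite signs of their correlations with test scores in Table 1.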

Table 1. Heliocentric polarization in test scores.
Spearman's rank correlation coefficients (N=86).

                         IQ test score   IQ test score   Educational
                          (measured)      (estimated)    Attainment
Absolute latitude             0.65            0.65           0.63
Effective Temperature        -0.64           -0.62          -0.59

Source: Lynn and Meisenberg (2017), Trading Economics (2018), Binford (2001), author's computations. Estimates in bold are significant at the 1 percent level.

Note that Effective Temperature is tightly coupled to absolute latitude (r=-0.949, p<0.001). Our estimate of the correlation coefficient between absolute latitude and measured IQ test scores is large and significant (r=0.654, p<0.001), implying a gradient so large that moving 10 degrees away from the equator increases expected test scores by 4 points. Effective Temperature is also a strong correlate of measured IQ (r=-0.639, p<0.001), implying that an increase in Effective Temperature by just 5 degrees reduces expected test scores by 11 points. The fundamental question for psychometrics then is, What explains these gradients?
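
Spearman's coefficient is simply Pearson's coefficient computed on the rank-transformed data, which is what makes it distribution-free. A minimal pure-Python sketch (the latitude/score pairs are toy values for illustration, not the Lynn and Meisenberg data):

```python
def rank(xs):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                            # extend the block of ties
        mean_rank = (i + j) / 2 + 1           # average position, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Toy data (hypothetical, for illustration only): a monotone but
# non-linear relationship still yields rho = 1.
lat = [5, 15, 25, 35, 45]
score = [70, 72, 80, 95, 99]
print(spearman(lat, score))
```

Because only ranks enter the computation, any monotone relationship, however non-linear, yields rho = 1; this robustness to outliers and odd marginal distributions is why the tables report Spearman rather than Pearson coefficients.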

Answering this question requires pinning down the proximate causal structure of test scores. We argue that test scores measure net nutritional status. Table 2 marshals the evidence. We see that all measures of net nutritional status (Infant Mortality Rate, animal protein intake per capita, life expectancy, stature, protein intake per capita, and calorie intake per capita) are strong correlates of test scores. The strongest is Infant Mortality Rate (r=-0.859, p<0.001) which captures the vicious feedback-loop between nutrition and disease burdens. By itself, Infant Mortality Rate explains three-fourths of the variation in measured test scores reported by Lynn and Meisenberg (2017). The results are robust to using estimated test scores or Educational Attainment instead of measured test scores.

Table 2. Pairwise correlates of test scores.
Spearman's rank correlation coefficients.

                                   IQ test score   IQ test score   Educational
                                    (measured)      (estimated)    Attainment
Infant Mortality Rate (log)            -0.86           -0.85          -0.84
Animal protein intake per capita        0.80            0.76           0.76
Life expectancy                         0.76            0.68           0.70
Stature                                 0.74            0.74           0.73
Per capita income (log)                 0.68            0.59           0.74
Protein intake per capita               0.64            0.82           0.63
Calorie intake per capita               0.54            0.67           0.57

Source: Lynn and Meisenberg (2017), World Bank (2014), Trading Economics (2018), FAO (2018), author's computations. Estimates in bold are significant at the 1 percent level.
fig2_IMR_test_score.png
Figure 2. Infant mortality rate (World Bank, 2014) predicts test scores (Lynn and Meisenberg, 2017).

Our estimate for the correlation between animal protein intake per capita and measured test scores is also extremely large (r=0.802, p<0.001). Astonishingly, each additional gram of animal protein intake per capita increases expected test scores by 0.4 points. By itself, animal protein intake explains two-thirds of the international variation in mean test scores. Although not as strong, calorie intake per capita (r=0.541, p<0.001) and protein intake per capita (r=0.649, p<0.001) are also strong correlates of test scores. The pattern suggests that the lower test scores of poor countries reflect lack of access to high-quality foods like eggs, dairy and meat.

ap_controls_test_score.png
Figure 3. Animal protein (FAO, 2018) predicts test scores (Lynn and Meisenberg, 2017).

The main import of the extremely high correlations between test scores on the one hand and the Infant Mortality Rate (r=-0.859, p<0.001) and animal protein intake per capita (r=0.802, p<0.001) on the other is clear: Health insults control investment in cognitive ability. Energy and nutrition that could be channeled towards cognitive ability have to be diverted to dealing with health insults arising jointly from malnutrition and disease.

We have checked that stature is much more plastic than pelvic bone width. And we have shown that the divergence in stature is a story of the 20th century, ie it carries information on modern polarization. The strong correlation between test scores and stature (r=0.760, p<0.001) therefore suggests that test scores also contain information on modern polarization. The strength of the correlation between test scores and life expectancy (r=0.761, p<0.001) reinforces this interpretation.

stature_test_score.png
Figure 4. Stature predicts test scores. Source: Lynn and Meisenberg (2017), Clio Infra (2018).

What Table 2 shows is that systematic variation in test scores between populations is a function of systematic variation in net nutritional status. The correlations make no sense if neoracialism is approximately correct, but they make ample sense if test scores reflect net nutritional status. If a country has low test scores you can be somewhat confident that it is poor (R^2=44%) but you can be much more confident that it faces malnutrition (R^2=64%) and especially high disease burdens (R^2=74%). This implies that the causal vector points the other way, from polarization to test scores. Far from explaining global polarization as in the high racialist imaginary, test scores are explained by inequalities in everyday living standards. The evidence from psychometrics adds to other evidence of global polarization from economics, anthropometry, and demography that continues to demand explanation.
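
The R^2 figures for malnutrition and disease burdens quoted above are just the squares of the corresponding coefficients in Table 2; a quick check:

```python
# Shared variance (R^2) is the square of the correlation coefficient.
r_animal_protein = 0.80     # animal protein intake per capita (Table 2)
r_infant_mortality = -0.86  # Infant Mortality Rate, log (Table 2)

print(f"malnutrition:    R^2 = {r_animal_protein ** 2:.0%}")    # 64%
print(f"disease burdens: R^2 = {r_infant_mortality ** 2:.0%}")  # 74%
```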

We have suggested that the current radio silence over systematic variation in test scores fosters neoracialism. We must break this silence and talk openly and honestly about such questions lest we leave the interpretation of these patterns to neoracialists. More generally, an effective rebuttal of neoracialism requires a more compelling explanation of global polarization. Given the discursive hegemony of science, I want to persuade progressives that this requires taking science as the point of departure. My wager is that a much more compelling picture is indeed emerging from the science itself that explains global polarization, and more generally, systematic variation in human morphology and performance, not in terms of racial taxonomy but rather in terms of the Heliocentric geometry of our lifeworld that structures thermoregulatory, metabolic, and epidemiological imperatives faced by situated populations.

11 thoughts on “Cognitive Test Scores Measure Net Nutritional Status”

  1. “Not only did the Soviet Union not collapse, it went on to single-handedly crush what was regarded as the greatest army the world had ever seen. ”
    While I like the post, this is historical nonsense. Without extensive Western Allied (British, then American) support, the Nazis would have defeated the Soviet Union. The campaign in Yugoslavia and Greece chewed up crucial panzer units and fighting on Crete decimated the paratroop division. The Arctic convoys provided materiel that was crucial in the defence of Moscow, which was the logistical lynchpin of the Western front. The Allied bombing campaign on Germany diverted the Luftwaffe from the Eastern Front and thousands of 88mm guns, which would have otherwise been stopping Soviet tanks. The Western Allies provided a large proportion of the trucks and almost all the new rolling stock and enough food to provide (from memory) about one meal per day per Soviet citizen. The Soviet fightback, and then advance, was very impressive, but not remotely “single-handed”.

  2. Having now read the linked paper on Western perceptions of Soviet strength, a few caveats.
    (1) Not mentioning the death toll from the “terroristic control of the countryside” looks a little disingenuous, particularly when the paper enthuses over growth stats.
    (2) Soviet wartime morale was in large part because of the break from internal terrorism. Personal accounts of the period again and again talk about how very different perceptions were from the mass fear that had previously operated. The release from this fear plus the common patriotic project made the experience of The Great Patriotic War quite different from what came before (and, indeed, after). Note that the regime itself also changed its framing of action–hence “Great Patriotic War”.
    (3) Soviet efforts were entirely concentrated against Nazi Germany; the reverse was not true.
    (4) Cultural analysis is not necessarily the same as race analysis. I get that race is how Americans avoid talking about class and culture. But, even in the US, there were always competing discourses besides a straightforwardly racialist one. This was even more true in Britain, where the imperial elite's involvement in local cultures was a much bigger theme. The “ornamentalist” element in a monarchical-aristocratic empire was significant–hence the King of Tonga being put in front of the Crown Prince of Germany in a Jubilee procession because he was, well, a King and not a Crown Prince.
    (5) The Battle of Warsaw in 1920 shaped Western perceptions of the Soviet army, perceptions that the purges and Soviet difficulties in the Winter War then further reinforced.
    (6) The Nazi labour force, as it included impressed/enslaved subject workers, was nowhere near as consistently motivated as the Soviet labour force.
    (7) There was an element of “adding inputs” in the Stalinist industrial miracle–notably moving labour from farm to factory. It did not, after all, prove to be a sustainable path over the long term. If Mancur Olson (in his posthumous Power and Prosperity) is correct, this is because purges were necessary for the long-term effectiveness of the system by breaking up networks and providing information flows to the centre. Though the Kim Family Regime casts some doubt on that.
    These are caveats, however, the paper itself is an impressive piece of work.

  3. As for the post itself, as I really despise race talk, I really liked the post. You might be interested in Fredrik deBoer's new book; it seems very much congruent with what you are saying.

    1. Interesting post.

      I am only interested in this subject as a layman. I am not a statistician (though I do have a mathematical background, so I can understand many of the issues involved).

      I am not an American, so I don’t have any particular interest in “racialism” or “liberal discourse” etc. (except insofar as objects of curiosity). I am interested in the effects of DNA on human behaviour.

      One comment you made about test scores and IQ is the following:

      “Whence the fetishization of IQ (thought to be ultimately controlled by DNA, although all efforts to explain test scores by DNA have been frustrated).”

      AFAIK, this is not true. Polygenic risk scores are a very useful tool for predicting educational attainment. I don’t have the paper in front of me right now, but one study looked at the polygenic risk scores of a cohort: people in the bottom fifth of the educational-attainment polygenic risk scores had a 10% chance of graduation, while the top fifth had a 55% chance.

      The reason, as I understand it, why it is so hard to directly connect IQ to DNA is that intelligence is polygenic, meaning it is affected by thousands of DNA differences. Teasing out the effects requires extremely large sample sizes, and even then it is hard to disentangle the issues. The methods available today explain only a fraction of the variance, but they still explain something (see above), and they are getting better all the time.
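      The additive logic the commenter describes can be sketched in a few lines: a polygenic score is just a weighted sum of allele counts, with per-SNP weights taken from GWAS effect-size estimates. A minimal sketch, with every SNP name, effect size, and genotype below invented purely for illustration:

      ```python
      # Hypothetical sketch of how an additive polygenic score is computed:
      # a weighted sum of allele counts (0, 1, or 2 copies of the effect
      # allele at each SNP), weighted by GWAS effect-size estimates.
      # All identifiers and numbers here are made up.

      # Invented GWAS effect-size estimates (betas) for three SNPs.
      effect_sizes = {"rs0001": 0.02, "rs0002": -0.01, "rs0003": 0.005}

      # One individual's genotype: count of effect alleles at each SNP.
      genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

      def polygenic_score(genotype, effect_sizes):
          """Additive polygenic score: sum of beta_i * allele_count_i."""
          return sum(effect_sizes[snp] * count for snp, count in genotype.items())

      score = polygenic_score(genotype, effect_sizes)
      print(round(score, 3))  # 2*0.02 + 1*(-0.01) + 0*0.005 = 0.03
      ```

      Real scores sum over hundreds of thousands of SNPs with tiny individual betas, which is why the huge sample sizes mentioned above are needed to estimate the weights at all.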

      My knowledge about this mostly comes from Robert Plomin’s book “Blueprint”.

      1. Plomin is at the extreme end of over-enthusiasm about polygenic scores. The massive effort to reduce IQ to genetics has been a total failure. Half of IQ may be heritable. But all hopes put into identifying the genetic basis for intelligence were dashed. «From the 1990s until 2017 no replicable associations were found. GPS from these early GWAS, which we refer to as ‘IQ1’, predicted only 1% of the variance of intelligence in independent samples.»

        https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5985927/

        With a million SNPs, the authors tell us, we may be able to predict 10 percent of the variation in cognitive ability. Oh yeah? Here’s a short list of things that are stronger predictors of IQ: birthweight, adult height, systolic blood pressure and practically any other measure of health status; head circumference and face width; cranial volume, gray matter volume and especially the size of the prefrontal cortex; throwing accuracy and bilateral asymmetry; myopia; risk appetite. I am sure I am missing a whole bunch of features. Is this all the geneticists/psychometricians have to show for the work over the past thirty years? Are they really serious about the pat on the back? Or do they recognize the abject failure and are putting up a brave face? I suspect it is the latter.
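        To put the percentages being traded here on a common footing: “predicting X% of the variance” (R²) translates into a correlation of √R², so the figures sound smaller still when stated as correlations. A quick conversion, using only the 1% and 10% figures already quoted in this thread:

        ```python
        import math

        # "X% of variance explained" (R-squared) versus the corresponding
        # correlation r: since R^2 = r^2, we have r = sqrt(R^2).
        # The 0.01 and 0.10 inputs are the figures quoted in this thread;
        # the conversion itself is standard.

        def correlation_from_r2(r2):
            """Correlation implied by a given fraction of variance explained."""
            return math.sqrt(r2)

        print(round(correlation_from_r2(0.01), 2))  # 0.1  (the early 'IQ1' scores)
        print(round(correlation_from_r2(0.10), 2))  # 0.32 (the projected figure)
        ```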

        The fact that polygenic scores for educational attainment give us a better handle on IQ than polygenic scores for IQ itself is an interesting puzzle. But it does nothing to compensate for the abject failure to explain the genetic heritability of IQ through DNA.

        1. Sorry for this very late reply.

          You are correct that GWAS studies only predict 10% of the variation. This number will only increase as larger samples are collected.

          Your alternative measures, it seems to me, are potentially confounded by genetics (as Plomin notes in his paper about a similar measure, socioeconomic status). Is there any study of, say, nutritional status that tries to disentangle the effect of genetics on the measure?

            1. I read the post, but I don’t understand what that has to do with Plomin. He isn’t even mentioned in it. There are some interesting things in the linked post, which I can discuss there if you like.

              The post tries to use phylogenetic distances as a proxy for “race”. But we don’t need “racial” differences; we can simply have individual differences (modulated by genetics). Perhaps an example will help illustrate the point.

              Another commenter mentioned Freddie deBoer’s book above. The following example comes from his book:

              Suppose we observe LeBron James’s son, and he is good at basketball. If we speculate from this observation that he’s good because he’s black, that would be a racist argument (and wrong). But if we speculate that he’s good because he’s LeBron’s son, that argument is not racist: it’s an argument about parentage. There’s nothing implausible or racist about his son having, say, above average height or physique, traits which he partly got from his father. (Again, this argument is just for illustration, not a proof of anything).

              Coming back to the issue of the measure confounded by genetics, perhaps one way to study the issue would be to look at adoption studies. Many people adopt more than one child. Adopted children in the same home would likely have similar nutritional status, but they would not share the genetic endowment. Or we could look at studies of fraternal and monozygotic twins.
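              The twin-comparison design mentioned here has a classic back-of-the-envelope version, Falconer’s formula: identical (MZ) twins share roughly all their DNA, fraternal (DZ) twins roughly half, so doubling the gap between their trait correlations gives a crude heritability estimate. A minimal sketch with invented correlations (not real data for any trait):

              ```python
              # Hypothetical sketch of Falconer's formula, the classic twin-study
              # estimator: MZ twins share ~100% of DNA, DZ twins ~50%, so the gap
              # in their trait correlations is attributed to genes.
              # The correlations below are made up for illustration.

              def falconer_heritability(r_mz, r_dz):
                  """h^2 = 2 * (r_MZ - r_DZ): doubled gap between twin-type correlations."""
                  return 2.0 * (r_mz - r_dz)

              def shared_environment(r_mz, r_dz):
                  """c^2 = 2*r_DZ - r_MZ: shared-environment share under the same ACE logic."""
                  return 2.0 * r_dz - r_mz

              r_mz, r_dz = 0.75, 0.45  # invented twin correlations for some trait
              print(round(falconer_heritability(r_mz, r_dz), 2))  # 0.6
              print(round(shared_environment(r_mz, r_dz), 2))     # 0.15
              ```

              Adoption studies run the same logic in reverse: adoptees in one home share the environment (nutrition included) but not the genes, so any resemblance left over is environmental.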

              From what I know, environmental factors don’t usually give any significant independent explanation.
