
Effective Constraints on Sovereign Borrowers

Krugman traces the idea masquerading as theory (“Modern Monetary Theory”) to Abba Lerner’s “functional finance” doctrine from 1943:

His argument was that countries that (a) rely on fiat money they control and (b) don’t borrow in someone else’s currency don’t face any debt constraints, because they can always print money to service their debt. What they face, instead, is an inflation constraint: too much fiscal stimulus will cause an overheating economy. So their budget policies should be entirely focused on getting the level of aggregate demand right: the budget deficit should be big enough to produce full employment, but not so big as to produce inflationary overheating.

Simply put, the idea is that sovereigns with obligations in their own fiat currency are constrained only by inflation in how much debt they can pile up. Krugman points to the potential problem of snowballing debt whereby debt servicing claims a larger and larger portion of the public purse. But this is a function of interest rates:

If r<g, which is true now and has mostly been true in the past, the level of debt really isn’t too much of an issue. But if r>g you do have the possibility of a debt snowball: the higher the ratio of debt to GDP the faster, other things equal, that ratio will grow. And debt can’t go to infinity — it can’t exceed total wealth, and in fact as debt gets ever higher people will demand ever-increasing returns to hold it.
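Krugman’s snowball condition is easy to verify in a toy simulation. Here is a minimal sketch, iterating the standard debt-dynamics identity (all parameter values are illustrative, not calibrated to any actual sovereign):

```python
def debt_path(d0, r, g, primary_deficit, years=50):
    """Debt-to-GDP ratio after iterating d' = d*(1+r)/(1+g) + deficit."""
    d = d0
    for _ in range(years):
        d = d * (1 + r) / (1 + g) + primary_deficit
    return d

# r < g: the ratio converges (limit = deficit / (1 - (1+r)/(1+g)) = 1.56)
print(debt_path(d0=1.0, r=0.02, g=0.04, primary_deficit=0.03))
# r > g: the ratio compounds without bound; this is the snowball
print(debt_path(d0=1.0, r=0.04, g=0.02, primary_deficit=0.03))
```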

So here we have another effective constraint besides inflation. How much debt sovereigns may pile up is a function of the compensation demanded by investors. Now this compensation is not uniform across borrowers. Far from it. Figure 1 displays spreads against the German bund for selected sovereigns.


Figure 1. Selected sovereign bond spreads.

The United States is much further along in the monetary cycle than the eurozone. But why do Italy and Portugal have to pay so much more than Germany to access capital markets? The next figure shows that there is no relationship between debt-to-GDP ratios and sovereign bond yields. The rank correlation coefficient is not only insignificant but bears the wrong sign (r=-0.15, p=0.40). Restricting the sample to so-called emerging markets does not affect the result (r=-0.04, p=0.82). Note that we have already excluded Argentina (debt ratio of 57% and bond yields at 26%) and Japan (debt ratio 253% and yield 0%) since they are clear outliers. So there is simply no evidence that bond markets pay much attention to the debt burden of sovereigns.


Figure 2. Sovereign debt-to-GDP ratios and bond yields. The data is from 2017.
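For the record, here is a minimal sketch of the rank-correlation check reported above; the file and column names are hypothetical stand-ins for the underlying data:

```python
import pandas as pd
from scipy.stats import spearmanr

# One row per sovereign: 2017 debt-to-GDP ratio and bond yield,
# with Argentina and Japan assumed already dropped as outliers.
df = pd.read_csv("sovereign_debt_yields_2017.csv")

rho, p = spearmanr(df["debt_to_gdp"], df["bond_yield"])
print(f"Spearman rho = {rho:.2f}, p = {p:.2f}")  # reported above: -0.15, 0.40
```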

So how do bond markets judge sovereign borrowers? The short answer is that yields compensate bondholders for a number of perceived risks. Sovereigns may default, inflation may erode the value of the bond, exchange rate movements may impose losses on bondholders, and interest rates may rise and thereby reduce the value of the bond. Moreover, bond yields reflect compensation not just for the expected value of the bond but also for the risk that its value may deteriorate, if for no other reason than that markets can be fickle (so that tomorrow you may not be able to sell your bond for the price you paid for it, even if everyone considered that price fair today). All these risks are constantly reevaluated by markets. The diachronic pattern is controlled by the market price of risk, itself a function of risk appetite in global markets. The synchronic pattern, on the other hand, is controlled by the status of sovereigns.

Some sovereigns are regarded by bond markets as safe asset providers. I have identified some safe-asset providers in Figure 2. The main one missing is Japan, which would be far out to the right and bottom. Because the sovereign debt of safe asset providers is perceived to be default-remote and relatively protected against inflation and exchange rate movements, these assets can serve as collateral in the wholesale funding flywheel, the core of global financial intermediation. It is the practices of institutional players in this ecosystem that determine who is and who is not a safe asset provider. Safe assets can be identified by what happens to yields when the market as a whole tanks. The diagnostic pattern is that when shit hits the ceiling safe assets go up in value as investors flee to safety.

Hélène Rey has identified the curse of the regional safe asset providers. These are small countries whose debt is regarded as safe in wholesale banking practice. Even if their central banks would like to push up yields (say to defend their currency or fight inflation), adverse market developments may send yields tumbling down anyway. This is what happened to the Swiss central bank in 2015. Regional safe asset providers

… face a variant of the old ‘Triffin dilemma’: faced with a surge in the demand for their (safe) assets, regional safe asset providers must choose between increasing their external exposure, or letting their currency appreciate. In the former case, the increased exposure can generate potentially large valuation losses in the event of a global crisis…. In the limit, as the exposure grows, it could even threaten the fiscal capacity of the regional safe asset provider, or the loss absorbing capacity of its central bank, leading to a run equilibrium. Alternatively, a regional safe asset provider may choose to limit its exposure, i.e. the supply of its safe assets. The surge in demand then translates into an appreciation of the domestic currency which may adversely impact the real economy, especially the tradable sector. The smaller the  regional safe asset provider is, the less palatable either of these alternatives is likely to be, a phenomenon we dub the ‘curse of the regional safe asset provider.’

Large safe asset providers on the other hand are not so cursed precisely because of their size. But the more important point for our purposes is that big safe asset providers (Germany, Japan, and above all, the United States) are not, and in fact cannot be, punished by bond markets for fiscal profligacy. The reason for that is the structural shortage of safe assets in the global financial system.


Figure 3. The consumption-to-wealth ratio, a measure of the financial cycle, predicts global real rates. Both are computed as US-UK wealth-weighted averages. Source: Farooqui (2016).

The issue is not whether “MMT” holds in some toy model. The issue is to what degree sovereigns are disciplined by the bond market. The answer to that question depends on their structural position, which is in turn determined by wholesale banking practice. The big safe asset providers — the United States, Germany, and Japan — face considerable slack in bond market discipline because the world can’t get enough of their debt. Sovereigns not thus privileged are more exposed to bond market discipline. They may indeed have to worry about market perceptions of their public finances.

With inflation still pretty much dead and policy rates still pretty much on the floor, there is simply no case to be made for fiscal discipline for the big safe asset providers. In effect, big safe asset providers are not debt constrained. And this state of affairs will continue until the global financial system is transformed beyond recognition. It is in this sense, I believe, that Adam Tooze champions “MMT.”

I am not suggesting that President Warren should go on a debt binge. But worrying about bond market discipline for fiscal profligacy is to worry about precisely the wrong problem. The United States could double its outstanding debt-to-GDP ratio from 100 to 200 percent and still be perceived as less risky than Japan.



Phylogeny from Craniometrics

As we saw in the previous dispatch, different craniofacial characters are variously under the control of neutral drift, sexual selection, and thermoregulatory adaptation to the paleoclimate. I suggested that kosher inference of phylogeny (ie lineage) is difficult because the population history signal is confounded by natural selection. One way to go about this would be to control for dimorphism and absolute latitude. That doesn’t seem to work: one gets nonsense trees. Might there not be another way?

The law of large numbers dictates that if we average over a large number of characters, second-order factors will get averaged away, thereby revealing the dominant, first-order term. There is good reason to believe that the controlling factor for cranial morphology is neutral drift. As we shall see, this turns out to be right. In what follows we shall obtain phylogenetic trees from Hanihara’s and Howells’ craniometric datasets. The idea is to standardize all characters to have mean 0 and variance 1, and use distance measures between populations in this space to back out the underlying phylogenetic relationships.
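A minimal sketch of that procedure (the file name, column layout, and the choice of average linkage are all assumptions; the post does not specify a linkage method):

```python
import pandas as pd
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist

# Rows = populations, columns = craniometric characters (hypothetical file).
df = pd.read_csv("craniometric_population_means.csv", index_col=0)

# Standardize every character to mean 0, variance 1 so that no single
# measurement dominates the distance metric.
z = (df - df.mean()) / df.std(ddof=0)

# Euclidean distances in standardized character space; by the averaging
# argument, selection effects should wash out, leaving the drift signal.
d = pdist(z.values, metric="euclidean")

# Agglomerative clustering and the phylogram itself.
dendrogram(linkage(d, method="average"), labels=z.index.tolist())
plt.show()
```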


Figure 1. Phylogeny from craniofacial characters. Source: Hanihara (2000), author’s computations.

Figure 1 displays the phylogenetic tree obtained from the Hanihara sample. Read from the top-right, at the base of the tree. The first cluster of 7 correctly identifies western Eurasian phylogeny: eastern and western Europe are closest to western Asia; Europe, north Africa, and western Asia split last from northern and southern India. The second cluster of Sahul, Negrito and southern Africa identifies the really ancient populations. The split between this group and the others is correctly identified as having great time-depth. The bottom supercluster correctly identifies the eastern world, although the tree places the East-West split as having as great a time-depth as the San-Sahul split (which is known to be older). But the subdivisions within the eastern supercluster are correctly identified, with the possible exception of the close phylogenetic relationship between circumpolar peoples and Polynesians. All in all, not bad for just half-a-dozen craniofacial characters.

The Howells dataset has 82 linear measurements each for 2,524 crania from thirty populations. The sample is thus wide enough for the averaging method to really work. Figure 2 displays the longitude and latitude of the populations in the sample. This will help us judge whether the inferred tree makes sense.


Figure 2. Locations of demes in the Howells sample.


Figure 3. Phylogeny from craniometrics. Source: Howells Craniometric Dataset, author’s computations.

The accuracy of the inferred phylogenetic tree is simply astonishing. Read from the bottom-right. The Sahul peoples (Australia, Tasmania, and Tolai) are correctly identified as having split last from Bushman and Zulu; likewise the great time-depth of the Andamanese and their close relationship to the ancient clade. (Perhaps it is best to mentally pull the ancient cluster to the left and place it at the root of the tree.) The precise pattern of the eastern cluster (the next 13 above the Andamanese) is exactly right. The phylogenetic relationships in the Austronesian cluster from New Zealand to Easter Island (a result of recent Holocene expansions) are correctly identified in detail, as is the cluster’s close phylogenetic relationship with the eastern cluster. Further up, the Americans (Peru, Santa Cruz, Arikara) are mixed up with medieval Austro-Hungarians (Berg, Zalavar). Similarly, while the circumpolar peoples are correctly identified as having recently split, as are Dogon and Teita, these pairs are placed next to each other and to the ancient Egyptians and the medieval Norse. This may be because of diachronic patterns in morphology, such as those we identified in the European case. In any case, every recent split in the phylogram (the last of the forks) is accurate. The great time-depth of the ancient cluster is spot-on.

So the algorithm does an excellent job of predicting phylogeny, although second- and third-level branches are sometimes confounded (particularly for ancient and medieval populations, presumably because of diachronic patterns). What is clear is that craniometric distance is an excellent signal of phylogeny, suggesting both that cranial characters are under tight genetic control and that neutral drift is the controlling factor in craniometric variation.

Finally, we check that postcranial osteometric data does not predict phylogeny as well. Presumably this is because either postcranial skeletal morphology is not under tight genetic control, or it is but neutral drift is not the factor controlling osteometric variation. Whatever the case may be, as we shall see, the population history signal in the postcranial skeleton is relatively weak and easily confounded by selection.


Figure 4. Phylogeny from osteometrics. Source: Goldman Data Set, author’s computations.

We look at the Goldman osteometric dataset. We restrict the sample to the Old World and apply the same algorithm as before. Figure 4 displays the phylogram thus obtained. Although most recent splits are not too far off the mark, the prediction is unimpressive. Malaysia ends up with the Europeans, Australia with Madagascar, Tasmania and South Africa with China and the Philippines, and the Andamanese with the Congolese and the Indonesians! The algorithm thus fails catastrophically in predicting phylogenetic relationships between populations.

In sum, kosher inference of phylogeny is possible from craniometrics but not from osteometrics. This suggests that the former is under tighter genetic control than the latter, and/or contains a stronger population history signal that is less confounded by natural selection.


Postscript. The Howells phylogram can be improved by including the second moments. I figured out that the reason Hanihara’s craniofacial data yields such a convincing phylogram is that it contains both means and variances of the measurements. The second moments also contain phylogenetic information, since the variance of metric characters, like the second moment of the frequency of neutrally-derived discrete polymorphisms, is a function of distance from Africa. Indeed, the inclusion solves the problem in our previous estimate.
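In code, the fix amounts to widening the feature matrix with the second moments before clustering (file names hypothetical, linkage method assumed as before):

```python
import pandas as pd
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist

# Per-population means and variances of each measurement (hypothetical files).
means = pd.read_csv("howells_means.csv", index_col=0)
variances = pd.read_csv("howells_variances.csv", index_col=0)

# Append the variances as extra columns, then standardize as before so
# that first and second moments carry comparable weight in the distances.
feats = means.join(variances.add_suffix("_var"))
z = (feats - feats.mean()) / feats.std(ddof=0)

dendrogram(linkage(pdist(z.values), method="average"), labels=z.index.tolist())
plt.show()
```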


Figure 5. A more precise phylogram from craniometrics. (Corrected version.)

The new estimate is hard to argue with. The Teita and Dogon are correctly identified as closest to Zulu and Bushman. All others are descendants of this ancient cluster of demes around the San. The San-Sahul split occurs at great time-depth, followed by the ancient split with the Andaman Islanders. The medieval Austro-Hungarians (Berg and Zalavar) are accurately placed closest to the medieval Norse and the ancient Egyptians. The Americans (Arikara, Santa Cruz, and Peru) are correctly shown as closely related to eastern Eurasians (Japanese, Atayal, Hainan, and the Philippines). The Austronesians are placed close to the Anyang and the circumpolar peoples (Eskimo and Buriat). The algorithm does, however, get the time-depth of the most ancient splits wrong. The ancient subtree including the San and the Andamanese has the greatest time-depth (60ka), the Euro-Asian split has the second greatest time-depth (45ka), then you have the Americans and the circumpolar people splitting off (14ka), and finally the Austronesians split off from Taiwan (5ka). The phylogram, although highly accurate in detail, gets the greatest time-depths catastrophically wrong.


Do Craniofacial Characters Reflect Neutral Drift, Climatic Adaptation, or Sexual Selection?

New York University primatologist James Higham delivered an extraordinarily interesting talk on primate reproductive ecology at the Natural History Museum this week. He pointed out that otherwise morphologically quite similar primate species can coexist sympatrically without interbreeding — some half a dozen of them in one particular rainforest. The vast bulk of the sexual signaling is carried out by facial characters. All primates know who is conspecific — a potential sex partner — and who is not through an exquisitely subtle sensitivity to facial characters. It is an extraordinary fact that no two people who aren’t identical twins look alike; that we can recognize thousands, perhaps millions, of distinct faces; that we never forget a face even if the name of the acquaintance escapes us. Across the primate order, we are tuned in to extremely subtle differences in facial characters.

I asked him after the talk what would be a good signal that a character is under sexual selection. He told me something that resolved a longstanding problem for the Policy Tensor. The signal, he told me, is sexual dimorphism. If a character is dimorphic, it’s a good bet that it is under sexual selection. We have established that scale parameters of the human skeleton (pelvic bone width, femur head diameter, skull size) reflect thermoregulatory adaptation to the paleoclimate where the population spent the Upper Pleistocene. We decomposed cranial variation into that due to neutral drift and bioclimatic adaptation by projecting it onto distance from Africa (Khoi-San) and ET (or absolute latitude). If Higham is right, and I do believe he is, we can use dimorphism as the signal for sexual selection. We can then attempt a three-way decomposition. In what follows, we’ll examine a number of craniofacial shape variables or indices that control for scale. This will allow us to test the three causal vectors and identify characters that are neutrally derived, under selection for thermoregulatory control, or under sexual selection.

There are no races. There are demes or situated breeding populations that exhibit derived characters due to a combination of relative isolation (particularly among geographic isolates) and natural selection (adaptive or sexual). I have played Lewontin’s game of apportionment whereby one shows that race dummies explain a negligible portion of the phenotypic or genotypic variation. I am sure you are bored of it. So here’s another proof.

There is systematic (ie interdemic) variation in the frequency of certain alleles, in the frequency of blood groups, in the root structure of molars, and other polymorphisms (hair color, eye color). Many have been claimed to be racial characters. Yet, none of these map onto each other. Other characters covary smoothly—skin pigmentation (Gloger’s Rule), body size (Bergmann’s Rule), relative length of appendages (Allen’s Rule)—because they reflect climatic adaptation. None of this variability can be explained by positing that mankind has allopatric subspecies or continental races.

The malaria/sickle cell anemia connection is a good example; it is often cited as a Black/Bantu racial character. Yet that’s not what the frequency distribution shows (see next figure). For instance, the high frequency of the allele in Bengal and southeastern India is inconsistent with the coding of this trait as a Bantu character.

[Figure: geographic frequency distribution of the sickle cell allele. (SickleCell.jpg)]

Take another so-called racial character. Racial anthropologists claimed for a century that a diagnostic character of the Australian race was their large teeth. It is true that Australian molars show larger crown diameters on average relative to other continents. But there is substantial systematic variation within Australia. Demes in southwestern Australia have massive molars; those in the central, southeast, northwest and northeast regions do not. Positing the existence of the Australian race turns out to be actively misleading in understanding morphological variation. That’s pretty much the case with every so-called racial character. 

Even committed racial anthropologists were compelled to recognize the primacy of what they called subraces. In reality, what we have are thousands of demes that show significant and interesting variation. This variation is the explanandum of postracial physical anthropology. 

Hanihara (2000) looked at variation in a number of craniofacial characters. We start off by looking at his shape indices. Figure 1 shows the infraglabellar index, which captures the relative size of the infraglabellar notch (GOL/NOL in Howells’ labels), the distance between the nose and the brow. We see that it is roughly proportional to distance from Africa, with the geographic isolate Sahul standing out. This suggests that this character may be neutral. As we shall see later, our intuition is right.

A different pattern emerges with the gnathic index (BPL/BNL) that measures prognathism or how much the jaws protrude. We see that this is a derived condition in demes in both Africa and Sahul (making it what’s called a homoplasy). The pattern suggests that it emerged at opposite ends of the earth for different reasons, or at least under the control of different genes (similar to loss of skin pigmentation in north Asians and Europeans which has a different genetic basis).

Figure 3 displays the frontal flatness index (NAS/FMB), which measures the flatness of the face. Again, this appears to be a derived condition in Eastern Eurasians, although there is massive variation in this character within Eastern Eurasia—more than everyone else put together! We shall see that this character is under strong sexual selection in some Asian demes.

The Hanihara sample contains only male crania. In order to test our hypotheses, we must turn to the good old Howells Craniometric Dataset. Before we do that, I just want to show that distance from southern Africa (computed using the Haversine formula and known waypoints) gives us good control over the infraglabellar index. See next figure. Our estimated correlation coefficient is large and significant (r=0.577, p<0.0001), suggesting that this trait is neutral, ie not under natural selection.
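For reference, here is a sketch of that distance computation; the origin and waypoint coordinates below are illustrative stand-ins for the ones actually used:

```python
import math

def haversine(p, q, radius_km=6371.0):
    """Great-circle distance in km between (lat, lon) points in degrees."""
    (lat1, lon1), (lat2, lon2) = p, q
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlam = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

def distance_from_africa(deme, waypoints):
    """Sum great-circle legs from southern Africa through fixed waypoints
    (which force plausible overland routes) to the deme."""
    origin = (-30.0, 25.0)  # rough stand-in for the Khoi-San homeland
    path = [origin] + waypoints + [deme]
    return sum(haversine(a, b) for a, b in zip(path, path[1:]))

# e.g. a European deme routed through Cairo and Istanbul (illustrative):
print(distance_from_africa((48.2, 16.4), waypoints=[(30.0, 31.2), (41.0, 29.0)]))
```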

We begin by looking at sexual dimorphism in craniofacial characters. Table 1 displays sexual dimorphism indices for a number of craniofacial indices. The variability is astonishing. All characters are dimorphic in some demes, suggesting they may be under sexual selection. The posterior craniofacial index (ASB/ZYB), the transverse craniofacial index (ZYB/XCB) and the simotic flatness index (SIS/WNB) are dimorphic in the vast majority of populations in the Howells sample. These three characters may well be under sexual selection very broadly across the human race.
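Table 1 can be generated with a simple loop over demes and characters. A sketch, assuming individual-level data with population and sex columns (names hypothetical) and taking the dimorphism index to be the ratio of the sex means:

```python
import pandas as pd
from scipy.stats import ttest_ind

# One row per cranium: population, sex, and the shape indices
# (e.g. ZYB/XCB for the transverse craniofacial index).
crania = pd.read_csv("howells_individuals.csv")

def dimorphism(deme, char, alpha=0.05):
    """Ratio of sex means and a t-test for dimorphism in one deme."""
    g = crania[crania["population"] == deme]
    f = g.loc[g["sex"] == "F", char]
    m = g.loc[g["sex"] == "M", char]
    t, p = ttest_ind(f, m)
    return f.mean() / m.mean(), t, p < alpha  # index, t-stat, dimorphic?
```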

Table 1. Dimorphism indices. 
Character | Dimorphism | Number of demes dimorphic | Percentage of demes dimorphic
Gnathic Index | 0.991 | 5 | 19%
Posterior Index | 0.968 | 24 | 92%
Transverse Index | 1.039 | 25 | 96%
Upperfacial Index | 0.992 | 5 | 19%
Nasal Index | 0.978 | 7 | 27%
Orbital Index | 0.980 | 10 | 38%
Frontal Flatness Index | 0.980 | 5 | 19%
Orbital Flatness Index | 0.980 | 3 | 12%
Maxillary Index | 0.980 | 1 | 4%
Nasodacryal Index | 0.980 | 8 | 31%
Simotic Index | 1.192 | 19 | 73%
Source: Howells Craniometric Dataset, author’s computations.

Instead of using dimorphism indices directly, we shall use the t-Statistic for the test of equality of means between the sexes at the population level as the signal. And instead of deluging you with a barrage of estimates, I’ll present my final estimates from the Howells cross-section. Basically, the idea is that if we project the variation in these characters onto distance from southern Africa, absolute latitude, and our measure of dimorphism (the t-Statistic for equality of the sexes in the character at the population level) along the cross-section, we should be able to get a handle on which of the three variables controls which character.
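A sketch of what one such projection looks like for a single character (file and column names hypothetical):

```python
import pandas as pd
import statsmodels.api as sm

# One row per population: the character, distance from southern Africa,
# absolute latitude, and the dimorphism t-Statistic for that character.
df = pd.read_csv("howells_crosssection.csv")

# Standardize so that the coefficients are comparable across predictors.
z = (df - df.mean()) / df.std(ddof=0)

X = sm.add_constant(z[["dist_from_africa", "abs_latitude", "dimorphism_t"]])
fit = sm.OLS(z["nasal_index"], X).fit()
print(fit.tvalues)   # the standardized coefficients (tStat) of Table 2
print(fit.rsquared)  # the R-squared column
```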

Table 2. Standardized coefficients (tStat)
Character | Distance from Africa | Absolute latitude | Dimorphism | R-squared
Gnathic Index | 0.12 | -1.86 | -0.29 | 0.15
Posterior Index | -3.86* | 2.26* | -0.60 | 0.47
Transverse Craniofacial Index | 1.28 | -1.03 | -2.45* | 0.38
Upperfacial Index | 0.09 | 1.43 | -2.10* | 0.23
Nasal Index | -2.74* | -2.70* | -2.27* | 0.54
Orbital Index | 1.96 | -0.74 | -1.42 | 0.19
Frontal Flatness Index | -1.38 | 1.04 | -3.56* | 0.41
Orbital Flatness Index | -1.29 | -0.09 | -1.79 | 0.21
Maxillary Flatness Index | 1.61 | -0.29 | -2.29* | 0.28
Nasodacryal Index | 0.84 | 2.63* | -2.71* | 0.38
Simotic Index | 2.93* | 3.05* | -1.82 | 0.63
Infraglabellar Index | 2.20* | -0.57 | -0.07 | 0.21
Source: Howells Craniometric Dataset, author’s computations. Coefficients marked * are significant at the 5 percent level.

We can see from the results reported in Table 2 that the transverse craniofacial index, the upperfacial index, the frontal flatness index, and the maxillary flatness index are evidently under sexual selection exclusively, since they are correlated with our measure of dimorphism but not with distance from Africa or absolute latitude.

Thermoregulatory adaptations seem implicated in the shape of the back of the head (posterior craniofacial index), the relative width of the nose (nasal index), and the nasodacryal and simotic indices. Neutral drift is implicated too. It controls the infraglabellar index alone, and the simotic and the posterior indices jointly with absolute latitude. 

Recall from Table 1 that both the upperfacial index and the frontal flatness index are dimorphic in only 5 demes each in the 30-deme Howells Craniometric Dataset. Going back to the interpretation of the Hanihara (2000) data, the “Asian” frontal flatness business is thus revealed as a derived character that owes to the definitely cultural phenomenon of sexual selection (tStat=-3.56) in some populations; precisely which ones we cannot say, because the Hanihara (2000) dataset contains data on only male crania, so we cannot compute dimorphism metrics. Dimorphism controls half of the dozen characters examined in the present study; distance from Africa and absolute latitude control a third each. In the Venn diagram of control, the sole character in the intersection of all three is the nasal index, the relative width of the nose.

What the plot below suggests is that craniofacial flatness is under very strong sexual selection, and that this is what is going on in some Asian demes as captured by Figure 2. So we have causal vectors pointing “the wrong way” (in the reductionist paradigm), from Society to Nature. Surely, sexual selection in our species is a cultural phenomenon. Sexual selection is lived by situated populations. It is reenacted through the articulation and disarticulation of stable desiderata in the eye of the beholders. The disciplinary force of cultural selection acts directly on the sexual economy by structuring the eye of the beholder at the level of the situated populations. Discourse and Reality cannot but cointegrate.

[Figure: frontal flatness index against dimorphism. (FrontalFlatness.png)]

Finally, Table 3 presents our apportionment of craniofacial variation in terms of our three predictors.

Table 3. Apportionment of craniofacial variation.
Character | Neutral drift | Bioclimatic adaptation | Sexual selection
Gnathic Index | 0.1% | 13.5% | 0.3%
Posterior Craniofacial Index | 35.2% | 12.0% | 0.9%
Transverse Craniofacial Index | 5.3% | 3.4% | 19.6%
Upperfacial Index | 0.0% | 7.2% | 15.5%
Nasal Index | 17.9% | 17.4% | 12.3%
Orbital Index | 13.5% | 1.9% | 7.1%
Frontal Flatness Index | 5.1% | 2.9% | 33.7%
Orbital Flatness Index | 6.2% | 0.0% | 11.9%
Maxillary Index | 8.6% | 0.3% | 17.6%
Nasodacryal Index | 1.9% | 18.7% | 19.9%
Simotic Index | 19.8% | 21.6% | 7.6%
Infraglabellar Index | 17.9% | 1.2% | 0.0%
Source: Howells Craniometric Dataset, author’s computations. Estimates in bold are significant at 5 percent. 

What is clear from Table 3 is that sexual selection is a potent force shaping human craniofacial morphology. The upper face, the nose, and the flatness of the face are all under sexual selection in some demes. The important thing to remember is that sexual selection, like neutral drift and climatic adaptation, works at the level of situated populations. What this means is that one cannot infer population history directly from phenotypic characters; one must control for sexual selection and climatic adaptation. To wit, if population A looks more similar to population B than to C, it may not be because C split from A and B first and then A split from B. It may be that A and C split away from B first, but A and B acquired the same characters as a result of adaptation to the macroclimate (as happened with skin pigmentation) or as a result of the same character coming under sexual selection in A and B but not C. Put another way, natural selection (climatic, sexual, or whatever) confounds the population history signal in phenotypic characters. So we must be careful.


Postscript. It seems that my A, B, C example was not clear. I am banging on this drum because this issue has been ignored for more than a century now; first, as a result of the mental rigidities associated with essentialist racial taxonomy; and later, as a result of DNA supremacism, whereby scholars drank the Kool-Aid and resolved to explain the heavens and everything under them through molecular anthropology. For a century now, physical anthropologists have reasoned back from systematic variation in morphology and genetics to phylogeny (who split when from whom — the tree of descent). This approach was correctly used to infer that Americans were closer to Asians than to Europeans, so that the American-Asian split happened after the European-Asian split. But backing out population history from phenotypic or genotypic distance metrics does not always work, above all because the population history signal is more often than not confounded by natural selection. So if you are using phenotypic characters or genomic sequences to make inferences about population history, you had better be careful. In order to make kosher inferences you have to control for characters that are under selection. In other words, what matters is not overall genetic distance between populations but neutral distance — that’s what contains information about population history and phylogeny. We have seen that dimorphism gives us a good handle on sexual selection in craniofacial morphology.


Progressive Tax Proposals Are Quite Modest

Photo credit: Wall Street Journal.

Saez and Zucman estimate conservatively that Warren’s wealth tax proposal would raise $2.75 trillion over the next decade. If you redo the same calculation without bending over backward, the expected haul is $3.1 trillion. The arithmetic is straightforward. There are 78,000 households with net worth greater than $50m (the top 0.06 percent of 130m US households) and 900 households with net worth greater than $1 billion. Total taxable wealth above these thresholds is $10.5 trillion and $3.1 trillion respectively (cf. total wealth of American households of $94 trillion). 2 percent of the former is $210 billion and 1 percent of the latter is $31 billion, for an annual haul of $241 billion, which under the CBO’s baseline projections is multiplied by a factor of 13 to get the tax revenue over the next decade (a nifty rule of thumb): 13 × $241 billion = $3,133 billion.
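The whole calculation fits in a few lines:

```python
# Back-of-the-envelope revenue from the Warren wealth tax.
wealth_above_50m = 10.5e12  # taxable wealth above the $50m threshold
wealth_above_1b = 3.1e12    # taxable wealth above the $1bn threshold

annual = 0.02 * wealth_above_50m + 0.01 * wealth_above_1b
decade = 13 * annual  # CBO rule of thumb: ~13x annual revenue per decade
print(f"${annual / 1e9:.0f}bn per year, ${decade / 1e9:,.0f}bn per decade")
# -> $241bn per year, $3,133bn per decade
```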

So the Piketty-Warren wealth tax will, if enforced with Nordic efficiency, raise $3 trillion dollars from the superrich. How onerous will this burden be on them? Will it be punitive or bearable?

The total annual tax burden on the 0.1 percent richest American households is 3.2 percent of their wealth, Saez and Zucman note, compared to 7.2 percent for the bottom 99 percent. With the wealth tax, the tax burden of the superrich would rise to 4.3 percent of their wealth. By this measure, they would still be better off than most Americans.

Moreover, the superrich also get higher returns than the merely rich, as they are likely to do even after the wealth tax:

The estimates by Saez and Zucman show that, from 1980 to 2016, real wealth of the top 0.1% has grown at 5.3% per year on average, which is 2.8 points above the average real wealth growth of 2.5% per year. Average real wealth of the Forbes 400 has grown even faster at 7% per year, 4.5 points above the average. The historical gap in growth rates of top wealth vs. average wealth is larger than the proposed wealth tax. Therefore, even with the wealth tax, it is most likely that top wealth would continue to grow at least as fast as the average.


So the burden of the Piketty-Warren wealth tax on the superrich is modest. It would still leave them in a better position than those less well placed than themselves. Adding AOC’s 70 percent tax on incomes above $10m would raise some $72 billion in 2019 and an estimated $936 billion over the next decade. With both these taxes, the total burden on the superrich in the aggregate would be 4.6 percent of their wealth.

As a third leg of the triad, we could add Obama’s 2016 proposal to raise the capital-gains tax to the same level as that on earned income. That would raise $240 billion over the next ten years. With the full triad, the burden on the superrich would still be a relatively modest 4.9 percent of their wealth.

The modesty of the proposals is not all bad. It enhances the legitimacy of the taxes and makes their enforcement easier. Saez and Zucman assume that the superrich will be able to successfully hide 15 percent of their wealth from the taxman. This average is driven by the Swiss outlier. As they explain in footnote 2, the tax avoidance rates in Sweden and Denmark were dramatically lower: just 0.5 percent. Moreover, the arm of American law is the longest of them all. Compliance should not be a problem as long as the political will exists. As Nicholas Mulder has suggested, the abuse of offshore secrecy jurisdictions can be tackled by redirecting the vast apparatus of sanctions enforcement to crack down on tax avoidance.


Who is the Type Specimen for Homo sapiens?

Excerpt from Rob DeSalle and Ian Tattersall’s excellent Troublesome Science: The Misuse of Genetics in Understanding Race (2018).


Linnaeus was entirely correct in assuming we should know who we are (and, even more importantly, who we want to reproduce with). But complications have arisen anyway. In the interests of practicality, modern taxonomists have amplified the concept of the type specimen far beyond that of the holotypes from which original descriptions are drawn. “Allotypes,” “neotypes,” “syntypes,” and “lectotypes,” among many others, can be invoked when a researcher identifying a type encounters some procedural hitch. For instance, when a holotype is lost (which happens occasionally, because of bombing, bad curation of a collection, or other causes), there are rules to govern its replacement by a lectotype, which will now be the “go-to” specimen.

Since Linnaeus didn’t designate one, there is and never was a holotype for Homo sapiens. Complicating matters is the fact that in the definitive tenth edition of his great work Linnaeus also described the six variants of Homo sapiens we mentioned earlier—Ferus, Americanus, Europaeus, Asiaticus, Afer (African), and Monstrosus. The first and last can be discarded as valid subspecies names, as they do not describe real specimens (Ferus was used to designate feral children, and Monstrosus was used to denote mythical people with strange morphologies.)

Under current taxonomic rules, there is one subspecies missing here, as a result of applying the principle of coordination (article 43 of the ICZN). This states that one subspecies of any subdivided species must bear the species name. What this means is that, if we are to recognize subspecies within Homo sapiens, we need to add the subspecies Homo sapiens sapiens—logically the subspecies Linnaeus had in mind when he wrote his description—to the other three.

In 1959 William Stearns, a taxonomist writing about Linnaeus’s legacy, suggested that Linnaeus himself should be the type specimen for Homo sapiens sapiens (the name that under the new rules automatically replaces H. s. europaeus). And, given Linnaeus’s estimable opinion of himself, it is certainly not out of the question that he had himself in mind when he gave our species its name. Since we don’t know for sure who he had in mind as exemplar, though, he would have to be designated as a lectotype to satisfy Stearns’s proposal. Of course, this would not help very much, because Linnaeus currently reposes in a churchyard in Uppsala, and it is not really practical for any contemporary taxonomist actually to use him as a standard of comparison.

Availability is a key consideration, and technically it became a factor some thirty years after Stearns made his original suggestion, when a group of researchers hoping to honor Edward Drinker Cope (and apparently unaware of Stearns’s proposal) suggested that Cope be designated the type specimen for Homo sapiens. Cope was a famous paleontologist who had willed his bones to science in the hope that he would be designated the type specimen (again, technically the lectotype) of Homo sapiens. And it certainly seems that Cope himself had wanted that: a possibly apocryphal story goes that a visitor to Cope’s laboratory shortly after the latter’s death in 1897 found his long-time technician weeping in front of a boiling preparation vat as Cope’s head periodically bobbed to the surface.

For anyone who cared, having two pretenders to the status of type specimen for Homo sapiens created a legally awkward situation that demanded resolution. Fortunately, the ICZN was up to the job. One rule states that a neotype can be assigned to a specimen if the lectotype is lost; and this might have given Cope’s bones a fighting chance as a neotype. But while Linnaeus is not exactly available, he is not exactly lost, for we know where he is. And his claim is reinforced by a couple of other provisions of the ICZN. One of them is that article 74.1 of the code happens to require that any lectotype must be among the specimens examined by the person who named the species. Linnaeus was long dead when Cope was born in 1840, so Cope could not have been “examined” by the namer. Also, the key article (74.1.1) clearly states the principle of priority under which validly proposed earlier names trump names put forward later. So unless Linnaeus’s name for our species is somehow deemed invalid—which, even in an uncertain world, is not going to happen—Cope’s claim doesn’t stand a chance. Meanwhile, Homo sapiens still lacks a usable type specimen.

This digression nicely illustrates the fact that, while systematists will always legitimately disagree, nomenclature is underpinned by an objective set of rules that we must apply in retrospect to Linnaeus’s four nonimaginary variants of Homo sapiens. The Swedish savant used highly typological reasoning to come up with what we would have to call subspecies, although he probably thought of them as races. And typologically, Linnaeus clearly felt justified in designating his four “real” subspecies based on geography and the skin color of the people involved. He also added some behavioral descriptions that he felt were diagnostic. These were pretty much in line with common European suppositions of the eighteenth century, and paramount were the ways in which the various groups controlled their behaviors. The “rufus” (red) Homo sapiens americanus from the New World used custom to govern its behavior; the “albus” (white) europaeus was governed by laws; the “luridus” (sallow) asiaticus from Asia was opinionated; and the “niger” (black) afer from Africa was impulsive. This blatantly racist and typological view of humans was hardly unusual for its time, and it remains significant as one of the first attempts to systematize the differences between human geographic groups.

In technical terms, Linnaeus’s trinomina stood until 1825, when Jean-Baptiste Bory de St. Vincent decided to elevate the subspecies names to species level and to add a raft of other regional populations to the genus Homo as separate species. But—to cut short a very long story that you can read about in our Race? Debunking a Scientific Myth—later experts have synonymized all of these with Homo sapiens, so that today no living Homo sapiens subspecies are recognized. Even Homo sapiens sapiens is entirely superfluous, since we have nothing to distinguish it from. Today, then, while we must revere Linnaeus for his achievements as a taxonomist, we must also admit that his splitting of the species Homo sapiens into geographic subspecies was the start of a hugely problematic—and hard to reverse or stamp out—trend toward the formal classification of human individuals and populations. Our colleague Jon Marks has suggested that this desire for racial classification has had an even greater impact on our modern life than the binominal system itself.


The Superstar Firm Dilemma

[Figure: The Economist’s estimate of the global pool of supernormal profits.]

The Economist recently estimated the global pool of supernormal profits (profits in excess of an assumed 12 percent hurdle rate) at $660 billion. This is essentially a transatlantic phenomenon; an astounding 98 percent of these excess profits accrue to a handful of American and European firms. In the United States, a third is cornered by firms in industries with legal and regulatory moats, such as healthcare, military contracting, and other non-tradables. But two-thirds go to firms in industries with no such artificial barriers to entry (tech and other tradables) that are more or less fully exposed to competitors from across the world. Why aren’t these rents bid away by the entry of hungrier rivals?

A major part of the answer is network externalities that turn the logic of market competition on its head. To wit, if everyone is on Facebook you have to be on Facebook too if you want to connect with others. This applies to everyone on Facebook. Another social network trying to break through Facebook’s moat, even one unambiguously superior to the surveillance platform, faces the insurmountable hurdle of getting everyone to coordinate their move. The result is a natural monopoly.

Yet, that can’t be the whole story. Most firms that exhibit persistent rents do not operate in an industry with decisive network externalities or regulatory moats. Amazon is an attractive platform for people trying to buy and sell stuff. And this part of Amazon’s business indeed exhibits network externalities. But that business is not a cash cow for Amazon; third-party sales account for only 17 percent of Amazon’s top line. Instead, Amazon makes most of its money by selling stuff directly to customers. Jeff Bezos is now the richest man in the world with an estimated net worth in excess of a hundred billion dollars.

The secret of Amazon’s success is that it is the world leader in supply chain management. Three billion products are shipped worldwide by Amazon every month. The Balance calls the firm’s operations the most efficient in the world:

The combination of sophisticated information technology, an extensive network of warehouses, multi-tier inventory management and excellent transportation makes Amazon’s supply chain the most efficient among all the major companies in the world.

Not only is Amazon the front-runner, it has been innovating faster than the laggards. From the same source:

The rate of Amazon’s innovations in supply chain management has been mesmerizing. The rate of change has been incredible, making it difficult for lower volume competitors to keep up.

Amazon is an example of superstar firms that corner markets not so much due to structural moats but largely due to their technological leadership. An even more clear-cut case is that of Apple. The firm’s virtual monopoly in high-end personal computing is not due to any moat. It is rather due to the fact that hungry global rivals have failed to produce superior personal computing machines. This may be true more generally.

Even firms selling to the state that could be described as enjoying a moat may also derive much of their rents from technical leadership; eg, aircraft manufacturers. Unlike Facebook, Google does not enjoy significant network externalities. Despite Facebook’s impregnable moat, the mass surveillance business is a near-duopoly. Why hasn’t Facebook crushed Google in digital ad-spend? The simple answer is that Google’s position in the surveillance market is built on its dominant position in Search.

Firms at the cutting-edge of know-how are containers of situated communities of skilled practice. Google’s leadership in machine learning/AI came organically out of the nature of Search. Financial pressure after the dot com bust explains why Google wanted to pioneer surveillance capitalism. It does not explain why it was able to do so. I wager that Google was in a position to innovate because Search served as a generative research agenda for the situated community at Google. Faced with the mighty ocean of surveillance data, they had to invent probes to interrogate it. That’s how Google came to invent Big Data. The firm’s mission ‘to organize the world’s information’ came directly out of Search. As did Google’s partnership with the intelligence community and Obama.

Market competition can be expected to reward the leaders handsomely. But the surplus accumulates automatically as long as the superiority lasts. This is not inconsistent with the interest of the consumer qua consumer, as long as efficiency gains are passed on in the form of a bigger consumer surplus. But for us as subjects of surveillance capitalism, that is quite a different matter.

The main problem with superstar firms is not that they slow innovation or harm consumers — they don’t. It is that they drive wage inequality. Furman and Orszag have shown that inter-firm earnings dispersion rather than within-firm dispersion has driven income inequality. Moreover, for all the talk of global firms, the situated communities of skilled practice for which these firms serve as containers are located firmly in what a recent McKinsey report calls the fifty global superstar cities. That concentration of talent and income is what is driving regional polarization in the United States. The analysts find ‘a higher churn rate among superstar firms compared with cities, indicating higher levels of persistence among superstar cities.’

[Figure: the power curve of economic profit across firms.]

The report notes that the distribution of economic profit obeys a power curve, with the top 10 percent of firms capturing 80 percent of the profits accruing to all firms with revenues above a billion dollars, and the top 1 percent capturing 36 percent. Not too long ago it was not quite so polarized:

Over the past 20 years, the gap has widened between superstar firms and median firms, and also between the bottom 10 percent and median firms. Today’s superstar firms have 1.6 times more economic profit on average than superstar firms 20 years ago. … The growth of economic profit at the top end of the distribution is thus mirrored at the bottom end by growing and increasingly persistent economic losses, suggesting that in addition to firm specific dynamics, a broader macroeconomic dynamic may be at work.

Superstar firms are concentrated in superstar sectors. They find that the shift in global profits to superstar sectors amounted to $3 trillion across the G20 in 2017 alone.

We find that 70 percent of gains in gross value added and gross operating surplus have accrued to establishments in just a handful of sectors over the past 20 years. This is in contrast to previous decades, in which gains were spread over a wider range of sectors. … [These sectors] include financial services, professional services, real estate, and two smaller (in gross value-added and gross operating-surplus terms) but rapidly gaining sectors: pharmaceuticals and medical products, and internet, media, and software.

These sectors not only have ‘fewer fixed capital and labor inputs, more intangible inputs, and higher levels of digital adoption’; they are also ‘two to three times more skill-intensive’ and have ‘relatively higher R&D intensity and lower capital and labor intensity’. But ‘the higher returns in superstar sectors accrue more to corporate surplus rather than labor’ and ‘their gains are more geographically concentrated compared with sectors in relative decline. For instance, gains to internet, media, and software activities are captured by just 10 percent of US counties, which account for 90 percent of GDP in that sector.’

City size is known to obey a power law. But could increasing concentration in superstar firms and superstar sectors be driving a concentration of talent and money in superstar cities?

The 50 cities account for 8 percent of global population, 21 percent of world GDP, 37 percent of urban high-income households, and 45 percent of headquarters of firms with more than $1 billion in annual revenue. The average GDP per capita in these cities is 45 percent higher than that of peers in the same region and income group, and the gap has grown over the past decade.

The analysts go on to speculate that the three — superstar firms, sectors, and cities — may be linked and mutually reinforcing:

We find linkages between firms, sectors, and cities that may be reinforcing superstar status and that raise the question of whether a “superstar ecosystem” exists. For example, superstar sectors generate surplus mostly to corporations rather than to labor, driving a geographically concentrated wealth effect in superstar cities with a disproportionate share of asset management activity and high-income-household investors. Labor gains from superstar sectors are also concentrated in narrow geographic footprints within countries, often in superstar cities and accrue mostly to high-skill workers.

So superstar firms, even if their market positions are well-earned and consistent with consumer and geoeconomic interests, are responsible for increasing vertical and regional polarization of wealth and income. Seeking a more equitable distribution of money may demand a more vigorous antitrust regime. But the strategy of breaking them up or otherwise pulling them down is inconsistent with the geopolitical imperative to foster innovation. Helping laggards catch up with firms at the technological frontier may look like sound industrial policy. But championing the laggards may not be a sound geopolitical strategy against rivals championing their leaders. Put bluntly, superstar firms are the winners of the global economy; you want as many as possible in your jurisdiction and you want them to innovate faster than anyone else. That’s the reality facing mayors, governors, and presidents. Whence the red carpets and the special offers.

A more promising solution to the challenge of wage polarization and geoeconomic strategy is highly-skilled immigration. US immigration policy should serve the national interest. The current regime of spinning the wheel blindly between all applicants is absurd. It has resulted in the capture of the annual pool by tech firms who only care about head count. They can afford to make 1,000 applications if they want 100 software engineers. This is not true of any firm wanting to hire a particular skilled individual. It would be better to ration the mandated number of H-1Bs by compensation — that would automatically tend to temper high incomes. A more focused strategy would be to calibrate migration by targeting superstar sectors. Where incomes are offensively high, eg in tech and finance, firms should be allowed to secure talent from the uttermost ends of the earth. It would thus serve as a market mechanism to temper wage premia. And if the goal is to seriously compete with China in the long run, the United States may have to consider substantially expanding skilled immigration well beyond superstar sectors.
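The rationing mechanism itself is simple enough to state in code; a minimal sketch with made-up numbers:

```python
# Ration a capped number of H-1B visas by offered compensation rather
# than by lottery: rank applications by salary and award from the top.
def award_visas(applications, cap):
    """applications: iterable of (applicant_id, offered_salary) pairs."""
    ranked = sorted(applications, key=lambda a: a[1], reverse=True)
    return [applicant_id for applicant_id, _ in ranked[:cap]]

apps = [("A", 95_000), ("B", 240_000), ("C", 120_000), ("D", 180_000)]
print(award_visas(apps, cap=2))  # ['B', 'D']
```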

We need to think more seriously about the consequences of our policy designs. The solution I propose is attractive in that it exploits the market mechanism to secure an important social democratic goal. Whether it can work should be explored in finer detail. But I believe it deserves serious consideration.


World Domination is Beyond Google’s Pay Grade


Shoshana Zuboff, ‘one of our most prescient and profound thinkers on the rise of the digital,’ Andrew Keen informs us on the blurb, has rendered ‘a book of immense ambition and erudition’. Naomi Klein thinks ‘everyone should read this book as an act of digital self-defense’. Joseph Turow concludes that from now on, ‘all serious writing on the internet and society will have to take into account The Age of Surveillance Capitalism.’  Robert B. Reich’s blurb calls it ‘a masterpiece of rare conceptual daring’, wherein she ‘demonstrates the unprecedented challenges to human autonomy, social solidarity, and democracy perpetrated by this rogue capitalism.’

It is hard not to concur with them all. Zuboff has identified a distinct regime of accumulation that emerged from the wreckage of the Internet stock bubble when first Google and then Facebook articulated the lucrative commodification of mass surveillance data. In doing so they have enclosed a vast new frontier without permission. Zuboff goes so far as to declare that surveillance data (“behavioral surplus” is her neologism) is a fourth fictional commodity after Polanyi’s Land, Labor and Money (p. 100):

Today’s owners of surveillance capital have declared a fourth fictional commodity expropriated from the experiential realities of human beings whose bodies, thoughts, and feelings are as virgin and blameless as nature’s once-plentiful meadows and forests before they fell to the market dynamic. In this new logic, human experience is subjugated to surveillance capitalism’s market mechanisms and reborn as “behavior.” These behaviors are rendered into data, ready to take their place in a numberless queue that feeds the machines for fabrication into predictions and eventual exchange in the new behavioral futures markets.… In this future we are exiles from our own behavior, denied access to or control over knowledge derived from its dispossession by others for others. Knowledge, authority, and power rest with surveillance capital, for which we are merely “human natural resources.”

Her narrative is at its most powerful when later in the book she ties Facebook’s addiction machine to Natasha Dow Schüll’s masterpiece on machine gambling in Las Vegas. After documenting the scale of addiction to the social network and the corporation’s efforts towards that instrumental goal, she notes chillingly (p. 466),

All those outlays of genius and money are devoted to this one goal of keeping users, especially young users, plastered to the social mirror like bugs on the windshield.


But Zuboff’s account suffers from two flaws. The first, non-fatal flaw is that she drops the ball on a number of important threads that ought to have been more vigorously pursued. We get bare glimpses of the ecosystem of surveillance capitalism and great power intelligence agencies on the inside of the one-way mirror. Ditto the Obama-Google revolving door. The second, fatal flaw is that she never interrogates the very mechanism of accumulation she identifies with such clarity. Zuboff brackets such potentially fruitful interrogation as the technical discourse of the high priests of machine learning—presumably beyond her and the reader’s comprehension. That is very unfortunate. So on the prospects of solving the problem of causal inference from mass surveillance data, we don’t hear a word. She takes the 52 data scientists she interviewed at their word when they echo Silicon Valley’s visions of absolute power.

The fundamental problem with Zuboff’s account then is that she actually buys Skinner’s presumption that sufficiently high resolution surveillance data can and will allow those on the right side of the panopticon to predict the behavior of those on the wrong side with near-perfect fidelity. In reality, whether the problem, and it is a scientific problem, is even solvable in principle is not known. Indeed, there are good reasons to believe that it is not.

In fact, the same kind of problem plagues machine learning, molecular anthropology, and brain science (see Marsh’s sober and excellent take). In all three fields, predicting human behavior from fine-scale Big Data (respectively from mass surveillance, DNA and brain activity) has defeated all attempts. What you have in all cases is a massive torrent of data with very little by way of maps.

Imagine that you could observe the life histories of all fish in the world ocean right down to the last detail, and suppose that you had access to free and unbounded computational resources (an impossibility). Even so, there is no reason to believe that there exist algorithms, discoverable by machines or by human geniuses, that would allow you to predict with certainty which fish in a given school the shark will catch. All that may be (and is known to be) achievable is probabilistic statements (say, relating the probability of being caught to the position of the fish in the school). An arbitrary amount of data cannot solve the problem of coming up with a theory of fish behavior, even if such a theory were to exist—not that we have any good reason to believe that it does.

As David Deutsch has argued, all scientific problems require specific solutions whose existence cannot simply be assumed. Even assuming that the problems are solvable in principle, we cannot know how many years, decades, centuries, or millennia it will take to solve them. Radical uncertainty is the calling card of original scientific research. What all this means is that it is entirely possible, nay likely, that almost all hopes for predicting human behavior with high fidelity projected onto surveillance data, DNA, brain scans, and so on, may very well be dashed. Simply put, Google cannot predict the next sentence I will write — even if it trains its algorithms on all texts ever authored by the human race including myself — because it does not have a theory of my mind, something that cannot be reverse engineered from surveillance data.

The underlying reason for sober pessimism on all such questions is well understood. Put simply, the reductionist paradigm — and all such hopes are based on it — is known to be of limited value in light of the overwhelming evidence for emergent complexity. Higher order open systems, where all the interesting causal vectors are located, defeat the search for a general theory in detail. That is, each higher level of analysis has its own properties and internal logic that simply cannot be identified at lower levels of analysis.  And high fidelity prediction requires identifying the correct causal structure at all relevant levels of analysis simultaneously. But we have no reason to believe that we can, or ever will be able to, identify all relevant levels of analysis; much less correctly identify their causal structure.

So, no, Google is not on the cusp of world domination. Google insiders’ ‘physics of clicks’ is best seen as intellectual bluster, not the harbinger of an automated society as Zuboff would have us believe.


Zuboff poses three fundamental questions throughout the book — Who knows? Who decides? Who decides who decides?

Who knows? Those on the right side of the one-way mirror have a clear asymmetric information advantage over those on the wrong side. But how much do they actually “know”? What they have is an ocean’s worth of surveillance data that they can probe with pattern recognition algorithms. This gives them some handle on broad patterns (ie they can build models and test them against the data) and they can examine the life world of “a person of interest” with a fine-toothed comb. But they can neither predict the future evolution of the broad patterns with much more certainty than could already be managed in the world of small data, nor can they predict the future behavior of the person of interest with greater confidence than a serious intelligence analyst without access to full-spectrum surveillance. So yes, we must recognize these new technologies of power and their potential for abuse. But let’s not pretend that they are all-powerful. They are not: data science is just fancy regression, and ubiquitous surveillance is just fancy wire-tapping.

Who decides? Surveillance capitalists have laid claim to ownership of our vast digitally visible behavior. Just because they discovered the continent does not mean that they get to keep it. I second Zuboff’s call to make a sustained political effort to reverse the enclosure. What has been illegitimately dispossessed can indeed be repossessed. What we need is a big push towards public oversight; one that goes beyond questions of monopoly and privacy and aims to essentially reverse the commodification of our behavior and ban automatic compliance (Zuboff’s “means of behavior modification”).

Who decides who decides? We can again agree with Zuboff that the discourse of “inevitability” is a transparent attempt at depoliticization. As she shows, other paths were indeed imagined; as when Google unveiled plans for Google Home where the householder was to have complete control over access to the data generated by connected devices. A different road was taken where your devices (“the Internet of Things”) are designed to generate surveillance data to be sold to those with a pecuniary interest in your future behavior. As her account shows, these decisions were taken unilaterally by technology firms under financial market pressure. They did not ask legal if it was compliant with the law and regulation. They just went ahead and did it. They got away with it, she argues, by moving at breakneck speed and under the radar; helped by the erosion of oversight institutions (especially the courts) under the neoliberal onslaught; and helped along too by the convergence of interests with the intelligence community after 9/11. It is now clear that this is politico-ethically unacceptable. Again, the onus is on us to mobilize politically to take back control of our digital footprint.

Zuboff mentions Andrew Tutt’s suggestion for an “FDA for algorithms” in passing. That is perhaps the single most constructive solution on offer and it is surprising to find her never returning to it. What is required is the incubation of a situated community of skilled software engineers and data scientists who can exercise effective oversight over their commercial counterparts in the public interest. An “FDA for algorithms” is precisely the right model for such an undertaking.

The Age of Surveillance Capitalism is compulsory reading. Zuboff has contributed greatly to our understanding of this rogue capitalism above all in identifying this specific regime of accumulation. So it is almost physically painful to discover that she has herself fallen for Silicon Valley’s Kool-Aid.