For the Grain: A Deep History of the Earliest State

James C. Scott’s Against the Grain: A Deep History of the Earliest States is a spirited and sophisticated attack on settled society, agriculture, and state formations. Scott’s wager is that all three developments were welfare-reducing for the bulk of the populace. Sedentism, the concentration between 10,000 and 5,000 BCE of hitherto dispersed people, animals, plants, insects, and microbes into what he calls “Late-Neolithic multi-species resettlement camps,” was an epidemiological disaster (for the higher species). The domesticated beasts and no less domesticated men of these camps had higher mortality rates and were of lower stature (perhaps they even had smaller brains) than their wandering and dispersed counterparts, who continued to occupy most of the landmass of the Earth. Fixed-field farming, which arose some two thousand years after sedentism, was back-breaking work that generalist hunter-gatherers could not be persuaded to undertake without, so to speak, “a pistol at their temple.”[1] And the grain states that emerged much later—the earliest around 3,500 BCE in Babylonia (southern Mesopotamia)—were rapacious parasites, concerned above all with extracting as much surplus as possible from their hosts through taxation and forced labor.

The paleo-anarchist premises remain implicit throughout as Scott marshals evidence and argument against his three chosen targets. But in a blatant sleight of hand, all of it is put on the state’s balance sheet. The problem is not superficial but deep-seated: he conflates the three.

A foundational question underlying state formation is how we (Homo sapiens sapiens) came to live amid the unprecedented concentrations of domesticated plants, animals, and people that characterize states.[2]

What Scott has written is a decentered counter-history—against civilization, against the state, against settled society, against cities, against villages, against farmers; and for the barbarian who lives in the shadow of settled society (on the outside, looking in) and the noble savage who lives without, untouched by civilization. Given the hegemony of the state discourse, in which the debit side of settled society’s balance sheet remains empty, one can understand the partisanship and one-sided marshaling of evidence of this counter-narrative. But he takes it too far when he cherry-picks the evidence he marshals.

The world’s population in 10,000 BCE, according to one careful estimate, was roughly 4 million. A full five thousand years later, in 5,000 BCE, it had risen only to 5 million. This hardly represents a population explosion, despite the civilizational achievements of the Neolithic revolution: sedentism and agriculture. Over the subsequent five thousand years, by contrast, world population would grow twentyfold, to more than 100 million.[3]
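These figures can be made concrete with a back-of-the-envelope calculation. The sketch below (in Python, using only the population estimates quoted above) computes the compound annual growth rates implied by the two five-thousand-year spans:

```python
# Implied compound annual growth rates from the estimates quoted above:
# ~4M in 10,000 BCE, ~5M in 5,000 BCE, ~100M by the turn of the era.
def annual_growth_rate(start, end, years):
    """Compound annual growth rate implied by start and end populations."""
    return (end / start) ** (1 / years) - 1

r1 = annual_growth_rate(4e6, 5e6, 5000)    # 10,000 BCE to 5,000 BCE
r2 = annual_growth_rate(5e6, 100e6, 5000)  # 5,000 BCE to 1 CE

print(f"first span:  {r1:.4%} per year")
print(f"second span: {r2:.4%} per year")
```

Even the “explosion” works out to only about 0.06 percent a year, but that is more than a dozen times the rate of the preceding five millennia.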

Why did population stagnate between 10,000 and 5,000 BCE? Scott points to the epidemiological consequences of the late-Neolithic multi-species resettlement camps, making this “the most lethal period in human history.”[4] But that leaves the puzzle of the population explosion in the subsequent period, 5,000 BCE to 1 CE, which witnessed the rise of complex societies all based on “the Neolithic grain complex.”[5]

The short answer, I believe, is sedentism itself. Despite general ill health and high infant and maternal mortality vis-à-vis hunters and gatherers, it turns out that sedentary agriculturalists also had unprecedentedly high rates of reproduction—enough to more than compensate for the also unprecedentedly high rates of mortality.[6]

But that makes no sense whatsoever. Sedentism cannot be marshaled to explain both the stagnation of the population in the first five thousand years after the Ice Age and its explosion in the next five thousand years. The answer is staring him in the face but he doesn’t want to see it.

Population exploded precisely because the carrying capacity of the planet increased considerably with the adoption at scale of fixed-field cereal monoculture and the grain state that survived by taxing it. Grains could feed many more mouths than other subsistence strategies. Populations could expand until all nearby arable land was brought under cultivation, and beyond that if yields could be raised further, subject only to ecological constraints—resource depletion, salinization, soil erosion, and so on. Population growth was no doubt also enhanced by state formations that admitted dramatically greater security, order, and capacity for social action, as well as by interregional trade, whose gains from economic specialization pushed the production possibility frontier further out. As we shall see, this took concrete form in the very first civilization.

Not only is high civilization conspicuous by its absence in Against the Grain, so is the state itself. For a book that aims to take the state down a notch, it pays precious little attention to research on state formation. Scott recognizes that what needs to be explained is the emergence of pristine states.

My focus is almost entirely on Mesopotamia, and in particular the “southern alluvium” south of contemporary Basra. The reason for this focus is that this area between the Tigris and Euphrates (Sumer) was the heartland of the first “pristine” states in the world.…By far most of the evidence I bring to bear concerns the period from 4,000 until 2,000 BCE, as it is both the key period of state formation and the focus of the bulk of the existing scholarship.[7]

Despite Scott’s implicit claim to have mastered the existing scholarship, his latest references on the theory of primary state formation date from 1970. Since then there has been great progress in our understanding of primary state formation. In what follows we shall trace the achievements of the settlers all the way from the origins of settled society to the rise of the first grain state.

The Deep History of Permanent Settlement

Until the end of the Ice Age, around 10,000 BCE, all mankind lived in small mobile bands of hunter-gatherers. Sustenance was secured by hunting, fishing, and gathering plants and fruit. People lived in temporary camps, relocating frequently in pursuit of great herds of animals on the move, migrating flocks of birds and schools of fish, ripening fruit and wild cereals. The first extended periods of residence appear soon after the end of the Ice Age. They are located almost exclusively near sites of strongly differentiated terrain that admitted access to multiple sources of sustenance.

The difference between ecological units is, as a rule, shown in differences between land formations, the types of wild animals inhabiting an area, and the types of plants that grow there, with their varying times of ripening. A camp that sprang up on the borderline between such ecological units would have made it possible to use all the resources the different areas had to offer, either simultaneously or consecutively.…It is surely no accident that traces of early permanent or temporary human settlements in the Near East are found almost exclusively in areas with differentiated structures in the landscape, and there at sites from which there is the easiest possible access to as many different ecological units as possible.[8]

The early Neolithic semi-permanent and permanent settlements were necessarily isolated from each other because the hinterland necessary to subsist by food gathering was very large. The tyranny of distance forbade frequent direct contact between the settlements. We shall see how settlement geography played a decisive role in the emergence of complex society.

Around favored sites where the food supply was especially abundant, diverse, and secure, settlements finally acquired a degree of permanence. It is here that the excruciatingly slow process of plant and animal domestication could advance steadily. Grains that had hitherto been picked in the wild began to be planted, and we finally begin to see evidence of food production in the remains of grain and food processing. Herding of animals also began to be practiced early on, as is evident from the distinct concentrations of sex and age among the slaughtered animals of a given type. These processes of domestication took thousands of years to master. Food production and animal husbandry provided a fragile and insecure food supply; meanwhile, hunting and foraging continued to supply the bulk of the settlers’ sustenance.

For a very long period of time, therefore, it was essential to have multiple means of ensuring subsistence, and this brought in its train a mixed economy in which the shares of food production—whether by cultivation or animal husbandry—and of the procurement of food through hunting and gathering could shift according to the external circumstances prevailing. If worse came to worse, they could always go back to hunting, fishing, or gathering.[9]

What is clear is that by 6,000 BCE, all major cereals and legumes were being planted at a modest scale, and goats, sheep, pigs, and cattle had been domesticated. The full “Neolithic package” was coming into place at the favored sites of permanent settlement.

Settlement Geography and the Natural History of Social Complexity

The basic idea of settlement geography is straightforward. The size distribution and spatial pattern of settlements is the archaeological indicator of social complexity par excellence. Settlement patterns encode information about the state of technology, the division of labor, social organization, and economic and political interaction within the settlement system. The basic typology is (1) isolated settlements; (2) simple (two-tier) settlement systems; (3) three-tier systems; and (4) four-tier systems. When settlements are isolated from one another, there is little everyday contact between them—they do not constitute a system. Their isolation is the result of food production providing only a small fraction of sustenance, and the consequent need for a large hinterland. The division of labor in isolated settlements is extremely limited, since specialist occupations require a large market. This is the pattern we find in the Near East circa 6,000 BCE.

With increasing mastery of the art of cultivation, the area necessary to feed a settlement becomes smaller, dependence on privileged sites is lessened, and with increasing security and more intensive land use, settlements can move closer and closer together. This is the basic prerequisite for the creation of structured relationships between settlements, ie the formation of settlement systems. In a simple settlement system, a number of settlements are subordinated to one big settlement. The communities living in the surrounding area are dependent on the “central functions” of the center—temples, warehouses, specialist craftsmen, chiefly administration, and so on—even as the center depends on its retinue of settlements to achieve specialization in urban functions. Three-tier systems are more complex settlement systems in which “towns” mediate between the “villages” on the one hand and the “city” on the other. Four-tier systems are yet more complex—it is here that pressures build up for state formation.


Typology of settlement systems. Source: Hans J Nissen.

Political authority in two-tier, three-tier, and four-tier systems takes the form of simple, complex, and paramount chiefdoms respectively. Chiefdoms are polities in which all political authority is vested in the person of the chief—there is no functional differentiation of political authority. In a state, on the other hand, authority is functionally differentiated—defense, policing, tax collection, civil administration, et cetera, are performed by different public officials. In other words, in a state we have a differentiated bureaucracy exercising independent but partial authority. This matters greatly because chiefdoms cannot be scaled up, while states can and often did. Beyond territorial expansion, the capacity for social action increases dramatically with state formation. The fundamental question of state formation then becomes: Why, and under what conditions, did some chiefdoms make the transition to statehood? The modern theory of primary state formation, which Scott studiously ignores, suggests that it was the struggle for survival with rival chiefdoms that compelled some chiefs and not others to split the atom of chiefly power.

In the Near East, the simple settlement systems of 6,000 BCE slowly evolved towards more complex forms over the next two thousand years. All along the Euphrates, as well as in Syria and Iran, settlements became packed together as food production was further mastered. Complex settlement systems allowed increasing division of labor. Development at this time was still regionally undifferentiated, with a curious exception. Babylonia lagged behind the plains of the northern Euphrates, western Iran and Syria in settlement complexity. Yet, as we shall see, it was here that the decisive breakthrough to state formation and high civilization would be achieved.

The Breakthrough to High Civilization and the Uruk World-System

In the middle of the fourth millennium BCE, a sudden climate shift made conditions cooler and drier. There was a gradual drop in sea level and precipitation. From the moment that the sea began to recede (it would drop five meters over the next six thousand years), southern Mesopotamia was opened to much more extensive habitation. A prolonged period of an archipelago of isolated settlements was suddenly followed by one in which the land was settled so densely that nothing like it had ever been seen before.

[In] the period of early high civilization communal energies were evidently released to such an extent that speedy and comprehensive changes took place in all fields of life. This energy was so great that the rate of change increased rather than decreased in the following two to three hundred years during which a development took place in Babylonia that influenced the course of history more enduringly than many another.[10]

Instead of the three-tier systems that had hitherto been seen at the most advanced sites in the Near East, we finally see unambiguous evidence of a four-tier settlement system, with Uruk at the center. With an enclosed area of 5.5 square kilometers, Uruk was more than twice as large as Athens at its peak in the classical age. Even Rome at the very peak of the great second-century boom was only twice its size. And this was thirty centuries before Rome.

City sizes compared. Source: Hans J. Nissen, The Early History of the Ancient Near East, 9000-2000 B.C.

In the “Late Uruk” period, for the first time in history, we find unambiguous archaeological evidence of writing, cylinder seals, large-scale works of art, monumental architecture, and increasingly, big canals and irrigation. The record suggests a highly differentiated public administration and a dramatically enhanced capacity for social action. But Uruk was not just a strong state. There is considerable evidence of an increasingly sophisticated division of labor, unprecedented technical innovation, and long-distance trade covering the entire Near East with the attendant international division of labor. Uruk, the latecomer, would not only out-innovate the regional incumbents, it would set the standards that everyone else would try to replicate or risk getting stuck in a state of dependency on the leader.


The Uruk World-System

Susiana in southwestern Iran was considerably more developed when the Babylonian breakthrough began. But it was soon left in the dust.

Unlike the development in Babylonia, where a slow change took place from the painted pottery of the late Ubaid period to the unpainted pottery—made on the potter’s wheel—of the Uruk period, with some repetition and parallel developments in the intermediate phase, in Susiana, the richly painted pottery of the Late Susiana phase was followed directly—with no noticeable transition—by unpainted pottery thrown on the wheel, of the sort we know from the Uruk period in Babylonia. This observation suggests that this clearly recognizable pottery was developed in Babylonia and taken over by Susiana. This statement can be expanded. In Susiana, it was not only the special way of producing pottery that was taken over, but also almost all the other developments we have learned of in Babylonia.…From the sort of rapid changes that took place in Babylonia the conclusion can quite unambiguously be drawn that the aim was to make writing more readily usable, so that the range of uses for writing obviously became larger. However in Susiana the form of writing remains remarkably static.…It is almost superfluous to point out that changes in building techniques because of the introduction of the plano-convex brick…also did not occur.…In Babylonia a culture was constantly expanding both internally and in relation to the outside world. [Initially] the civilization of the Late Uruk period could be taken over almost in its entirety by Susiana….[But] with the sharp pace of change in Babylonia, the time soon had to come when new organizational structures created there no longer had any meaning for the management of living conditions in Susiana, because of differences in scale.[11]

Upstream from the Euphrates, in northern Mesopotamia and Syria, the relationship became more unequal still.

In a completely independent local development, individual settlements were founded that are absolutely identical with what we know from Babylonia and Susiana, down to the last pottery shred in the inventory. Communication, which must have taken place in some way, can only be detected to a limited extent in the inventory of objects found in the Syrian settlements. There does not seem to have been any traffic in the opposite direction. If, in addition, we consider that these alien types of settlements were all either directly on the Euphrates or on its tributaries, there seems to be a relatively simple explanation for the whole situation. We are most probably dealing here with settlements established by people who came there directly from the southern lowland plains. Without doubt, the securing of trade interests had a part to play here—the Euphrates and its tributaries had always been the preferred trade routes—and no other motivation has been revealed to us.[12]

Put bluntly, the Babylonian settlements in Syria were colonies. Although they lasted for only a few decades, they were built to last. Nor was the Babylonian colonial presence restricted to Syria. 

The expansion of Babylonian civilization along the Euphrates to the north, which led to the establishment of the settlements on Syrian territory, did not come to a halt at the borders of the great plain. It then spread along the course of the river into the mountain regions, where the type of understanding arrived at with the local civilization was clearly different from that in the area of the Syrian plain.…As in the Zagros area, we encounter relationships between finds here that are clearly not local and unambiguously show influences from the sphere of the Late Uruk period civilization.

Was this expansion a simple case of imperial predation? Or were there in fact mutual gains from exchange and interaction? Surely both must have been in play.

Whether we talk of “expansion” or “attraction,” however, we shall obviously not get any closer to an explanation if we see only either trade or imperialist expansion behind this influence, or even if we assume that it was a quasi-inadvertent expansion. It is possible that two factors went hand in hand in this development. On the one hand, there was the fascination on the part of the “underdeveloped” areas when faced with the complex way of life and the knowledge of Babylonia; on the other, Babylonia needed to organize the import of raw materials, and perhaps also the export of manufactured goods, to satisfy a rapidly expanding internal economy.…The fact that from the east across the whole northern area and into the west, all the neighboring regions were in one way or another incorporated by Babylonia either directly or indirectly into a network of relationships stronger than there had ever been before shows that this extension of influence was not aimed directly at one region, but spread out in all directions.[13]

We don’t know whether the Babylonians relied on superior weaponry to maintain their presence and whether there was armed resistance by the “colonized,” or whether the magnetism of Babylonian culture was enough to seduce the natives. What we do know is that the inhabitants of Babylonia regarded everyone else as barbarians.

Barbarians and Civilization

Babylonia remained in the core of the central civilization long after Uruk’s fall from primacy. The city itself remained in continuous occupation for forty centuries. The Epic of Gilgamesh, the world’s first epic, was composed at Uruk. Other Babylonian city-states—Ur, Lagash, Kish, Nippur—became strong and competed for leadership. Babylonia itself was briefly unified by Sargon of Akkad in what was the first territorial state in the Near East. (Egypt had already been unified into a territorial state at the time of Uruk’s flowering.) But it too shrank back to a city-state within three generations. Great territorial states arose outside Babylonia that would come to dominate the Near East in the Bronze Age. But the flame of civilization never went out. Following in the footsteps of the central civilization, even more vibrant high civilizations arose all along the Eurasian rim—Rome, Greece, Egypt, Persia, India, China, and beyond. But well into modern times, perhaps even until the late 19th century, most of the Earth’s landmass lay outside the zone of civilization.

On the edges of settled societies, and in mountain redoubts and suchlike within them, were frontier-zones or counter-zones. These ungoverned spaces within and without offered shelter from the tax collector and provided refuge to those who rejected or were rejected by the dominant culture. The nonstate peoples in the frontier-zones were not savages; they were barbarians. They preyed on settled society; traded with it; resisted its diktat; and, if they became unified and strong, mounted invasions in a bid to capture the state. The agrarian states traded with them, tried to suppress them, and, if punitive expeditions failed, bribed them to keep the peace. All civilized societies had to deal with ‘inner barbarians’—nomads, gypsies, mountain peoples, and so on—and ‘outer barbarians’ who lived in the frontier-zones proper.

Barbarians must be thought of as counter-people. They exist only in symbiosis with, or as parasites on, settled society. Forest dwellers far removed from state societies are not barbarians. Barbarians begin when taxes stop. They come into history as counter-people in the state discourse, as security threats. The frontier-zones are a fascinating and worthwhile subject of study, and Scott is right to direct our attention to them. But then he says:

The longer states existed, the more refugees they disgorged to the periphery.…The process of secondary primitivism, or what might be called “going over to the barbarians,” is far more common than any of the standard civilizational narratives allow for.…A great many barbarians, then, were not primitives who had stayed or been left behind but rather political and economic refugees who had fled to the periphery to escape state-induced poverty, taxes, bondage, and war. As states proliferated and grew over time, they ground out ever greater numbers who voted with their feet.[14]

Here again he is working strenuously against the grain of the evidence. The case he marshals shows instead that it was precisely instability in state society, and especially the breakdown of state power, that generated exodus. But how can the devastation unleashed by the breakdown of state power be blamed instead on state power?

Causes for flight varied enormously—epidemics, crop failures, floods, salinization, taxes, war, and conscription—provoking both a steady leakage and occasionally a mass exodus.…It is particularly pronounced at times of state breakdown or interregna marked by war, epidemics, and environmental deterioration.[15]

Yes, even in times of stability, people did leave state society to go join the barbarians in the frontier-zones. But this was a mere trickle of outlaws, social rejects, pariahs, and rebels. When state societies provided law and order, net migration was almost certainly inward as people left the periphery for the core, to make their mark in the city. People at the edge of the Roman world could not wait to eat that awful fish sauce, don the toga, and take baths in public. Make no mistake: When state societies flourished their magnetism was considerable. This was as true of Babylonia fifty centuries ago as of the present-day United States.


What we have in Scott is something that superficially looks like an impressive case against state society but is in reality a one-sided counter-history masquerading as a scholarly work, written by a Yale professor who could not exist for a moment without state society. Yes, the path to civilization was brutal. All the more reason, then, to celebrate the pioneers of the Neolithic and the earliest state societies, starting with Babylonia.


[1] Moore, Hillman, and Legge, Village on the Euphrates, p. 393. Quoted in Against the Grain.

[2] Against the Grain, p. 33.

[3] Ibid., p. 159.

[4] Ibid., p. 160.

[5] Ibid., p. 181.

[6] Ibid., p. 183.

[7] Ibid., p. 17.

[8] Hans J. Nissen, The Early History of the Ancient Near East, 9000-2000 B.C.

[9] Ibid.

[10] Ibid.

[11] Ibid.

[12] Ibid.

[13] Ibid.

[14] Against the Grain, pp. 343-345.

[15] Ibid., p. 342.


The Restoration of the Corporate Profit Share

In the exchange with Brenner, we were talking about profit rates, which are defined as the ratio of profit to the capital stock. In what follows we document the empirical evidence for another measure of corporate profitability, the profit share, ie the ratio of corporate profits to GDP. This alternative metric is useful because measures of the capital stock are highly sensitive to methodology, especially the treatment of depreciation. The profit share is not the answer to the question, How profitable are US firms? It is the right answer to the question, What portion of national income ends up in the coffers of US corporations?
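To make the distinction concrete, here is a minimal sketch with made-up magnitudes; only the rounded $18.5tn GDP figure appears later in the text, while the profit and capital-stock numbers are purely illustrative:

```python
# Two profitability metrics, illustrated with hypothetical magnitudes.
profit = 1.5          # corporate profits, $tn (illustrative)
capital_stock = 15.0  # net capital stock, $tn (illustrative; measurement-sensitive)
gdp = 18.5            # GDP, $tn (rounded figure used later in the text)

profit_rate = profit / capital_stock  # Brenner's metric: profit / capital stock
profit_share = profit / gdp           # this post's metric: profit / GDP

print(f"profit rate:  {profit_rate:.1%}")   # 10.0%
print(f"profit share: {profit_share:.1%}")  # 8.1%
```

The rate depends on the denominator that is hardest to measure; the share uses only flow data from the national accounts.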


Figure 1. Pretax profit share of US firms, 1935-2016.

Figure 1 displays the ratio of the pretax profit of all US corporations to GDP since 1935. We see that the profit share was very low during the Great Depression, rose mightily after US entry into World War II, largely stayed in double digits until 1969, and fell dramatically thereafter. The profit share was partially restored in the mid-nineties, and much more robustly by the mid-2000s. Since the GFC, the profit share has been close to its historic peak.

Brenner would argue that financial sector profits are an artifact of asset price booms; that we should be looking at the profits of nonfinancial firms. Figure 2 displays the profit share of US nonfinancial corporations. We see that their profit share has also been restored to the postwar level of around 8 percent of GDP.


Figure 2. Profit share of US nonfinancial firms.

The complement of Figure 2 is the profit share of the financial sector, here operationalized as FIRE (finance, insurance, and real estate). Figure 3 displays the financial sector share. Financial profits are at their highest level of the postwar period, about 0.5 percent of GDP (roughly $100bn) higher than the postwar average, largely by central bank design, aimed at strengthening the balance sheets of US financial institutions. Note the dramatic crash during the GFC.


Figure 3. Pretax profit share of US FIRE sector.

Figure 4 displays the effective tax rate on US corporations, obtained by dividing the difference between pretax and post-tax profit by the former. We see that the effective corporate tax rate rose dramatically in World War II, stayed above 40% in the fifties, and has been stepping down since. Remarkably, it is now back at levels last seen before the war.


Figure 4. Effective corporate tax rate for US firms.
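The effective rate plotted in Figure 4 is a simple transform of the pretax and post-tax series; a minimal sketch, with hypothetical numbers in the example:

```python
def effective_tax_rate(pretax, posttax):
    """Effective rate = (pretax - posttax) / pretax, as in Figure 4."""
    return (pretax - posttax) / pretax

# Hypothetical example: $100bn pretax, $79bn after tax.
print(f"{effective_tax_rate(100.0, 79.0):.0%}")  # 21%
```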

Unsurprisingly, this has led to a dramatic revival of the post-tax profit share. See Figure 5. The after-tax profit share is now about 3-4 percent of GDP higher than that prevailing through most of the 20th century. US GDP is about $18.5 trillion, so corporations are pulling in around $600-700 billion a year more than they used to.


Figure 5. After-tax profit share.
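The dollar figure follows directly from the percentages; a quick check of the arithmetic, using the rounded $18.5tn GDP figure from the text:

```python
gdp = 18.5e12  # rounded US GDP from the text, in dollars

extra_low = 0.03 * gdp   # 3 percent of GDP
extra_high = 0.04 * gdp  # 4 percent of GDP

print(f"${extra_low / 1e9:.0f}bn to ${extra_high / 1e9:.0f}bn a year")  # $555bn to $740bn a year
```

The range brackets the $600-700 billion cited above.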

What have the firms done with all this cash? Figure 6 displays distributed earnings as a percentage of after-tax profits. Whereas in the postwar era firms retained around 60 percent, in the neoliberal era they have disgorged around 60 percent of their earnings to investors in the form of dividends and, increasingly, share buybacks.


Figure 6. Distributed profits.

Figure 7 displays the decomposition of US corporate profits into broad sectors. Nonfinancial Nonmanufacturing is the residual category, largely made up of services and extractive industries; FIRE is finance, insurance, and real-estate; ROW is receipts from abroad in gross terms, ie we have not deducted the profits earned by foreign firms. A number of observations are in order.


Figure 7. Corporate profit decomposition.

First, the share of manufacturers declined steadily from around 6 percent of GDP in the fifties to about 2 percent by the 1980s. It has since fluctuated around the 2 percent level. Some of the decline in the manufacturing share is definitely real: profitability was never really restored in the great mid-century rust-belt industries. But some of it is only apparent, due on the one hand to servicification, whereby value-added that had hitherto been embodied in the manufactured product is now provided as a service, and on the other to Baldwin’s “second unbundling,” whereby firms relocate production processes offshore, often within a few hours’ flying distance. In the former case, the same profit ends up in the nonfinancial nonmanufacturing sector; in the latter, it shows up as receipts from abroad (the ROW sector).

Second, whether or not due to the supply chain revolution, receipts from abroad now amount to 4 percent of GDP. That comes to more than $600 billion a year. Since the GFC, US firms have booked slightly more than 5 trillion dollars of profit earned overseas. In the same period, manufacturers’ profit came to $2.85tn and that for FIRE to $2.35tn. So the rest of the world is roughly as big as finance, insurance, real-estate and all of US manufacturing put together. This is of signal importance to the political economy of the United States.
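The comparison in this paragraph can be checked directly from the cumulative post-GFC figures quoted above (all in trillions of dollars):

```python
# Cumulative post-GFC profits quoted in the text, $tn.
row = 5.0             # profits booked abroad ("slightly more than 5 trillion")
manufacturing = 2.85  # manufacturers' cumulative profit
fire = 2.35           # FIRE cumulative profit

combined = manufacturing + fire
print(f"{combined:.2f}")  # 5.20: ROW is roughly as large as manufacturing and FIRE combined
```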

Third, the services and industries that make up the residual nonfinancial nonmanufacturing sector seem to have become the dominant sector of the economy. Yet, unpacking it, one is hard-pressed to find an actual sector of sufficient mass. Figure 8 displays some sectors of interest, chosen with an eye to US political economy. “Oil-based” includes all sectors heavily reliant on fossil fuels—oil and gas exploration, petroleum products, chemicals and plastics; “Tech” includes computer design and manufacture, information processing, media and entertainment; “Finance” includes only securities and credit intermediation; “Trade” includes wholesale trade, retail trade, warehousing and transportation. We sum the profits of each of these sectors over two three-year late-cycle periods, 2004-2006 and 2012-2014.


Figure 8. After-tax profits of selected sectors.

The dominance of multinationals and finance is manifest. Receipts from abroad alone amounted to slightly more than $2tn in 2012-2014, dwarfing the $155bn in healthcare, $340bn in Tech, $375bn in oil-based, and $616bn in trade. It is more than twice as big as US manufacturing whose profits in 2012-2014 were $910bn. At $1.5 trillion, only finance can compete with the multinationals.

In the frame of the investment theory of party competition, I posited that finance and multinational firms have congruent interests and their political alliance constitutes a hegemonic bloc of investors in the system of 1980; and that this was the source of the stability of the neoliberal consensus. The evidence suggests that we should think of multinational firms as the stronger party in that alliance.




Class Reproduction in America

I attended the Intelligence Squared debate on America’s economic outlook. In that debate I heard something that stayed with me, but whose significance I have only now come to appreciate. Asked if he agreed with the motion, “The GOP Tax Reform Bill Will Improve Our Outlook for Growth,” Stephen Moore, the architect of the bill, responded in the affirmative. Shaking both his arms vigorously, he declared:

I agree with these two gentlemen that capital is the name of the game. Whoever has the most capital wins.

Stephen Moore, a card-carrying free-market fundamentalist, has been associated with the Heritage Foundation and the Cato Institute. He has even written for the Weekly Standard and the National Review. But the “two gentlemen” whom he said agreed with him, and who did not contest the claim despite ample opportunities, were Simon Johnson and Jason Furman. Johnson, an economics professor at MIT known for his hard-hitting work on inequality and political capture by Wall Street, received the Main Street Hero Award. Furman, the “wonkiest wonk” in the Obama White House who helped draft the Affordable Care Act, was Al Gore’s economic policy director and served in the Clinton administration. That these two card-carrying progressive liberal economists agree with Moore that “whoever has the most capital wins” is, as we shall see, highly significant.

We saw in the previous post that parental income is a poor predictor of kids’ income. This is because the controlling variable for an individual’s lifetime earnings is the prestige of the school they attended, and parental income has very little effect on the kid’s odds of getting into a prestige school. The upper class, defined as high-income families, is constantly being invaded by the smart kids of the hoi polloi. Parental income does have an economically significant effect on kids’ income after controlling for school prestige. But it is secondary and gets swamped by the influx from below. In other words, the upper class, defined as the top blah percent of income, does not have a viable mechanism to reproduce itself.

The problem is that we have an ill-defined notion of the upper class. We must define the upper class not in terms of income but rather in terms of wealth. For it is not high-income but wealthy families who can reproduce their advantages generation after generation. Note that wealth is “twice” as concentrated as income, in the sense that the Pareto decay parameter is roughly half as large. Figure 1 shows the shares of the top 1 and top 10 percent for income and wealth in the United States. Note the wealth crash of 1966-1979. We are back to high classical levels of income inequality and closing the gap in terms of wealth inequality. Note also the dramatic peaks of top wealth shares during the late-twenties wealth boom.


Figure 1. Top wealth and income shares in the United States. (Source: WID)
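The sense in which halving the Pareto decay parameter makes a distribution much more concentrated can be illustrated with a toy calculation. For a Pareto tail with decay parameter a > 1, the top fraction p holds a share p^((a−1)/a) of the total. The particular values of a below are purely illustrative, not estimates:

```python
# Top-p share implied by a Pareto tail with decay parameter a (> 1):
# share of the top fraction p  =  p ** ((a - 1) / a).
def top_share(p: float, a: float) -> float:
    return p ** ((a - 1) / a)

# Illustrative decay parameters only (not estimates): halving a fattens the tail.
income_like = top_share(0.01, 3.0)   # top 1% share with faster decay
wealth_like = top_share(0.01, 1.5)   # same, with half the decay parameter
print(income_like, wealth_like)      # ~0.046 vs ~0.215
```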

Pfeffer and Killewald find that not only is parental wealth a strong predictor of kids’ wealth; grandparents’ wealth is also a significant predictor of grandkids’ wealth (even after controlling for parental wealth). Because wealth may “skip” generations, 2-generation dyadic correlations do not fully capture the inherited advantage. Using an inter-generational study, they find an elasticity of 0.44 for log net worth after controlling for age-effects (ie, a 1 percent increase in parental net worth is associated with a 0.44 percent increase in the kid’s net worth) and a slope of 0.39 for net worth percentile.

In Figure 2, we replicate their baseline result (without controlling for age-effects). On the X axis we have self-reported parental wealth in 1989 and on the Y axis we have self-reported wealth of their kids in 2015. We find a robust elasticity of 0.46.


Figure 2. The net worth of parents is a strong predictor of the net worth of kids.
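For concreteness, the elasticity in Figure 2 is just the slope of an OLS fit of log kids’ net worth on log parental net worth. A minimal sketch on synthetic data (the sample size, noise level, and true slope of 0.46 are made up purely for illustration):

```python
import numpy as np

# The intergenerational wealth elasticity is the slope of an OLS regression of
# log kids' net worth on log parental net worth. The data below are synthetic,
# generated with a true elasticity of 0.46, purely to illustrate the mechanics.
rng = np.random.default_rng(0)
log_parent = rng.normal(12.0, 1.0, size=5000)                    # log parental net worth
log_kid = 1.5 + 0.46 * log_parent + rng.normal(0.0, 0.5, 5000)   # log kids' net worth

slope, intercept = np.polyfit(log_parent, log_kid, deg=1)
print(round(slope, 2))   # recovers ~0.46
```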

In Figure 3, we replicate their second result. On the X axis we have self-reported grandparents’ net worth in 1989 and on the Y axis we have self-reported wealth of their grandkids in 2015. Again, we find a robust elasticity of 0.28.


Figure 3. The net worth of grandparents is a strong predictor of the net worth of grand-kids.

The results on wealth and income are in considerable tension with each other. Individuals not born into wealth but earning high labor incomes can save much of their high earnings, thereby accumulating wealth over their lifetimes. This is the source of the entropy facing the upper class defined in terms of wealth. On the other hand, high rates of return on wealth allow the wealthy to pull further away from the pack. In this struggle for class reproduction, who has the upper hand?

In order to interrogate the struggle between entropy and the upper class, ie in order to ascertain the failure rate of wealth-holding families to reproduce themselves, we must read Piketty’s numbers in an entirely new light. Piketty and Zucman have shown that the controlling variable is the difference between the rate of return on wealth and the growth rate of income, r-g.

…our three inter-related sets of ratios—the wealth-income ratio, the concentration of wealth, and the share of inherited wealth—all tend to take higher steady-state values when the long-run growth rate is lower or when the net-of-tax rate of return is higher. In particular, a higher r-g tends to magnify steady-state wealth inequalities. We argue that these theoretical predictions are broadly consistent with both the time-series and the cross-country evidence.

It can be derived from accounting identities that the share of inherited wealth in the stock of total wealth is a function of the histories of the rate of wealth transmission (the “rate”), the labor share, the savings rate, and the difference between r and g (all of which are time-varying), as follows.
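In stylized form (a reconstruction from the verbal description below, not necessarily Piketty and Zucman’s exact expression): writing φ(t) for the inherited share, b for the rate of wealth transmission, α for the labor share, σ for the saving rate, and r−g for the excess rate of accumulation,

```latex
\varphi(t)
= \frac{W_{\text{inherited}}(t)}{W(t)}
= \frac{\displaystyle \int_{-\infty}^{t} b(s)\,
        e^{\int_{s}^{t} \left( r(u) - g(u) \right) du}\, ds}
       {\displaystyle \int_{-\infty}^{t} \left[\, b(s) + \sigma(s)\,\alpha(s) \,\right]
        e^{\int_{s}^{t} \left( r(u) - g(u) \right) du}\, ds}
```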


Integral representation of inherited wealth share.

A function of this form is called an integral representation. Intuitively, the numerator is inherited wealth, ie wealth transmitted to progeny, while the denominator is total net worth, ie the sum of inherited wealth and earned wealth. The share of inherited wealth is an increasing function of the rate of transmission and the “excess” rate of accumulation, r-g. And it is a decreasing function of the labor share and the savings rate. Inherited wealth is obtained by integrating/summing up the rate of transmission with exponential weights with exponential “parameter” r-g; earned wealth, analogously, by integrating the product of the labor share and the saving rate with the same weights. The parameter r-g therefore emerges as the controlling variable.

Beyond r-g, the rate of wealth transmission emerges as a key social-demographic variable. It is a product of the ratio of the wealth of the dying (corrected for gifts already made to progeny) to average wealth and the mortality rate. Intuitively: Suppose that there are no gifts; only estates. The rate at which wealth is being passed on depends on the rate at which wealth-holders are dying and their wealth relative to the living.
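In symbols, with μ* the ratio of the (gift-corrected) average wealth of decedents to the average wealth of the living and m the mortality rate,

```latex
b_t \;=\; \mu^{*}_t \, m_t
```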


The rate of wealth transmission is a function of social norms of gift/bequest and savings behavior. If people largely save for old age then they will exhaust most of their savings and the rate of transmission will be low. If the bequest motive is strong on the other hand, wealth will be transmitted at higher rates. Of course, we must correct for gifts made to progeny before death since there is substantial evidence that gifts are becoming an increasingly major conduit of wealth transmission.

At this point I am going to suppress my desire to share all of their graphs. You can find them at the end of the paper on page 61. I will share just three more. First, the inheritance flow/rate of wealth transmission in France, where the data allow quite precise measurements over the long run, with the understanding that the “U”-shaped curve is a common property of major Western countries.

Inheritance flow

And second, the share of inherited wealth in the stock of total wealth in France. We see that the share of inherited wealth is closing in on 70 percent, compared to the prewar peak of 90 percent in 1910. Piketty and Zucman report simulations suggesting that France and other advanced countries will return to prewar levels towards the end of the 21st century.


And third, the share of inherited wealth in Europe and the United States. In the US, the wealth of the dead relative to the living has surpassed the 1930 peak of more than 60 percent. In Europe, the share fell steadily from the 1910 peak of more than 70 percent all the way down to less than 40 percent in 1980, and is now at levels last prevailing in 1950, around 55 percent.

Time now to circle back to our question: Who has the upper hand? The foregoing analysis suggests that the answer is both broadly political and historically specific. None of these numbers is a coincidence. All the drivers identified by Piketty and Zucman—rates of return, the capital/labor split, the low savings rate, and perhaps even low growth rates—can be traced to the neoliberal counterrevolution. Above all, the sociopolitical transformation implicated by this analysis is a regime that maximizes the rate of return on capital—whoever has the most capital wins. This was not the case in the corporate-liberal synthesis of the postwar era, a regime geared to maximize economic output, not wealth.

What fueled this sociopolitical transformation that we call the neoliberal counterrevolution was the wealth crash of 1966-1979 that we noted earlier. It all started offshore. Burn has shown how the escape of capital from center-country control punctured the quasi-public international financial order, whereupon speculative attacks forced the center countries to relinquish control of capital flows and of hard-currency exchange rates. Back home, the stagflation crisis provided the opening for a spectacular attack on Keynesianism, monetary subservience, and the unions. The counterrevolution was consummated in November 1979, Duménil and Lévy argued, when, with the dollar plunging, Carter appointed Paul Volcker to the Fed after panicked conversations with New York bankers over the weekend. Interest rates were deregulated with the passage of the DIDMCA in 1980, passed by a Democratic Congress and signed into law by a Democratic President, thus effectively ending financial repression. Henceforth, instead of counter-cyclical fiscal policy aimed at maximizing output and employment, we would have monetary management of the macroeconomy geared to fighting inflation and maximizing wealth; instead of “a re-industrialization of America” (as the title of a special edition of BusinessWeek demanded in 1980) and the revitalization of the great industrial corporations (which underwrote the Treaty of Detroit, whereby labor shared in productivity growth), firms would be compelled to maximize shareholder value; instead of high taxes on capital and high incomes, we would have increasingly reactionary tax cuts; and above all, instead of financial repression we would have unfettered finance.

We have seen that income and wealth are implicated in three interrelated ways. First, in the definition of the upper class. We have seen how defining it in terms of the flow of income is no good, since such a class does not have a viable mode of reproduction in modern America. We must instead define the upper class in terms of the stock of wealth, since with that definition we have a concrete mode of class reproduction. Second, we have seen Piketty’s r-g, the difference between the rate of return on wealth and the growth rate of income, in a new light. It is the controlling variable for the inter-generational survival rate of the properly defined upper class. Third, income (GDP) and wealth (capital) map onto two very different modes of postwar political economy. In the postwar era, the entire system was geared to maximize the former. In the neoliberal era, the focus of the entire system has imperceptibly shifted to maximize the latter.

At issue also are the rigidities of the discourse on public policy, including the purely civil-society discourse of economics. For why is there no disagreement, let alone violent disagreement, among the three gentlemen? There is no compelling macroeconomic reason why public policy—both fiscal and monetary—should be geared to maximize wealth/return/shareholder value instead of output and employment (or for that matter median income or blue-collar wages). What we have instead is a political consensus forged and upheld by holders of capital.


Prestige Schools and the Big Sort

I suggested in the previous post that America’s elite reproduces itself through privileged access to higher education. Digging deeper into it, I realize that that is not quite what’s up. Parental income is not in fact a strong correlate of the kid’s income. See Figure 1. (We are using the same spreadsheet as the previous post.)


Figure 1. Parental income and kids’ income.

Something quite different is going on. Figure 2 reproduces the graph in Figure 1, but with the school tier identified. What I really like about this graph is that it makes abundantly clear precisely what is going on. The prestige of the university you attend largely determines your earnings. This is the big sort.


Figure 2. Prestige schools, parental income and earnings.

Figure 3 displays the mean earnings for different college tiers. The prestige university premium is the difference in earnings between graduates of prestige schools and those of non-prestige schools. Taking non-selective schools as the baseline, the premium is $13,334 for selective schools, $28,278 for highly selective schools, $45,371 for other elite schools, and $84,803 for Ivy Plus. On the other side of the great divide, taking never-attenders as the baseline, attending late adds $3,299, attending for less than two years adds $4,515, attending a two-year program adds $12,772, and attending a non-selective 4-year college adds $12,867. The prestige school premium is dramatically larger than the college premium.


Figure 3. Prestige University Premium.

My previous post suggested that rich households enjoy privileged access to elite schools. Yet the evidence for that is thin. Figure 4 shows that while school tier is an excellent predictor of income, parental income is a poor predictor of school tier.


Figure 4. School tier, parental income, and kids incomes.

Just to be sure, I tried fitting a multinomial model. Specifically, I fit an ordinal probit model that posits that the probability that you end up in one of n boxes depends on an unobserved latent variable as follows.
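This is the standard ordered-probit setup (the notation here is generic, not Chetty et al.’s): the latent propensity y* is linear in parental income percentile x with standard normal noise, and the observed tier k is determined by which interval between the estimated cutpoints y* falls into:

```latex
y_i^{*} = \beta x_i + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0,1),
\qquad
y_i = k \;\Longleftrightarrow\; c_{k-1} < y_i^{*} \le c_k,
\qquad -\infty = c_0 < c_1 < \dots < c_n = +\infty
```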


Figure 5 displays the fitted probabilities of getting into prestige schools for selected parental income percentiles.


Figure 5. Parental income barely affects the odds of the kid getting into top schools.

The probit model has a very poor fit, meaning that parental income is indeed a poor predictor of the odds. The basic reason is that the odds decay really fast, especially for the top tiers. Because the model is looking for the gradient across the entire parental income distribution, the relatively high rates at the very top end barely get picked up. Even so, the coefficients are what you’d expect and they are significant. (There is indeed nothing wrong with the probit model itself. See postscript below.)

Our interpretation is that for the bulk of the population parental income has little effect on the kids’ odds of getting into prestige schools. Whence the posited channel of class reproduction—parental income increases the odds of the kid getting into a prestige school—is empirically indefensible.

This of course does not mean that parents do not pass on their advantages to their children. They do. But not through the channel we posited. Specifically, parental income becomes a good predictor of kids’ income after we control for school tier. Figure 6 shows that this effect is significant for all school tiers, and that it is much more significant for lower tiers than for higher tiers.


Figure 6. Effect of parental income on kids’ income after controlling for school tier.

We also run a full model for kids’ income with parental income, dummies for school tiers, and percent unemployed; we find all variables to be significant and obtain an excellent fit. Figure 7 displays the scatter plot.


Figure 7. The full model.

To sum up: There are two basic channels through which parental advantages can be passed on to their children. (1) Children of richer parents can get into prestige schools at higher rates. (2) Parental privilege can help children achieve higher earnings at the margin given their school tier. Somewhat counter-intuitively, the empirical evidence is very weak for (1) and very strong for (2). But neither of these is decisive. In the big picture, the university you went to trumps everything else. In other words, what we are seeing is consistent with the big sort, not neofeudalism.

The evidence can be read as supporting a triumphalist narrative of America as a great meritocracy. Or it can be read as suggesting an intensification of the neoliberal condition. For what it suggests above all is the picture of 18-year-olds engaged in brutal market-like competition—a fight to the death.

Postscript. Petey Bee’s comment got me worried about the probit model. I realized that there was an easy way to check that I wasn’t missing something: Replace parental income with kids’ earnings. If the results look reasonable, we are alright. The following figure displays the fitted probabilities by earnings percentile for all school tiers. They all look kosher. Really stunning how you are more or less guaranteed to have gone to an Ivy Plus school if you are in the top 5 percent of earners.


Post-postscript. Ron and Jaime, personal friends and smart scientists both, pointed out the incredible separability of Figure 2. It does look “fishy”. Why? There are, I think, two reasons for the incredible separability of the data. The first is that the data is coarse. “There is one row for each parent percentile [101] and college tier [15]” for a total of 1,515 observations. So there has already been averaging within percentile-tiers. The raw underlying data would no doubt look messier. The second is my fault: I merged some nearby tiers to gain clarity. Here is the offending graph for the original tiers.


Figure 2 with original 15 school tiers.

I apologize. I should’ve been clearer. That being said, there is nothing nefarious involved in either Chetty et al.’s decision to average the data at the percentile-tier resolution or my decision to merge certain categories. If one were to use raw data and the original tiers, it would look considerably messier, but the systematic differential would not disappear and the results above would no doubt still hold.



Parental Income and College Enrollment in America

Tip of the hat to Adam Tooze for flagging a very interesting data set on the nexus between parental incomes and college enrollment from the Equality of Opportunity Project. Piketty shared an astonishing graph that shows that college enrollment rates are literally a linear function of parental income. Figure 1 reproduces that graph. All data displayed below is from this spreadsheet.


Figure 1. Enrollment rates by parental income.

The data set allows us to dig much deeper. Figure 2 recomputes the same graph as Figure 1, but for enrollment rates in 4-year colleges. The percentages drop rapidly. Whereas 80 percent of the 80-percenters’ kids are enrolled in some college, only 50 percent are enrolled in 4-year degree programs. This is very significant, as most white-collar jobs require a 4-year college degree.


Figure 2. Enrollment in 4-year colleges and parental income.

Figure 3 displays enrollment in public or private “selective” universities. Clearly, the authors’ definition of selective is not very selective. Presumably these are universities where one is not guaranteed to get in.


Figure 3. Enrollment in selective universities and parental income.

Figure 4 displays the same graph for “highly selective” schools. These can be more properly deemed to be selective. Only 10 percent of the 80-percenters’ kids get in here.


Figure 4. Enrollment in highly selective universities and parental income.

Figure 5 displays the same for “elite” universities. These are highly ranked schools; probably making the list of top 30 or 50.


Figure 5. Enrollment in elite universities and parental income.

Figure 6 displays the same graph for “Ivy Plus” schools. A potential list of these schools can be found here—although there is no guarantee that the mapping is one-to-one and onto. Chetty et al. define Ivy Plus as the Ivies plus Stanford, MIT, Chicago and Duke. 


Figure 6. Enrollment in Ivy Plus universities and parental income.

It is hard to compare these five graphs because they have very different scales. In order to compare them, we log-transform the data. Figure 7 displays them all together on a log scale. Here’s how to interpret it: exponential growth looks linear on the log scale; where a curve grows faster than linearly on this scale, the raw series is super-exponential—it grows very, very rapidly (which here means access is much more unequal at the top). So what this graph says then is that access to prestige schools is largely restricted to the upper class.


Figure 7. All together now.

Table 1 displays enrollment rates in elite schools for selected parental income percentiles. Less than 2 percent of the kids of 75-percenters get into elite schools compared to more than 8 percent of the 95-percenters and some 14 percent of the 98-percenters.

Table 1. Enrollment in elite schools by parental income.
Parental income percentile Percent in elite schools
25 0.64%
50 1.05%
75 1.85%
80 2.35%
90 4.48%
95 8.25%
98 13.96%
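The super-exponential gradient is visible directly in Table 1: the slope of log enrollment per percentile is several times steeper at the top of the parental income distribution than in the middle. A quick check using the table’s own numbers:

```python
import math

# Enrollment rates in elite schools by parental income percentile (Table 1).
rates = {25: 0.0064, 50: 0.0105, 75: 0.0185, 80: 0.0235,
         90: 0.0448, 95: 0.0825, 98: 0.1396}

# Slope of log enrollment per percentile, middle vs top of the distribution.
mid_slope = (math.log(rates[75]) - math.log(rates[50])) / (75 - 50)
top_slope = (math.log(rates[98]) - math.log(rates[95])) / (98 - 95)
print(round(mid_slope, 3), round(top_slope, 3))  # ~0.023 vs ~0.175
```

The log slope at the very top is several times the mid-distribution slope, which is exactly what super-exponential growth on the log scale means.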

These numbers dramatically underscore how higher education functions as a mode of class reproduction in modern America. We are very far indeed from the land of opportunity.


Why Did the Soviet Union Commit Suicide? Part I: Economic Stagnation

This is the first in a series of posts about the Soviet capitulation.

‘We will bury you,’ Khrushchev thundered in 1956; by the time he banged his shoe at the UN in October 1960, the promise seemed entirely credible. As Maddison’s data would later reveal, Soviet per capita GDP had grown at 3.3 percent per annum in the 1950s, compared to 1.7 percent in the United States. In 1957, the Soviets had put the world’s first satellite into space and finally acquired ICBMs capable of striking American cities. Strategic parity with the United States was on the horizon. In 1956, the Hungarian uprising had been crushed and total Soviet domination of Eastern Europe assured. Communist China remained firmly in the Soviet bloc. And in 1959, the Cuban revolution had delivered a reliable ally 90 miles from the coast of Florida.

A generation later, Soviet confidence had vanished. The Soviets had indeed achieved strategic parity; although it took a full decade. Yet, as soon as parity had been achieved Soviet economic growth ground to a halt. The stagnation made the military-fiscal burden increasingly unsustainable. Figure 1 displays the 10-year moving average of change in the natural log of Soviet per capita GDP. (So each observation captures growth in per capita income over the previous decade.)


Figure 1. Soviet per capita GDP growth. (Source: Maddison)
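The series in Figure 1 can be computed in a couple of lines from an annual per-capita GDP series: take first differences of the natural log, then a trailing 10-year mean. A sketch on a synthetic constant-growth series (the real input would be Maddison’s annual data):

```python
import numpy as np

# Trailing 10-year mean of annual log per-capita GDP growth, as in Figure 1.
# The input series here is synthetic (constant 3 percent growth), purely to
# show the mechanics; the real input would be Maddison's annual series.
years = np.arange(1950, 1991)
gdp_pc = 2841.0 * np.exp(0.03 * (years - 1950))

log_growth = np.diff(np.log(gdp_pc))   # year-over-year log growth
trailing = np.convolve(log_growth, np.ones(10) / 10, mode="valid")
print(round(float(trailing[0]), 4))    # 0.03 by construction
```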

As you can see, Soviet economic growth fell off a cliff after 1973. In 1950-1973, Soviet per capita income had grown at a remarkable 3.29 percent per annum. In 1974-1985, the command-shadow economy clocked a measly 0.85 percent. Table 1 compares the Soviet performance in two periods with the US, West Germany, and France.

Table 1. Real per capita GDP growth
(Source: Maddison)
Soviet Union United States France West Germany
1950-1973 3.3% 2.4% 3.9% 4.9%
1974-1985 0.9% 1.8% 1.6% 2.0%

The slowdown was, of course, far from restricted to the Soviet Union. It was across the board. But nowhere was the stagnation quite so pronounced. Even at the peak of the postwar boom, the Soviet Union had lagged behind Western Europe, especially West Germany. Now it fell behind even the United States. In the 12 years before Gorbachev’s ascent, Soviet income per head grew at half the pace of the United States. Figure 2 displays the performance of the four powers.


Figure 2. Per capita income growth in the Soviet Union, the United States, France and West Germany. (Source: Maddison)

Let’s briefly note the Soviet postwar achievement in absolute terms. Since we are using 1990 international dollars, we can benchmark Soviet numbers against the World Bank’s classification from 1990. In 1950, Soviet per capita income was $2,841, right above the World Bank’s Upper Middle Income threshold. In 1960, when Khrushchev made his promise, it had risen to $3,945; by 1973, it had reached $6,059, within striking distance of the High Income level. Khrushchev had essentially promised the achievement of developed-country status by 1980. If we use the 1989 thresholds, the Soviets had already achieved High Income status in 1973. It is interesting to note that, had growth continued at the pace achieved in 1950-1973 for just seven more years, Soviet per capita income in 1980 would’ve been $7,634—above the World Bank’s 1990 High Income threshold of $7,620. But that was not to be. Soviet growth ground to a halt. In 1985 Soviet per capita income was $6,708.
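The counterfactual is simple compound growth: recover the 1950-1973 rate from the endpoints and extend it seven years. A sketch using the figures quoted above (small rounding differences from the text’s $7,634 are expected):

```python
import math

# Counterfactual: had 1950-1973 Soviet growth continued through 1980.
# Per-capita incomes in 1990 international dollars, as quoted in the text.
y1950, y1973 = 2841.0, 6059.0

cagr = math.log(y1973 / y1950) / (1973 - 1950)    # ~3.3 percent per annum
counterfactual_1980 = y1973 * math.exp(cagr * 7)
print(round(cagr, 4), round(counterfactual_1980))
# ~0.0329 and roughly $7,630: above the 1990 High Income line of $7,620
```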

Table 2. World Bank Classification Thresholds
1989 1990 2016
Lower Middle Income $580 $610 $1,006
Upper Middle Income $2,336 $2,465 $3,956
High Income $6,000 $7,620 $12,235

I am making such a fuss about this because I want to make a number of observations about the mid-1980s conjuncture. First, the Soviet Union was not anywhere close to being a poor country. Indeed, not only was it nearly high income; since it was dramatically more egalitarian than capitalist nations at a similar per capita income level, Soviet citizens were arguably better off than the citizens of those nations. Second, the Stalinist system of the command-shadow economy was consolidated in the 1930s and remained virtually unchanged until Gorbachev’s reforms. That system had delivered extraordinarily high growth in the 1930s, enabled the Soviets to defeat the mighty Wehrmacht, and again delivered very high rates of growth for thirty years after the war. Any notion that it was intrinsically flawed cannot stand up to this evidence. Third, the slowdown of the 1970s characterized the entire world economy. It cannot, therefore, be entirely blamed on the ossification of the Soviet system. It was most likely the result of the exhaustion of the century of unprecedented technological advance of 1870-1970, as Robert J. Gordon argued in The Rise and Fall of American Growth.

Having made these points, I’m going to walk them all back a little bit. First, in order to achieve strategic parity with the United States, the Soviets devoted an extraordinarily high proportion—roughly half—of their economy to defense and capital goods. This meant that Soviet consumers were worse off than they would’ve been with a more balanced economy. Second, the Stalinist command-shadow economy may have been less capable of dealing with a major transformation in industrial affairs. As Kotkin notes in Armageddon Averted: The Soviet Collapse, 1970-2000, the problem of obsolescence of machinery and overcapacity in the “rust-belt” industries affected all industrial nations at much the same time. But while capitalist nations managed to at least partially solve it by developing new regimes of accumulation, the Stalinist factory-based welfare and production system may have been particularly unsuited to the challenge. Third, as already noted, nowhere was the stagnation quite as pronounced as in the Soviet Union.

The bottom line is that Soviet economic stagnation was very real. Something had to be done. Perhaps something drastic. Reforms were indeed needed. But they need not have taken a form that destabilized the Soviet empire. In order to understand how the Soviet leadership lost their self-confidence, we must interrogate their understanding of the malaise. In Part II, we shall see that the economic stagnation was only the backdrop of a profound spiritual crisis among the Soviet elite.


Was the Great Recession a Natural Calamity?

Tip of the hat to Adam Tooze for flagging Annie Lowrey’s excellent round-up of the Great Recession’s impact on American society. After ‘the economy tipped into the deepest contraction of the post–World War II era’, Lowrey writes, ‘the Great Recession’s scars remain’. The recession exacerbated troubling movements already underway: erosion of middle-skilled jobs, vertical polarization of the labor market, decline in labor market participation, economic insecurity, racial polarization, regional polarization, and the opioid crisis. ‘A sicker, more unequal, more racially divided country: This is the legacy of the Great Recession.’ Her conclusions are worth quoting in full and bring the framing into sharp relief.

When the next recession comes the data on what to do about it will be there. Economists have pulled together plenty of studies of the dollar-for-dollar effectiveness of initiatives like extending unemployment insurance and increasing the size of the food-stamp programs, and the relative ineffectiveness of things like corporate tax cuts. Social scientists, social workers, and local officials have urged the importance of acting as quickly as possible to intervene, with efforts to stabilize financial markets, increase the deficit, and make monetary policy more accommodative. The country has now gone through three consecutive jobless recoveries, with downturns tending to amplify long-existing trend to hollow out the middle class, polarize the labor market, and hit already ailing regions hard. It seems likely that the next recession will do much the same.

The question is whether policymakers will take such evidence of the pain and scars left by the Great Recession into account. Congress is today on the verge of pushing forward a tax cut aimed at rich families and profitable corporations that will add more than a trillion dollars to the debt, with no real need for new economic stimulus at the moment. Meanwhile, it has declined to do much for the poorer families that are still feeling the worst effects of the last recession and have not yet recovered. The risk is that next time, they will get left even further behind.

In short, recessions are naturally occurring calamities like hurricanes or earthquakes. The policy questions they raise are about the effectiveness of various measures to deal with them. Macroeconomic studies provide insights into the shock; microeconomic studies allow us to explore the impact of the shock on different markets and social groups. We have learned a lot since 2008, and this acquired knowledge should be brought to bear ‘the next time’. A rational policy framework must incorporate all these insights to fight the next one. This is not Lowrey’s personal frame of reference. This is the dominant frame used by economists and laypeople to think about the Great Recession.

All frames are necessarily partial; they illuminate some aspects of reality and leave others in the dark. The recessions-as-natural-calamities frame is especially problematic. For the scale and virulence of the Great Recession was not the result of a random draw; nor was it independent of the economic policies pursued. The scale and virulence of the Great Recession was due above all to the unprecedented amplitude of the financial cycle. The recessions-as-natural-calamities frame leaves out the most important policy lesson to be learned from the catastrophe: financial booms are extremely dangerous and must be tamped down vigorously. The principal policy failure did not occur in 2009-2010; it occurred in 2004-2006. Policymakers and regulators failed to appreciate the build-up of great financial imbalances. And that failure led directly to the catastrophe of the Great Recession.

The US financial cycle. (Source: Claudio Borio)