
A Major Rethink is Underway at the Fed

United States monetary policymakers made their bones during the 1970s stagflation crisis. Figure 1 displays the macroeconomic vitals and the policy rate from the mid-1960s to the mid-1990s.


Figure 1. Unemployment, core inflation, output gap and the policy rate from the mid-1960s to the mid-1990s.

The stagflation crisis taught central bankers that inflation can be very costly to tame; that inflation expectations play a dominant role in the inflation process and are even harder to tame; and that elected officials with an eye on the next election have an inflationary bias, so the Fed had to be sufficiently independent of politics to administer the bitter medicine. Most of all, they learned that the Fed had to stay one step ahead of inflation. Specifically, they learned to tighten policy in anticipation of an acceleration in inflation, and to rely on the Phillips curve to anticipate it. That is, when domestic measures of slack (such as the unemployment rate or the output gap) showed that the economy was overheating, they could reliably expect inflation to pick up.

Inflation expectations are said to be anchored if temporary shocks don’t change long-run expectations; that is, if they are relatively insensitive to incoming data. Consistent with its price stability mandate, the Fed wants the public’s inflation expectations to be anchored near the target rate of inflation. Although the Fed did not officially adopt a numerical target until 2012, when it chose 2 percent, the target was widely understood to be in that ballpark under Greenspan. Anchoring expectations nevertheless took literally decades. By the end of the 20th century, policymakers had finally managed to anchor inflation expectations close to the target.


Figure 2. Expected inflation in the United States.

But just as central bankers began to congratulate themselves for finally having anchored the public’s inflation expectations, the inflation process mutated. The Phillips curve, on which the Fed (and virtually all macroeconomic models) relied to forecast inflation, weakened in the 1990s and broke down after 2000. Figure 3 displays the breakdown.


Figure 3. The Phillips curve weakened in the 1990s and broke down in the 2000s.

As a result of the second unbundling and the further integration of global product markets, global slack replaced domestic slack as the strongest predictor of changes in inflation. In other words, the inflation process became globalized. But Fed policymakers continue to rely on measures of domestic slack to anticipate inflation even as they concede that the relationship has weakened.

Since measured domestic slack has vanished, the Fed expects inflation to be around the corner. It has hiked four times (by 25 basis points each time) in the past two years (December 2015, December 2016, March 2017, June 2017). Bond markets are assigning an 80 percent probability to another 25 basis point hike this December. Meanwhile, the Fed has blamed transitory factors (such as one-off mobile price plan changes) for the failure of inflation to pick up.


Figure 4. US unemployment, output gap, core inflation and the policy rate.

Things are now coming to a head. Core inflation slowed to just 1.3 percent year-on-year in August despite further tightening in the labor market and the vanishing of the output gap. Minutes of the FOMC’s September meeting show policymakers troubled by the failure of inflation to appear as anticipated. From the minutes:

Many participants expressed concern that the low inflation readings this year might reflect not only transitory factors, but also the influence of developments that could prove more persistent, and it was noted that some patience in removing policy accommodation while assessing trends in inflation was warranted.

Meanwhile, Daniel Tarullo, who left the Fed’s Board of Governors this year, confessed that the Fed is driving blind. “The substantive point,” he said, “is that we do not, at present, have a theory of inflation dynamics that works sufficiently well to be of use for the business of real-time monetary policymaking.” The Fed should therefore not rely on the Phillips curve, but instead pay more attention to “observables”. That’s just a fancy way of saying that the Fed should wait to see the whites of inflation’s eyes before tightening further.

So the doves are gaining the upper hand at the FOMC. But even more significant developments are underway.

Former Fed Chair Ben Bernanke, the intellectual father of extraordinary monetary policy, has proposed a new monetary policy framework that makes the recent hikes look even more suboptimal.

A central bank could target the rate of inflation or the price level. When the monetary authority targets inflation, it responds to cyclical departures from the target rate by leaning against them in order to push inflation back to target. It does not bother “making up” for lost inflation. With price level targeting, on the other hand, the monetary authority obeys the “makeup principle”: if inflation has run too low, policy remains accommodative even after the target rate is reached, letting inflation overshoot to make up for the lost ground. Under price level targeting, average inflation is likely to be close to the target over the medium term (ie, over the cycle). But there are issues with price level targeting.
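The difference is easy to see in a toy simulation. Here is a minimal sketch (Python; the inflation path is invented, and the exact makeup rate is approximate) contrasting the two regimes after a one-year disinflationary shock:

```python
# Toy contrast between inflation targeting (IT) and price level targeting (PLT).
# One disinflationary shock in year 1, mechanical policy thereafter.

TARGET = 0.02  # 2 percent annual inflation target

# Under IT, bygones are bygones: after a year of zero inflation,
# inflation simply returns to 2 percent. The lost inflation is never made up.
it_inflation = [0.00, TARGET, TARGET, TARGET, TARGET]

# Under PLT, the central bank obeys the makeup principle: it lets inflation
# overshoot (here, roughly 4 percent for a year) until the price LEVEL is
# back on its 2-percent-growth path.
plt_inflation = [0.00, 0.04, TARGET, TARGET, TARGET]

def price_path(inflations, p0=100.0):
    """Cumulate an inflation sequence into a price-level path."""
    path = [p0]
    for pi in inflations:
        path.append(path[-1] * (1 + pi))
    return path

print("Target:", [round(p, 1) for p in price_path([TARGET] * 5)])
print("IT:    ", [round(p, 1) for p in price_path(it_inflation)])   # permanent gap
print("PLT:   ", [round(p, 1) for p in price_path(plt_inflation)])  # gap closed
```

Under IT the price level is permanently about 2 percent below its target path; under PLT the overshoot closes the gap within a year.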

Price level targeting becomes problematic when there is a negative productivity shock that pushes up inflation. For then the monetary authority is committed (as it must be to maintain credibility) to punishing the economy well after inflation is under control. In the extreme, periods of high inflation would call for prolonged lowflation or even outright deflation to get back to the target price level. That’s close to being unacceptable.

Bernanke’s proposal instead requires the monetary authority to practice inflation targeting under normal conditions but shift to price level targeting once the economy hits the zero lower bound. In effect, the monetary authority would commit to “lower for longer” for much longer. It would run the economy too hot for as long as it takes for the actual price level to close the gap with the target price level. It thus solves the problem of policy asymmetry at the zero lower bound by essentially borrowing policy room from the future.


Figure 5. Price level, inflation and the policy rate.

This is the most accommodative monetary policy framework that has ever been proposed. It goes well beyond waiting for the whites of inflation’s eyes to begin tightening. Figure 5 shows the price level, inflation rate and the policy rate since we hit the zero lower bound. Since inflation has run persistently below target for essentially the entire period, the actual price level has continued to diverge from the proposed target level (which mechanically increases at the rate of 2 percent per annum). Under his proposal, not only would the Fed not have hiked to date, it would commit to not hiking for many, many years to come. Given that system-wide overcapacity is likely to persist for a long time, and assuming that global slack continues to drive US inflation, the nominal policy rate under his proposal would remain stuck at the zero lower bound through to 2030!

This amounts to a thinly-veiled but nonetheless extraordinarily powerful critique of Fed policy. Bernanke is in effect saying that the Fed should never have lifted off in anticipation of inflation. Instead, it should have promised not to lift off even after observing above-target inflation for a considerable amount of time.

Lael Brainard, perhaps the sharpest knife in the Fed drawer and not coincidentally the Policy Tensor’s favorite central banker, favorably reviewed Bernanke’s proposal. Her remarks are worth reading in full. Both Bernanke and Brainard made their remarks at the Peterson Institute, which has conveniently put the videos online. On that site, you can also find presentations by Summers and Blanchard of their joint paper on stabilization policy under secular stagnation—no doubt an important contribution.

So a major rethink is well underway among central bankers. And not a moment too soon. Reviewing these developments together with the markets makes it clear that the bond market is too confident of a December hike. That should get priced out soon.

 


Krugman is Astonishingly Ignorant About the Global Financial Crisis


Margin Call (2011)

The Policy Tensor has long admired Krugman’s crusade against zombie ideas in economics perpetuated by the political economy of K Street. But his latest demonstrates a remarkable ignorance of the causal mechanism behind the Global Financial Crisis (GFC):

True, nobody saw the crisis coming. But that wasn’t because orthodoxy had no room for such a thing – on the contrary, panics and bank runs are an old topic, discussed in every principles book. The reason nobody saw this coming was an empirical failure – few realized that the rise of shadow banking had done an end run around Depression-era bank safeguards.

The point was that only the dimmest of free-market ideologues reacted with utter bewilderment. The rest of us slapped our foreheads and said, “Diamond-Dybvig! How stupid of me! Diamond-Dybvig!”

Sorry, Paul. But the notion that Diamond-Dybvig explains the GFC is fundamentally wrong.

The Diamond-Dybvig model is a classic model of bank runs. Banks accept deposits that the depositors can demand at any time. They lend some of this money to firms and individuals who promise to pay back the loans over time (but not on demand). In the normal course of things, only a few depositors want their money at any given time. Given that the bank will be able to pay on demand, there is no particular reason for depositors to worry about getting their money back. That’s the benign equilibrium. The problem is that there are multiple equilibria. In particular, there is a run equilibrium wherein, for whatever reason, too many people want their money back at the same time. Given that the bank may not be able to pay those at the back of the line, it makes sense for every depositor to try to get their money back before the bank runs out of money. The basic model of banking is therefore inherently exposed to such runs, which is why we need deposit insurance.
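A minimal numerical sketch of the two equilibria (all parameter values are invented for illustration; the model itself is Diamond and Dybvig’s):

```python
# Diamond-Dybvig in miniature. A bank takes 100 deposits of 1 each, keeps some
# cash, and puts the rest in an illiquid loan that pays R > 1 at maturity but
# salvages only L < 1 if liquidated early. Parameters are illustrative.
DEPOSITS, CASH, R, L = 100.0, 20.0, 1.5, 0.5
LOANS = DEPOSITS - CASH

def can_pay_on_demand(fraction_withdrawing):
    """Can the bank honor withdrawals today if this fraction of depositors runs?"""
    demanded = fraction_withdrawing * DEPOSITS
    available_today = CASH + LOANS * L  # cash plus firesale value of the loans
    return demanded <= available_today

# Benign equilibrium: 10% withdraw, easily met; patient depositors do better
# by waiting, so no one has a reason to run.
print(can_pay_on_demand(0.10))                        # True
print((CASH - 10 + LOANS * R) / (0.90 * DEPOSITS))    # 1.44 > 1 per deposit

# Run equilibrium: if everyone expects everyone else to run, the bank cannot
# pay on demand (100 > 20 + 80*0.5 = 60), so running is individually rational.
print(can_pay_on_demand(1.00))                        # False: self-fulfilling run
```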

That’s simply not what happened in the GFC. The GFC was a systemic banking crisis, not a classic bank run. The Diamond-Dybvig model features a run on a single bank. Within that model, there is no mechanism for the run to spread to other banks, much less to engulf the whole system. Moreover, with few exceptions, depositors did not run on the banks during the GFC. Indeed, the crisis was centered not on traditional deposit-funded banks but on wholesale-funded investment banks. Furthermore, the banks at the center of the crisis did not originate loans and hold them on their own balance sheets (the traditional ‘originate-to-hold’ model). Instead, they sold these assets to investors willing to bear the risk (the modern ‘originate-to-distribute’ model).

One may claim that funding runs are similar enough to Diamond-Dybvig type depositor runs for the model to apply. One would be wrong. This is because wholesale funding is secured by collateral. If the borrower cannot pay, the lender can get her money back by selling the collateral. The secured lender therefore does not face the same incentives as a Diamond-Dybvig type depositor in an uninsured bank, who has no choice but to line up to get her money back if others are doing the same.

An entirely different mechanism generates runs in secured funding markets. Here one must distinguish between bilateral repo and tri-party repo. In the former, cash investors must take counterparty risk into consideration as well as the quality of the collateral, since it can become problematic to secure the promised collateral in the event of a messy bankruptcy. In tri-party repo markets, on the other hand, cash investors can be sure of getting their hands on the collateral (since it is in the administrative custody of the third party and placed in a sort of escrow account that automatically delivers the collateral even if the third party gets in trouble), and the only thing that matters is the perceived quality of the collateral. More precisely, what matters is the certainty with which one can exchange the collateral for cash at par. So runs can obtain in bilateral repo markets if either borrowers or collateral are perceived as dicey. In tri-party repo markets, on the other hand, runs can only obtain if asset classes used as collateral are no longer perceived as safe.

A run in the wholesale funding market generates firesales of assets, which further intensify the run in the funding market, and so on. This is exactly the vicious doom-loop we observed in 2008. The reason why it is important to distinguish between the bilateral and tri-party repo markets is that it allows us to identify the causal mechanism behind the GFC.
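A toy version of that doom-loop, in the spirit of margin-spiral models (all numbers are invented; only the direction of the feedback matters):

```python
# Stylized funding-firesale spiral. A dealer funds assets in the repo market;
# equity must cover the haircut. Rising haircuts force asset sales; sales
# depress the price; the price decline eats equity and tightens the
# constraint further.
equity, price, haircut = 5.0, 1.00, 0.05
assets = equity / (haircut * price)   # initial balance sheet: 100 units
PRICE_IMPACT = 0.0005                 # price decline per unit of assets firesold

for round_ in range(1, 6):
    haircut += 0.01                           # nervous lenders raise haircuts
    sustainable = equity / (haircut * price)  # what the equity can now carry
    sales = max(assets - sustainable, 0.0)    # forced sales to meet the haircut
    assets -= sales
    new_price = price - PRICE_IMPACT * sales  # firesales depress the price...
    equity -= assets * (price - new_price)    # ...losses eat the dealer's equity
    price = new_price
    print(f"round {round_}: haircut={haircut:.2f} assets={assets:6.1f} "
          f"price={price:.4f} equity={equity:.2f}")
```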

During the GFC, there was a generalized run in bilateral repo markets as perceived counterparty risk spiked. Basically, cash investors lost confidence in dealers (ie, Wall Street banks) and dealers lost confidence in each other as credit defaults mounted. No one could be sure who was hiding what on and off their balance sheets. In tri-party repo markets, on the other hand, runs were confined to private-label residential mortgage-backed securities (RMBS). Even during the brutal week of the Lehman bankruptcy, the core of the flywheel continued to spin as cash investors lent trillions of dollars to dealers overnight against T-bills (obligations of the US Treasury) and agency RMBS (obligations of Fannie and Freddie assumed to be backed by the US government).

What this means is that the instability of the wholesale funding flywheel was due to the introduction of private-label RMBS as collateral. In other words, the GFC would not have acquired the virulence that it did had collateral remained restricted to state-backed assets. And the reason why private-label RMBS had to be used as collateral was the shortage of public safe assets. For it was the demand for safe assets emanating from the wholesale funding market that prompted the dealers to manufacture private-label RMBS at such a large scale, using residential mortgages as raw material. That’s what caused the subprime lending boom. It was not the Fed’s low rates; it was not the result of declining standards at Fannie and Freddie; it was not a classic credit boom à la Reinhart and Rogoff; nor was it due to the global savings glut à la Bernanke. And the GFC itself was certainly not a classic Diamond-Dybvig type bank run, as Krugman would have it. No, the financial boom was instead the great sucking sound of the wholesale funding market, whose denouement was, fittingly, the seizure of this very market and of the Western financial system that Wall Street had built around it.


The Empirical Evidence for the Brenner Hypothesis

In the inaugural editorial for Catalyst, Robert Brenner notes that

Since 1973, the economies of the advanced capitalist countries have performed ever more poorly. The growth of GDP, investment, productivity, employment, real wages, and real consumption have all experienced a historic deceleration, which has proceeded without interruption, decade by decade, business cycle by business cycle, to the present day.

The source of this loss of dynamism has been the deep fall, and failure to recover, of the economy-wide rate of profit, a process that took place mainly from the late 1960s to the early 1980s and derived largely from the relentless buildup of overcapacity across the global manufacturing sector. 

Brenner identified the ultimate cause of the loss of Western economic dynamism as the relentless buildup of overcapacity across global manufacturing. It is hard to find numbers on capacity utilization rates at the global level. But we can instrument it with the US manufacturing capacity utilization rate, since utilization is unlikely to be high in the world’s preeminent advanced manufacturing power if there is significant global slack. In other words, we can proxy global overcapacity by US capacity utilization.


Figure 1: Capacity utilization rate of US manufacturing. (Source: Haver Analytics.)

Figure 1 displays the capacity utilization rate of US manufacturing. We see that there has been a secular rise in overcapacity since the 1960s. However, Brenner is wrong that this buildup of overcapacity “took place mainly from the late 1960s to the early 1980s.” To the contrary, after a brief respite in the 1990s, the buildup gained renewed strength in the 2000s and the 2010s. In fact, peak capacity utilization rates in the 2000s failed to match even the lows reached in the 1990s, and the post-Great Recession recovery is even more tepid.

Predictably, ever-intensifying overcapacity in manufacturing has generated a secular decline in the profit rate of the sector. Figure 2 displays the profit rate in US manufacturing. For this sector, Brenner’s right. Profit rates fell relentlessly through to 1985 and failed to recover thereafter. The largest declines occurred in the mid-1950s, late-1960s, and early-1980s. We know that the last was the consequence of the super-strong dollar that attended Volcker’s sky-high interest rates. The first was perhaps the understandable consequence of the recovery of European manufacturing firms, which brought down the highly elevated rates of profit of US firms, who briefly had the global market to themselves in the late-1940s and early-1950s. The second, the sharp decline of the late-1960s, can be traced to the Japanese onslaught, as suggested by the weight of cross-sectional evidence: the decline in profit rates was concentrated in the six sectors most exposed to Japanese competition.


Figure 2. Profit rate of US manufacturing defined as the ratio of profits after capital adjustment to net stock of fixed assets. The break reflects the fact that BEA has three different series for 1948-1987, 1988-2000, and 1998-2017. I display all three but don’t connect them because they are strictly speaking not comparable.

But manufacturing now accounts for only 12 percent of value-added in the United States. Is it true, as Brenner claims, that there was a “deep fall, and failure to recover, of the economy-wide rate of profit”? No, Brenner’s wrong. Figure 3 displays the rate of profit of all US corporations in 1948-2017.


Figure 3. US corporate profit rate. (Source: BEA)

Contrary to Brenner’s claim, profit rates fell straight through 1966-1982, but then recovered much of the lost ground in the last three business cycles. How were US corporations able to restore profitability despite persistent industrial overcapacity? Simple. They confiscated the bulk of the growth in labor productivity. Figure 4 displays output per hour in US nonfinancial corporations and real labor compensation per hour.


Figure 4. Labor compensation and labor productivity in the US corporate sector.

A structural break in the productivity and labor compensation series can be detected in the 1970s. In 1947-1975, output per hour grew 125 percent, while labor compensation per hour grew 90 percent. Since 1975, output per hour has doubled, while labor compensation per hour has grown by only 30 percent. Equivalently, the annual rate of productivity growth fell from 2.7 percent in 1947-1975 to 1.8 percent in 1975-2016. Meanwhile, the average growth rate of real wages fell from a robust 2.2 percent in the first period to a measly 0.7 percent in the second.
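The conversion between cumulative and annualized growth is simple compounding. A quick check of these figures (period lengths are approximate, so the annual rates match the text only roughly):

```python
# Convert cumulative growth over a period into a compounded annual rate.
def annualized(cumulative_growth, years):
    return (1 + cumulative_growth) ** (1 / years) - 1

# 1947-1975: output per hour +125%, real compensation per hour +90%.
print(f"{annualized(1.25, 30):.1%}")   # ~2.7% annual productivity growth
print(f"{annualized(0.90, 30):.1%}")   # ~2.2% annual real wage growth

# 1975-2016: output per hour +100%, real compensation per hour +30%.
print(f"{annualized(1.00, 41):.1%}")   # ~1.7-1.8% annual productivity growth
print(f"{annualized(0.30, 41):.1%}")   # ~0.6-0.7% annual real wage growth

# Labor's share of the cumulative productivity pie in each era:
print(f"{0.90 / 1.25:.0%}")  # 72% in the postwar New Deal era
print(f"{0.30 / 1.00:.0%}")  # 30% in the neoliberal era
```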

All in all, in the neoliberal era, labor secured only 30 percent of the gains in productivity, whereas in the postwar New Deal era, labor secured fully 72 percent of the pie. This is the secret sauce of restored corporate profitability despite persistent overcapacity. This is a major achievement of the neoliberal counterrevolution, which must be understood as a political movement aimed at the restoration of profit rates and upper-class wealth and power.

This, of course, doesn’t invalidate Brenner’s key insight that the loss of Western dynamism and the attendant disappearance of broad-based growth must be traced to persistent systemwide overcapacity. For while the neoliberal assault may have restored the fortunes of US corporations and the upper classes, the absence of dynamism is all too real. As I noted earlier today on Facebook, in the context of the decline of Europe’s social democrats,

What the social democrats have failed to do is articulate a credible solution to the principal problem of contemporary Western political economy: the end of broad-based growth. It is this failure hidden in plain sight that has wrecked the center and empowered political forces opposed to any further pursuit of the open society on both the left and the right.

It’s also amply clear why the center-left has everywhere paid the price. For the center-left abandoned the pursuit of protection against market discipline when it folded in the face of the neoliberal ascendancy. Having surrendered that pursuit, the center-left failed to articulate how its vision differed from that of the center-right, which never championed protection against market discipline anyway and was therefore always the natural claimant of the neoliberal center.

But the fundamental problem goes deeper. For it is not at all clear that a solution to the core problem in fact exists. None certainly exists within the parameters of contemporary governmentality; that is to say, the neoliberal toolkit. And no credible solution outside the toolkit has so far been proposed. Even in France “the gist of Macron’s labor market reform” is to “push back the state and facilitate growth, and aim to reduce unemployment by making it easier to hire and fire people.” Good luck with that.

The contemporary impasse is a historical echo of the 1970s when Keynesianism—the preferred technique of macroeconomic management in the era of social market democracy—failed in the face of the stagflation crisis (which was itself blamed on excessive entitlement claims generated by social market democracy). That impasse led to the neoliberal counterrevolution. How and when we will come out of this one is the most important question of contemporary Western political economy.


Bonus charts. 


Figure 5. Capital’s portion of the productivity pie.

The bottom chart in Figure 5 shows capital’s portion of the “pie” of labor productivity growth (roughly growth in output per hour). It is a function of both labor productivity growth per se (dotted line in top chart) and the bargaining solution between labor and capital (who gets how much of that pie). The solid line in the top chart is labor’s portion of the pie (roughly growth in real wages per hour). [See Technical Notes at the end.]

I think the following three observations are called for. (1) There are brief respites in the early-1980s and the mid-1990s, where productivity growth is decent if still modest and wage growth is suppressed. But there is a veritable bonanza in the 2000s: not only is the respite from Brenner’s crisis strongest in the neoliberal era (ie, productivity growth briefly approaches postwar levels), but wage growth is nonexistent. (2) Bites taken by capital are somewhat countercyclical. They fall to their lows in booms (late-1960s, late-1980s, late-1990s, mid-2000s), and are high early in recoveries (early-1960s, early-1980s, mid-1990s, early-2000s). (3) Capital seems to be suffering under postcrisis stagnation even more than labor. It has been downhill since 2010. And 2015 was the worst year for capital since 1957.


Figure 6. Productivity drives both wages and unemployment.

Figure 6 marshals the empirical evidence for the causal diagram “wages ← productivity → unemployment”. That is, productivity drives both wages and unemployment. The interpretation is that times are good when the economy is experiencing positive productivity shocks. Growth in the pie allows wages to rise. And since output grows with productivity, unemployment ought to fall. The monocausal model has clear implications for the conditional correlations. Specifically, if negative unemployment shocks caused wage growth independent of productivity, then the bottom-left panel would have a distinct pattern and a high R^2. On the other hand, if positive productivity shocks cause wage growth independent of the unemployment rate, then the bottom-right panel would have the observed pattern. In short, the empirical correlations are what you would expect given the monocausal model that productivity drives both unemployment and wages.

Here’s another version of Figure 5.


Figure 7. American pie.

Technical Notes: 

Instead of looking at annual growth rates, which are very noisy for both output per hour and labor compensation per hour, we stochastically detrend the two series by deducting from them their 4-year trailing averages. These detrended series are what are referred to as “shocks” in the charts above. They can be interpreted roughly as smoothed growth rates. When we write “X orthogonal to Y”, it is shorthand for contemporaneous variation in X orthogonal to variation in Y in the usual sense; ie, we project X onto Y and take the OLS residuals.
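For concreteness, a minimal sketch of that procedure (the function names are mine; I assume annual series):

```python
import numpy as np

def stochastic_detrend(series, window=4):
    """Deduct the trailing `window`-year average from an annual series.
    The result is the 'shock' series (the first `window` observations drop out)."""
    x = np.asarray(series, dtype=float)
    trailing = np.array([x[t - window:t].mean() for t in range(window, len(x))])
    return x[window:] - trailing

def orthogonalize(x, y):
    """Contemporaneous variation in x orthogonal to y: project x onto y
    (with an intercept) and return the OLS residuals."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    Y = np.column_stack([np.ones_like(y), y])
    coef, *_ = np.linalg.lstsq(Y, x, rcond=None)
    return x - Y @ coef
```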


Sexual Competition, Mimetic Desire, and Neoliberal Market Society

In neoliberal market society, everyone faces the discipline of the market. In order to survive—ie in order to obtain the means to pay for the family’s food, rent, clothing, et cetera—ordinary people have to compete in the labor market. Investors have to compete with each other. And, of course, firms compete against their rivals. Indeed, even a pure monopolist has to worry about entry. A lot of ink has been spilt on the precariat workforce. But insecurity is the calling card of the neoliberal market society. Moreover, the closer we get to the neoliberal utopia, the fairer the verdict of the market. For the losers, there is no escape from the harsh sentence. So far we are still with Polanyi and Hoffer. But we must go further.

Houellebecq would suggest that we pay attention to the onset of ‘savage sexual competition’ as a result of the sexual revolution. In his account, the West made a wager with history that maximizing freedom would maximize happiness; and lost. (In Elementary Particles, Houellebecq articulates this pathos quite well but fails to surmount the last chapter problem. He picks up the same subject again in Soumission and this time gets his pathetic characters to ‘slouch towards Mecca‘ to escape their predicament.) For the neoliberal subject the rigors of sexual competition are, if anything, harsher than the rigors of the labor market. The liberalization of the sexual economy from the 1960s onwards must therefore be seen as an important component of the neoliberal condition. We need to investigate the market discipline faced by the neoliberal subject in the sexual marketplace. But we must go further still.

If we want to ground the analysis of neoliberal market society in the lived experience of the neoliberal subject, we must reckon with mimetic desire à la Girard. For we are not only enslaved by an obligation to enjoy. The pleasure principle is merely the beginning. What a neoliberal subject wants is not some independent draw from a fixed and exogenously given distribution. Nor are her desires merely correlated with those of others. She wants what others want because they want it. So she wants the iPhone X because, admit it, it’s cool. She knows what they will all think when they see the device in her hand. But that’s simply a case of ‘keeping up with the Joneses’—which is what we tend to find across consumer markets; a largely benign case of mimetic desire. Things get much more zero-sum when only a few can win and others must lose. The higher the college’s ranking, the more attractive the potential partner, the more prestigious the job etc, the more cut-throat it gets. Because desire is largely mimetic, neoliberal subjects are always found locked in mimetic rivalry—with their friends in high school, at work, or at a bar; and anonymously, with an amorphous mass of peers on job sites, dating platforms and so on and so forth.

In Girard’s scheme, mimetic rivalry engenders violence in archaic society. The instability intensifies until everyone gangs up against a single victim; a scapegoat, whose murder at the hands of the mob restores consensus and reestablishes the social peace. The ‘sacrificial crisis’ and the act of collective violence are not mythical, but real; a common solution arrived at by all archaic societies and ritualized as human sacrifice. This is the dark heart of our all-too-human past.

Neoliberal market society unleashes mimetic rivalry on an unprecedented scale. But it does so in a contained manner. Moreover, the modern state enjoys a near-absolute monopoly on violence. The violence engendered by the intensification of mimetic rivalry is therefore projected onto other domains—onto the political plane, onto video games, onto the screen, onto 4chan, and, perhaps most tragically, onto the neoliberal self. Perhaps that is what lies behind the mortality curves studied by Case and Deaton.

If we want to interrogate the neoliberal condition, we must go beyond the whipsaw of global macro forces; beyond market discipline as currently understood in terms of the commodification of labor and capital. We need to start thinking about the market discipline faced by the neoliberal subject in the sexual marketplace as well. More generally, we need to take a broader view of the ever-intensifying market-like competition in neoliberal market society, eg for college admissions. We must get a handle on the attendant intensification of mimetic rivalry; the trauma thereby visited upon the neoliberal subject; and the socio-political consequences of that trauma.

 

 


Something very peculiar is going on at the top of the US wealth distribution

Inspired by the FT piece on the world’s überwealthy, I decided to explore the very top of the US wealth distribution. Figure 1 displays the average net worths in constant 2016 dollars of the top 1 percent, 0.1 percent, 0.01 percent, and 0.001 percent of the wealthiest adults in the United States over the past 100 years. (All data that appears in this post is from here.)


Figure 1. Average net worths at the top of the food chain.

We see that the neoliberal wealth boom is simply unprecedented. After fluctuating around historical levels until the 1980s, the fortunes of the richest Americans took off like a rocket. The rich have never been quite as rich as they are today.

Figure 2 zooms into the past twenty years. Within a very broad upward march, we can see dramatic fluctuations with the asset price booms of the late-1990s and the mid-2000s. But notice how the fortunes of the really rich followed a different trajectory from those of the merely rich, especially over the past decade. Why?


Figure 2. Average net worth of the wealthiest over the past twenty years.

Figure 3 zooms in even further to 2004-2014 and allows us to examine the anomaly up close. While the total net worth of the top 1 percent and the 0.1 percent contracted sharply in 2009, that of the top 0.01 percent and the top 0.001 percent suffered only a mild correction. Concretely, while the former fell by 17 and 16 percent respectively in 2007-2009, the latter fell by only 3 and 4 percent. Why? Conversely, the former grew by 6 and 4 percent in 2010-2011, while the latter contracted by 17 and 13 percent. Why??


Figure 3. Average net worths of the top echelons.

Perhaps thresholds contain some information that may help us figure out what’s going on here. Figure 4 displays the threshold net wealth required for admission into these rarefied echelons since the mid-1960s.


Figure 4. Threshold net worths for the upper echelons.

We see that fluctuations in the top 1 percent, 0.1 percent and 0.01 percent thresholds are similar: rapid rises in the late-90s and mid-2000s booms, and sharp corrections during the recessions. But quite strikingly, even the bottom rung of the top 0.001 percent seems to have avoided a comparable loss in 2008-2009. Again, we zoom in to see what’s going on over the past decade or so. Figure 5 below displays the threshold net worths for the upper echelons over 2004-2014.


Figure 5. Threshold net worths for the upper echelons.

The evidence from the bottom rungs of the upper echelons is even more striking. It is clear that the very richest of individuals were able to protect themselves much better against global macro fluctuations than those right below them.

The 0.001 percent constitute the extreme top of the wealth distribution reported in the Piketty-Saez-Zucman database. The minimum personal net wealth in this rarefied realm is a staggering $530 million. There are approximately 2,000 adults in the United States who clear that threshold. Their average net worth is $2.1 billion, up an astounding 792 percent since 1985. By comparison, the average net worth of the top 1 percent grew 322 percent, and the average net worth of US households grew 289 percent, over the same period, 1985-2014.

More generally, over the past thirty years, the further up we go, the greater the gain. Table 1 displays the compounded rate of growth of average net worths in the upper echelons. The differentials may look small until you recall the magic of compounding: if your wealth grows at 7.2 percent instead of 4 percent, you end up 2.5 times richer after thirty years. See the third column of Table 1 for exact figures.

Table 1: Accumulation rates.

Echelon | Annual growth in net worth (1985-2014, compounded) | $1,000 invested for 30 years accumulates to…
Top 1 percent | 4.03% | $3,272
Top 0.1 percent | 5.04% | $4,372
Top 0.01 percent | 6.14% | $5,975
Top 0.001 percent | 7.24% | $8,141
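The third column is mechanical compounding; to reproduce it (rates from Table 1; small discrepancies reflect rounding):

```python
# $1,000 compounding for 30 years at each echelon's growth rate (Table 1).
rates = [("Top 1 percent", 0.0403), ("Top 0.1 percent", 0.0504),
         ("Top 0.01 percent", 0.0614), ("Top 0.001 percent", 0.0724)]
for label, rate in rates:
    print(f"{label:>17}: ${1000 * (1 + rate) ** 30:,.0f}")
# The top rate leaves you 8,141/3,272, ie roughly 2.5 times richer.
```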

Piketty has shown in Capital that larger fortunes grow at higher rates. The Piketty-Saez-Zucman database corroborates that finding. One reason why that holds is that the truly wealthy have access to lucrative investment strategies unavailable to lesser investors. Another is that business equity accounts for a much greater portion of their net worth than that of lesser mortals. Yet another may be that they have easier access to leverage (which mechanically increases return on equity).

But all this still doesn’t explain how the truly rich enjoy greater protection against global macro fluctuations. Surely, they didn’t all short US housing in 2007? Perhaps the truly wealthy avoided the 2008-2009 bloodbath simply because housing is an insignificant portion of their portfolio?

It could also be that the billionaires who populate the very top of the food chain own serious equity in superstar firms, which continued to perform relatively well through the financial crisis and the recession. Is that what explains the anomaly?

However this anomaly is resolved, one thing is clear. We should be very careful in extrapolating what we see in, say, the Forbes 400 to the rest of the rich. The oligarchs are in a class all by themselves.


Bonus round. In 2014Q4-2017Q1, US household wealth grew by 12.9 percent according to the Federal Reserve. If net personal wealth grew at the same rate, it would now be around $79 trillion. And if the shares of the upper echelons remain unchanged, the estimated aggregate net worths of the top 1 percent, 0.1 percent, 0.01 percent and 0.001 percent would be $29, $15, $8, and $4 trillion respectively. That would place the aggregate net worth of the 1 percent at roughly the same level as the aggregate personal wealth of all US residents as late as 1990. See Figure 6 below.
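The arithmetic behind those estimates is straightforward (2014 aggregates and shares from Table 2 below):

```python
# Extrapolate 2014 net personal wealth to 2017Q1 using the Fed's growth figure.
npw_2014 = 70.0    # net personal wealth, trillions of 2016 dollars (Table 2)
growth = 0.129     # household wealth growth, 2014Q4-2017Q1 (Federal Reserve)
npw_2017 = npw_2014 * (1 + growth)
print(f"Net personal wealth, 2017Q1: ~${npw_2017:.0f} trillion")  # ~79

# Apply the unchanged 2014 shares to the extrapolated total.
shares = [("Top 1 percent", 0.37), ("Top 0.1 percent", 0.19),
          ("Top 0.01 percent", 0.10), ("Top 0.001 percent", 0.05)]
for label, share in shares:
    print(f"{label:>17}: ~${npw_2017 * share:.0f} trillion")
```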

Table 2: Aggregate wealth and wealth shares of the upper echelons (2014).

Echelon | Aggregate net worth (trillions of 2016 dollars) | Share
Top 1 percent | 26 | 37%
Top 0.1 percent | 13 | 19%
Top 0.01 percent | 7 | 10%
Top 0.001 percent | 4 | 5%
Net personal wealth | 70 | 100%

Figure 6. Aggregate personal wealth of US residents. 


Are Shocks to Housing Priced in the Cross-Section of Stock Returns?

In the previous post I argued that the risk premium on property is due to the fact that the marginal investor in housing is your average homeowner who finds it extraordinarily hard to diversify away the risk posed by her single-family home to her balance sheet. If I am right, this means that housing wealth is a systematic risk factor that ought to be priced in the cross-section of expected stock excess returns (ie, returns in excess of the risk-free rate).

Assume that the marginal investor in the stock market is your average homeowner. Since it is so hard for her to diversify away the risk posed by fluctuations in property values, she should value stocks that do well when property markets tank. Conversely, stocks whose returns covary with returns on property should be less valuable to her. Given our assumption that your average homeowner is the marginal investor in equities, expected returns on stocks whose returns covary strongly with property returns should be higher than expected returns on stocks whose returns covary weakly (or better yet, negatively) with property returns. This is what it means for shocks to housing to be priced in the cross-section of expected stock returns.

We want our risk factor to capture broad-based fluctuations in housing wealth. Ideally, we would use a quarterly time-series for total returns (including both rent and capital gains) on housing wealth owned directly by US households. I am unaware of the existence of such a dataset—if you know where I can find the data, please get in touch.

We can also instrument fluctuations in housing wealth using a property price index. Here we use the US property price index reported by the Bank for International Settlements. For return data we use 250 test assets from Kenneth French’s website. (The same dataset I used in my paper, “The Risk Premium on Balance Sheet Capacity.”)

We’re now going to jump straight into the results. For our econometric strategy please see the appendix at the bottom.

Figure 1 displays a scatterplot of the cross-section of expected stock returns. Along the X axis we have the betas (the sensitivity of the portfolio’s return to property returns) for our 250 test assets, and along the Y axis we have the mean excess returns of the portfolios over the period 1975-2016.


Figure 1. Housing and the cross-section of expected stock excess returns.

Guys, this is not bad at all. Our single factor explains 20 percent of the cross-sectional variation in expected excess returns. By comparison, the celebrated Capital Asset Pricing Model, for instance, is a complete washout.


Figure 2. The CAPM fails catastrophically in explaining the cross-section of expected excess returns.

It is very hard for single factor models to exhibit such performance. Table 1 displays the results from the second pass. We see that the mean absolute pricing error is large because the zero-beta rate does not vanish. Indeed, at 1.8 percent per quarter it is simply not credible. But the risk premium on property returns is non-trivial and significant at the 5 percent level.

Table 1. Property returns and the cross-section

Parameter | Estimate | Std Error | p-Value
Zero-beta rate | 0.018 | 0.006 | 0.002
Property return | 0.007 | 0.004 | 0.049
R^2 | 0.195 | |
Adj-R^2 | 0.192 | |
MAPE | 0.022 | |

Actually, I have a lot of professional stake in the failure of this model. I have argued that stock returns are explained by fluctuations in the risk-bearing capacity of the market-based financial intermediary sector. In other words, the central thrust of my work is that we ought to pay less attention to the small-fry and considerably greater attention to the risk appetite of the big fish, for that is what drives market-wide risk appetite. Fortunately for my thesis, property shocks do well, but not nearly as well as balance sheet capacity.

Figure 3 displays yet another scatterplot for the cross-section. On the X axis we have the factor betas (the sensitivities of the portfolios to balance sheet capacity) and on the Y axis we have, as usual, mean excess returns over 1975-2016.


Figure 3. Balance sheet capacity explains the cross-section of stock returns.

In Table 2 and Figure 3 we’re only looking at a single-factor model with balance sheet capacity as the sole systematic risk factor. That’s a parsimonious theory that says: exposure to fluctuations in the risk-bearing capacity of broker-dealers explains the cross-section of asset returns. The empirical evidence is pretty compelling that this is the right theory. We see that balance sheet capacity singlehandedly explains 44 percent of the cross-sectional variation in expected stock excess returns. What is also manifest is the vanishing of the zero-beta rate and the attendant vanishing of the mean absolute pricing error. Other single factor models cannot even dream of competing with balance sheet capacity in terms of pricing error. Indeed, I have shown in my paper that even the pricing errors of the standard multifactor benchmarks, Fama and French’s 3-factor model and Carhart’s 4-factor model, are significantly bigger than our single factor model’s 48 basis points. We can thus have good confidence that the evidence does not reject our parsimonious asset pricing model.

Table 2. The Primacy of Balance Sheet Capacity

Parameter | Estimate | Std Error | p-Value
Zero-beta rate | 0.002 | 0.010 | 0.440
Balance sheet capacity | 0.095 | 0.038 | 0.007
R^2 | 0.442 | |
Adj-R^2 | 0.440 | |
MAPE | 0.005 | |

I know what you are thinking: if these things are priced in, there must be a way to make money off them. How do I get some of that juicy risk premium? Aren’t these non-traded factors? Yes, they are. But you can still harvest the risk premium on a non-traded factor, eg by constructing a factor-mimicking portfolio. Briefly, you project your factor onto a set of traded portfolios and use the coefficients as weights to construct a portfolio that tracks your non-traded factor.
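A sketch of that projection (assuming the factor series and a panel of traded excess returns are in hand; the normalization is one reasonable choice among several):

```python
import numpy as np

def factor_mimicking_weights(factor, base_returns):
    """Project a non-traded factor onto traded excess returns.
    factor: (T,) series; base_returns: (T, K) excess returns of K base portfolios.
    Returns portfolio weights (K,) whose return tracks the factor."""
    T = len(factor)
    X = np.column_stack([np.ones(T), base_returns])
    coef, *_ = np.linalg.lstsq(X, factor, rcond=None)
    w = coef[1:]                  # drop the intercept
    return w / np.abs(w).sum()    # normalize gross exposure to one

# The tracking portfolio's return series is then: base_returns @ weights.
```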

Figure 4 displays the risk-adjusted performance of portfolios that track benchmark risk factors and the two risk factors discussed in this essay. We report Sharpe ratios (the ratio of a portfolio’s mean excess return to the volatility of the portfolio return) rescaled by the volatility of the market portfolio for ease of interpretation.
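Concretely, on one reading of that rescaling (a sketch; the inputs are placeholders):

```python
import numpy as np

def rescaled_sharpe(excess_returns, market_excess_returns):
    """Sharpe ratio rescaled by market volatility: the mean excess return the
    strategy would earn if levered up or down to the market's volatility."""
    r = np.asarray(excess_returns, dtype=float)
    m = np.asarray(market_excess_returns, dtype=float)
    return r.mean() / r.std(ddof=1) * m.std(ddof=1)
```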


Figure 4. Risk-adjusted performance of traded portfolios for size, market, value, momentum, property, and balance sheet capacity.

The results are consistent with our previous findings. The stock portfolio that tracks property outperforms standard benchmarks convincingly. In turn, the portfolio that tracks balance sheet capacity outperforms the portfolio that tracks property. But let’s be very clear about what Figure 4 does not say. There is no free lunch. More precisely, there is no risk-free arbitrage.

The existence of these two risk premiums implies instead that there is risk arbitrage. That is, you can obtain superior risk-adjusted returns relative to the market portfolio by systematically harvesting these risk premiums. The two premiums exist because of structural features. Specifically, the property premium exists because non-rich homeowners must be compensated for their exposure to housing; while the risk premium on balance sheet capacity exists because of structural features of the market-based financial intermediary sector—features that I explain in detail in the introduction of my paper. Since we can expect these structural features to persist, we should not expect these risk premiums to vanish (or perhaps even attenuate much) upon discovery.


Appendix. Cross-Sectional Asset Pricing

We can check whether any given risk factor is priced in the cross-section of excess returns using standard 2-pass regressions, where we first project excess returns \left(R_{i,t}\right) onto the risk factor \left(f_t\right) in the time series to obtain factor betas \left(\beta_i\right) for assets i=1,\dots,N,

R_{i,t}=\alpha_i+\beta_i f_{t}+\varepsilon_{i,t}, \qquad t=1,\dots,T,

and then project mean excess returns \left(\bar R_i\right) onto the betas in the cross-section to obtain the price of risk \lambda,

\bar R_{i}=\gamma^{0}+\lambda\hat\beta_{i}+e_{i}, \qquad i=1,\dots,N.

The scalar \gamma^{0} is called the zero-beta rate. If there is no arbitrage, the zero-beta rate must vanish. If the zero-beta rate is statistically and economically different from zero, then that is a failure of the model. That’s why the mean absolute pricing error is a better metric for the failure of an asset pricing model than adjusted-R^2. It’s given by,

\text{MAPE}:=|\gamma^{0}|+\sum_{i=1}^{N}\omega_{i}|\hat e_{i}|,

where \omega_{i} are weights that we will discuss presently.

If you try this at home, you need to know that (1) ordinary least squares (OLS) is inefficient in the sense that the estimator no longer has the lowest variance among all linear unbiased estimators; (2) OLS standard errors are an order of magnitude too low (and the estimated coefficients are attenuated, though still consistent) because their computation assumes that the betas are known, whereas we are in fact estimating them with considerable noise in the first pass.

The solution to (1) is well-known. Simply use weighted least squares (WLS) where the weights are inversely proportional to the mean squared errors of the time-series regressions,

\omega_i \propto \left[\frac1{T}\sum_{t=1}^{T}\hat\varepsilon^2_{i,t}\right]^{-1},\qquad \sum_{i=1}^{N}\omega_{i}=1.

The solution to (2) is to use errors-in-variable (EIV) corrected standard errors. In our work, we always use WLS for the second pass and report EIV-corrected standard errors wherever appropriate.
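Putting the pieces together, a compact sketch of the two-pass procedure with a WLS second pass (EIV-corrected standard errors are omitted for brevity; all names are mine):

```python
import numpy as np

def two_pass(returns, factor):
    """Two-pass cross-sectional regression.
    returns: (T, N) excess returns; factor: (T,) risk factor.
    Returns the zero-beta rate, the price of risk lambda, and the MAPE."""
    T, N = returns.shape
    X = np.column_stack([np.ones(T), factor])

    # First pass: time-series regressions give each asset's beta and residuals.
    coef, *_ = np.linalg.lstsq(X, returns, rcond=None)   # coef has shape (2, N)
    betas = coef[1]
    resid = returns - X @ coef
    mse = (resid ** 2).mean(axis=0)

    # Second pass: WLS of mean excess returns on the betas, with weights
    # inversely proportional to the first-pass mean squared errors.
    w = (1.0 / mse) / (1.0 / mse).sum()
    Z = np.column_stack([np.ones(N), betas])
    W = np.diag(w)
    mean_ret = returns.mean(axis=0)
    gamma0, lam = np.linalg.solve(Z.T @ W @ Z, Z.T @ W @ mean_ret)

    # Weighted mean absolute pricing error, including the zero-beta rate.
    e = mean_ret - (gamma0 + lam * betas)
    mape = abs(gamma0) + (w * np.abs(e)).sum()
    return gamma0, lam, mape
```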

 

 

 

 


Why Housing Has Outperformed Equities Over the Long Run

Jorda et al. are at it again. Over the past few years, they have constructed the most useful international macrofinancial dataset in existence, extending back to 1870 and covering 16 rich countries. The Policy Tensor has worked with the previous iteration of their dataset. I documented the reemergence of the financial cycle; the empirical law that all financial booms are, without exception, attended by real estate booms; and the fact that medium-term fluctuations not just in real rates (a result originally obtained by Rey) but also in property returns are explained by the consumption-to-wealth ratio (equity returns, on the other hand, are explained by balance sheet capacity, not the consumption-to-wealth ratio).

There are two main findings in Jorda et al. (2017). First, they corroborate Piketty’s empirical law that the rate of return exceeds the growth rate. The gap is persistent and is violated for any length of time only during the world wars. Excluding these two ‘ultra-shortage of safe assets’ periods, the gap has averaged 4 percent per annum. That is definitely enough to relentlessly increase the ratio of wealth to income and drive stratification, as Piketty has shown.


Jorda et al. (2017)

The second finding is truly novel. Jorda et al. (2017) find that housing has dramatically outperformed equities over the long run. This is true not just in the aggregate but also at the country level.


Jorda et al. (2017)

Matt Klein over at Alphaville is truly puzzled by this failure of standard asset pricing theory. As he explains,

The ratio between the average yearly return above the short-term risk free rate and the annual standard deviation of those returns — the Sharpe ratio — should be roughly equivalent across asset classes over long stretches of time. There might be short periods when an asset class’s Sharpe ratio looks unusually high, especially in individual countries, but things tend to revert to their long-term average sooner or later.

More generally, the expectation of asset pricing theory is that Sharpe ratios should be roughly equal across not just asset classes but arbitrary portfolios as well. Deviations from equality imply the existence of extraordinary risk premia which ought to be eliminated through investors’ search for higher risk-adjusted returns.

This, of course, goes back to the hegemonic idea of Western thought. Competition serves as the organizing principle of evolutionary biology, economic theory, and international relations; as the cornerstone of America’s national ideology; and as the guiding star of modern governance and reform efforts. But there are some rather striking anomalies in this otherwise compelling broad-brush picture of the world—persistent sources of economic rents and the existence of substantial risk premia, eg on balance sheet capacity.

But I believe something much more elementary is going on with property. The next figure shows the global wealth portfolio. We see that housing constitutes the bulk of global wealth.


Jorda et al. (2017)

What explains the superior risk-adjusted performance of housing is the fact that housing assets are not, in fact, owned by the rich or by market-based financial intermediaries like other asset classes, but are quite broadly held by the small-fry. More precisely, the marginal investor in housing is your average homeowner, who finds it extraordinarily hard to diversify away the risk posed by her single-family home to her balance sheet. Since it is so hard for her to diversify this risk away, she must be compensated for bearing it.

Put another way, the risk premium on property is high because property returns are low when the marginal value of wealth to the marginal investor is high (ie, when times are bad for the average homeowner) and high precisely when the marginal value of wealth to the marginal investor is low (ie, when times are good for the average homeowner). This is as it should be given the relatively progressive vertical distribution of housing wealth.
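In standard stochastic discount factor language, this is just the textbook pricing equation (a restatement of the argument above, not anything specific to Jorda et al.):

E\left[R^{h}-R^{f}\right]=-\frac{\text{Cov}\left(m,R^{h}\right)}{E\left[m\right]},

where R^{h} is the return on housing, R^{f} is the risk-free rate, and m is the marginal investor’s marginal value of wealth. For the undiversified homeowner, m is high precisely when R^{h} is low, so the covariance is strongly negative and the required premium correspondingly large.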
