Shoshana Zuboff, ‘one of our most prescient and profound thinkers on the rise of the digital,’ Andrew Keen informs us on the blurb, has rendered ‘a book of immense ambition and erudition’. Naomi Klein thinks ‘everyone should read this book as an act of digital self-defense’. Joseph Turow concludes that from now on, ‘all serious writing on the internet and society will have to take into account The Age of Surveillance Capitalism.’ Robert B. Reich’s blurb calls it ‘a masterpiece of rare conceptual daring’, wherein she ‘demonstrates the unprecedented challenges to human autonomy, social solidarity, and democracy perpetrated by this rogue capitalism.’
It is hard not to concur with them all. Zuboff has identified a distinct regime of accumulation that emerged from the wreckage of the Internet stock bubble, when first Google and then Facebook articulated the lucrative commodification of mass surveillance data. In doing so they enclosed a vast new frontier without permission. Zuboff goes so far as to declare that surveillance data (“behavioral surplus” is her neologism) is a fourth fictional commodity after Polanyi’s Land, Labor and Money (p. 100):
Today’s owners of surveillance capital have declared a fourth fictional commodity expropriated from the experiential realities of human beings whose bodies, thoughts, and feelings are as virgin and blameless as nature’s once-plentiful meadows and forests before they fell to the market dynamic. In this new logic, human experience is subjugated to surveillance capitalism’s market mechanisms and reborn as “behavior.” These behaviors are rendered into data, ready to take their place in a numberless queue that feeds the machines for fabrication into predictions and eventual exchange in the new behavioral futures markets.… In this future we are exiles from our own behavior, denied access to or control over knowledge derived from its dispossession by others for others. Knowledge, authority, and power rest with surveillance capital, for which we are merely “human natural resources.”
Her narrative is at its most powerful when later in the book she ties Facebook’s addiction machine to Natasha Dow Schüll’s masterpiece on machine gambling in Las Vegas. After documenting the scale of addiction to the social network and the corporation’s efforts towards that instrumental goal, she notes chillingly (p. 466),
All those outlays of genius and money are devoted to this one goal of keeping users, especially young users, plastered to the social mirror like bugs on the windshield.
But Zuboff’s account suffers from two flaws. The first, non-fatal flaw is that she drops a number of important threads that ought to have been pursued more vigorously. We get bare glimpses of the symbiosis between surveillance capitalism and the great power intelligence agencies on the inside of the one-way mirror; likewise the Obama–Google revolving door. The second, fatal flaw is that she drops the ball on the very mechanism of accumulation she identifies with such clarity. Zuboff brackets off any interrogation of the technical discourse of the high priests of machine learning—presumably as beyond her and the reader’s comprehension. That is very unfortunate. On the prospects of solving the problem of causal inference from mass surveillance data, we therefore do not hear a word. She takes the 52 data scientists she interviewed at their word when they echo Silicon Valley’s visions of absolute power.
The fundamental problem with Zuboff’s account, then, is that she actually buys Skinner’s presumption that sufficiently high-resolution surveillance data can and will allow those on the right side of the panopticon to predict the behavior of those on the wrong side with near-perfect fidelity. In reality, whether the problem, and it is a scientific problem, is even solvable in principle is not known. Indeed, there are good reasons to believe that it is not.
In fact, the same kind of problem plagues machine learning, molecular anthropology, and brain science (see Marsh’s sober and excellent take). In all three fields, predicting human behavior from fine-scale Big Data (respectively, from mass surveillance, DNA, and brain activity) has defeated all attempts. What you have in all cases is a massive torrent of data with very little by way of maps.
Imagine that you could record the life histories of all fish in the world ocean right down to the last detail, and suppose that you have access to free and unbounded computational resources (an impossibility). Even so, there is no reason to believe that there exist discoverable algorithms (whether found by machines or by human geniuses) that would allow you to predict with certainty which fish in a given school the shark will catch. All that may be (and is known to be) achievable are probabilistic statements (say, relating a fish’s probability of being caught to its position in the school). An arbitrary amount of data cannot solve the problem of coming up with a theory of fish behavior, even if such a theory were to exist—not that we have any good reason to believe that it does.
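The fish thought experiment can be made concrete with a toy simulation (a sketch under invented dynamics, not a model of real predation): an observer who knows the true capture probabilities exactly, the analogue of unlimited surveillance data plus unbounded computation, can still only guess which fish will be caught, and is wrong most of the time.

```python
import random

random.seed(0)

# Toy setup: 20 fish in a school; a fish's chance of being caught rises
# with its (hypothetical) distance from the school's center, since edge
# fish are more exposed. The observer knows this "true" model exactly.
def capture_weights(positions):
    return [abs(p) + 0.1 for p in positions]

def run_trials(n_trials=10_000):
    correct = 0
    for _ in range(n_trials):
        positions = [random.gauss(0, 1) for _ in range(20)]
        weights = capture_weights(positions)
        # Bayes-optimal guess: the fish with the highest capture weight.
        guess = max(range(20), key=lambda i: weights[i])
        # Nature draws the actual victim from the very same distribution.
        caught = random.choices(range(20), weights=weights)[0]
        correct += guess == caught
    return correct / n_trials

print(run_trials())  # well below 1.0, despite perfect knowledge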
As David Deutsch has argued, all scientific problems require specific solutions whose existence cannot simply be assumed. Even assuming that the problems are solvable in principle, we cannot know how many years, decades, centuries, or millennia it will take to solve them. Radical uncertainty is the calling card of original scientific research. What all this means is that it is entirely possible, nay likely, that almost all hopes for predicting human behavior with high fidelity projected onto surveillance data, DNA, brain scans, and so on, will be dashed. Simply put, Google cannot predict the next sentence I will write — even if it trains its algorithms on all texts ever authored by the human race, including my own — because it does not have a theory of my mind, something that cannot be reverse-engineered from surveillance data.
The underlying reason for sober pessimism on all such questions is well understood. Put simply, the reductionist paradigm — and all such hopes are based on it — is known to be of limited value in light of the overwhelming evidence for emergent complexity. Higher order open systems, where all the interesting causal vectors are located, defeat the search for a general theory in detail. That is, each higher level of analysis has its own properties and internal logic that simply cannot be identified at lower levels of analysis. And high fidelity prediction requires identifying the correct causal structure at all relevant levels of analysis simultaneously. But we have no reason to believe that we can, or ever will be able to, identify all relevant levels of analysis; much less correctly identify their causal structure.
So, no, Google is not on the cusp of world domination. Google insiders’ ‘physics of clicks’ is best seen as intellectual bluster, not the harbinger of an automated society as Zuboff would have us believe.
Zuboff poses three fundamental questions throughout the book — Who knows? Who decides? Who decides who decides?
Who knows? Those on the right side of the one-way mirror have a clear asymmetric information advantage over those on the wrong side. But how much do they actually “know”? What they have is an ocean’s worth of surveillance data that they can probe with pattern-recognition algorithms. This gives them some handle on broad patterns (i.e., they can build models and test them against the data), and they can examine the life world of a “person of interest” with a fine-tooth comb. But they can neither predict the future evolution of the broad patterns with much more certainty than was already possible in the world of small data, nor predict the future behavior of the person of interest with greater confidence than a serious intelligence analyst without access to full-spectrum surveillance. So yes, we must recognize these new technologies of power and their potential for abuse. But let us not pretend that they are all-powerful. They are not: data science is just fancy regression, and ubiquitous surveillance is just fancy wiretapping.
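The claim that fancy regression cannot outrun noise admits a minimal sketch (the feature, link function, and numbers are all hypothetical): even the true model of a noisy behavior, which is the best any volume of data could ever recover, tops out well short of certainty.

```python
import math
import random

random.seed(1)

# Hypothetical setup: a binary behavior y is driven by one observed
# feature x plus irreducible chance: P(y = 1 | x) = sigmoid(x).
# The "true" model below is the ceiling that any regression, however
# fancy and however much data it is fed, could ever reach.
def p(x):
    return 1 / (1 + math.exp(-x))

def best_possible_accuracy(n=50_000):
    correct = 0
    for _ in range(n):
        x = random.gauss(0, 1)
        y = random.random() < p(x)   # nature's coin flip
        guess = p(x) >= 0.5          # Bayes-optimal prediction
        correct += guess == y
    return correct / n

print(best_possible_accuracy())  # well short of 1.0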
Who decides? Surveillance capitalists have laid claim to ownership of our vast digitally visible behavior. Just because they discovered the continent does not mean that they get to keep it. I second Zuboff’s call for a sustained political effort to reverse the enclosure. What has been illegitimately dispossessed can indeed be repossessed. What we need is a big push towards public oversight; one that goes beyond questions of monopoly and privacy and aims to reverse the commodification of our behavior and to ban automatic compliance (Zuboff’s “means of behavior modification”).
Who decides who decides? We can again agree with Zuboff that the discourse of “inevitability” is a transparent attempt at depoliticization. As she shows, other paths were indeed imagined; as when Google unveiled plans for Google Home under which the householder was to have complete control over access to the data generated by connected devices. A different road was taken, one where your devices (“the Internet of Things”) are designed to generate surveillance data to be sold to those with a pecuniary interest in your future behavior. As her account shows, these decisions were taken unilaterally by technology firms under financial-market pressure. They did not ask legal whether it complied with law and regulation. They just went ahead and did it. They got away with it, she argues, by moving at breakneck speed and under the radar; helped by the erosion of oversight institutions (especially the courts) under the neoliberal onslaught; and helped along too by the convergence of interests with the intelligence community after 9/11. It is now clear that this is politically and ethically unacceptable. Again, the onus is on us to mobilize politically to take back control of our digital footprint.
Zuboff mentions Andrew Tutt’s suggestion for an “FDA for algorithms” in passing. That is perhaps the single most constructive solution on offer and it is surprising to find her never returning to it. What is required is the incubation of a situated community of skilled software engineers and data scientists who can exercise effective oversight over their commercial counterparts in the public interest. An “FDA for algorithms” is precisely the right model for such an undertaking.
The Age of Surveillance Capitalism is compulsory reading. Zuboff has contributed greatly to our understanding of this rogue capitalism, above all in identifying this specific regime of accumulation. So it is almost physically painful to discover that she has herself drunk Silicon Valley’s Kool-Aid.