Silicon Valley’s Visions of Absolute Power


Omnipotence is in front of us, almost within our reach…

— Yuval Noah Harari

The word “disrupt” only appears thrice in Yuval Noah Harari’s Homo Deus: A Brief History of Tomorrow. That fact cannot save the book from being thrown into the Silicon Valley Kool-Aid wastebasket.

Harari is an entertaining writer. There are plenty of anecdotes that stoke the imagination. There is the one about vampire bats loaning blood to each other. Then there’s the memorable quip from Woody Allen: asked if he hoped to live forever through the silver screen, Allen replied, “I don’t want to achieve immortality through my work. I want to achieve it by not dying.” The book is littered with such clever yarns interspersed with sweeping, evidence-free claims. Many begin with “to the best of our knowledge” or some version thereof. Like this zinger: “To the best of our knowledge, cats are able to imagine only things that actually exist in the world, like mice.” Umm, no, we don’t know that. Such fraudulent claims about scientific knowledge plague the book and undermine the author’s credibility. And they just don’t stop coming.

“To the best of our scientific understanding, the universe is a blind and purposeless process, full of sound and fury but signifying nothing.” How would one even pose this claim as a scientific question?

“To the best of our knowledge,” behaviorally modern humans’ decisive advantage over others was that they could exercise “flexible cooperation with countless number of strangers.” Unfortunately for the theory, modern humans eliminated their competitors well before any large-scale organization existed. During the Great Leap Forward—what’s technically called the Upper Paleolithic Revolution, when we spread across the globe and eliminated all competition—mankind lived in small bands. There was virtually no “cooperation with countless strangers.” The reason we prevailed everywhere and against every foe is that we had language, which allowed for unprecedented coordination within small bands. Harari seems completely unaware of the role of language in the ascent of modern humans. He claims that as people “spread into different lands and climates they lost touch with one another…” Umm, how exactly were modern humans in touch with each other across the vast expanse of Africa?

“To the best of our scientific understanding, determinism and randomness have divided the entire cake between them, leaving not a crumb for ‘freedom’…. Free will exists only in the imaginary stories we humans have invented.” Here, Harari takes one of the hardest open problems and pretends that science has an answer. The truth is much more sobering. Not only is there no scientific consensus on the matter of free will and consciousness, it would be disturbing if there were, since we have failed to develop the conceptual framework to attack the problem in the first place.

“According to the theory of evolution, all the choices animals make – whether of residence, food or mates – reflect their genetic code…. [I]f an animal freely chooses what to eat and with whom to mate, then natural selection is left with nothing to work with.” Nonsense. The theory of evolution, whether in the original or in its modern formulations, is entirely compatible with free will. Natural selection operates statistically and inter-generationally over populations, not on specific individuals. It leaves ample room for free will.


There are eleven chapters in the book. All the sweeping generalizations and hand-waving of the first ten chapters are merely a prelude to the final chapter. Here, Harari moves in for the hard sell.

Dataism considers living organisms to be mere “biochemical algorithms” and “promises to provide the scientific holy grail that has eluded us for centuries: a single overarching theory that unifies all scientific disciplines….”

“You may not agree with the idea that organisms are algorithms” but “you should know that this is current scientific dogma…”

“Science is converging on an all-encompassing dogma, which says that organisms are algorithms, and life is data processing.”

“…capitalism won the Cold War because distributed data processing works better than centralized data processing, at least in periods of accelerating technological changes.”

“When Columbus first hooked up the Eurasian net to the American net, only a few bits of data could cross the ocean each year…”

“Intelligence is decoupling from consciousness” and “non-conscious but highly intelligent algorithms may soon know us better than we know ourselves.”

No, the current scientific dogma isn’t that organisms are algorithms. Nor is science converging on an all-encompassing dogma that says that life is data processing. Lack of incentives for innovation in the Warsaw Pact played a greater role in the outcome of the Cold War than the information-gathering deficiencies of centralized planning. When Columbus first “hooked up the Eurasian net to the American net,” much more than a few bits of data crossed the ocean. For instance, the epidemiological unification of the two worlds annihilated much of the New World population in short order.


There are more fundamental issues with Dataism, or more accurately, Data Supremacism. First, data is simply not enough. Without theory, it is impossible to make inferences from data, big or small. Think of the turkey. All year long, the turkey believes that the humans will feed and take care of it. Indeed, every day the evidence keeps piling up that humans want to protect the turkey. Then comes Thanksgiving.
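The turkey’s predicament can be sketched in a few lines of Python. The estimator below is Laplace’s rule of succession, a standard way to turn an unbroken run of observations into a probability; the day numbers are illustrative. The point is that the estimate climbs toward certainty right up until the regime change that no amount of the same data could reveal.

```python
# Naive induction from data alone: every feeding raises the turkey's
# estimated probability of being fed tomorrow.
# Laplace's rule of succession: (successes + 1) / (trials + 2).

def estimated_prob_fed_tomorrow(days_fed: int) -> float:
    """Probability estimate after an unbroken run of `days_fed` feedings."""
    return (days_fed + 1) / (days_fed + 2)

for day in (1, 10, 100, 364):
    print(f"day {day:>3}: P(fed tomorrow) = {estimated_prob_fed_tomorrow(day):.3f}")

# Day 365 is Thanksgiving. The data gave no warning: the regime change
# is invisible to any inference built from the observations alone.
```

The estimate is highest on the turkey’s last day; only a theory of farms and holidays, not more feeding data, could have predicted the break.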

Second, the data itself is not independent of reference frames. This is manifest in modern physics; in particular, in both relativity and quantum physics. What we observe critically depends on our choice of reference frame. For instance, if Alice and Bob measure a spatially-separated (more precisely, spacelike separated) pair of entangled particles, their observations may or may not be correlated depending on the axes onto which they project the quantum state. This is not an issue of decoherence. It is in principle impossible to extract information stored in a qubit without knowledge of the right reference frame. To go a step further, Kent (1999) has shown that observers can mask their communication from an eavesdropper (called Eve, obviously) if she doesn’t share their reference frame. Even more damningly, reference frames are a form of unspeakable information—information that, unlike other classical information, cannot be encoded into bits to be stored on media and transmitted on data links.
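The dependence of Alice’s and Bob’s correlations on their choice of axes can be made concrete with the textbook result for a spin singlet: the expected product of their ±1 outcomes along axes at angles θ_a and θ_b is E(a, b) = −cos(θ_a − θ_b). The function name below is mine; the formula is standard quantum mechanics.

```python
import math

def singlet_correlation(theta_a: float, theta_b: float) -> float:
    """Expected product of Alice's and Bob's +/-1 outcomes when they
    measure a spin-singlet pair along axes at angles theta_a and theta_b.
    Quantum mechanics gives E(a, b) = -cos(theta_a - theta_b)."""
    return -math.cos(theta_a - theta_b)

# Shared reference frame, same axis: perfectly anti-correlated outcomes.
print(singlet_correlation(0.0, 0.0))          # -1.0
# Axes 90 degrees apart: no correlation at all.
print(singlet_correlation(0.0, math.pi / 2))  # approximately 0.0
```

The same pair of particles yields anything from perfect anti-correlation to pure noise, depending solely on how the observers orient their measurement axes, which is exactly the sense in which the data is not independent of the reference frame.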

Third and most importantly, we do not have the luxury of assuming that an open problem will be solved at all, much less that it will be solved by a particular approach within a specific time-frame. This is a major source of radical uncertainty that is never going to go away. Think about cancer research. Big data and powerful new data science tools make the researchers’ jobs easier. But they cannot guarantee their success.

The main contribution of my doctoral thesis was solving the problem of reference frame alignment for observers trying to communicate in the vicinity of a black hole. The problem has no general solution. I exploited the locally-measurable symmetries of the spacetime to solve the problem. Observers located in the vicinity of a black hole can use my solution to communicate. If they don’t know my solution or don’t want to use it, they need to discover another solution that works. They cannot communicate otherwise. This is just one of countless examples where data plays at best a secondary role in solving concrete problems.

Empirical data is clearly very important for solving scientific, technical, economic, social, and psychological problems. But data is never enough. Much more is needed. Specifically, solving an open problem often requires a reformulation of the problem. That is, it often requires an entirely new theory. We don’t know yet if AI will ever be able to make the leap from calculator to theoretician. We cannot simply assume that it will. It may run into insurmountable problems for which no solution may ever be found. However, if and when it does make that leap, there is no reason why humans should not be able to comprehend an AI’s theories. More powerful theories turn out to be simpler, after all. And if and when that happens, the Policy Tensor, for one, would welcome our AI overlords.


Harari makes a big fuss about algorithms knowing you better than yourself. “Liberalism will collapse the day the system knows me better than I know myself.” Well, my weighing machine “knows” my weight better than I do. What difference does it make if an AI could tell me I really and truly have a 92 percent chance of having a successful marriage with Mary and only 42 percent with Jane? Assuming that the AI knows me better than I do, why would I treat it any differently from my BMI calculator that insists that I am testing the upper bound of normality? After all, I already agree that the BMI calculator is more accurate about my fitness than my subjective judgment, just as the AI would be about my love life.

Artificial Intelligence without consciousness is just a really fancy weighing machine. And data science is just a fancy version of simple linear regression. Why would Liberalism collapse if Silicon Valley delivers on its promises on AI? Won’t we double-down on the right to choose precisely because we can calibrate our choices much better?

If AI gains consciousness, on the other hand, all bets are off. Whether as an existential threat or as a beneficial disruption, the arrival of the first Super AI will be an inflection point in human history. The arrival of advanced aliens would pose similar risks to human civilization.

If you are interested in the potential of AI, you’re better off reading Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies. If you are curious about scientific progress and our technological future in deep time as well as the primacy of theory, you should read David Deutsch’s The Beginning of Infinity: Explanations That Transform the World. If you are more interested in the unification of the sciences, look no further than Peter Watson’s Convergence: The Idea at the Heart of Science. (Although I do recommend Watson’s The Modern Mind, The German Genius, and The Great Divide more and in that order.) Finally, for the limits of scientific and technical advance, see John D. Barrow’s Impossibility: The Limits of Science and the Science of Limits.


Silicon Valley’s Kool-Aid encompasses long-term visions of both techno-utopias and techno-dystopias. The unifying fantasy is that, in the long run, technological advance will endow man and/or AI with absolute power. In the utopias, men become gods and mankind conquers the galaxy; and in much more ambitious versions, the entire universe itself. (It would be orders of magnitude harder to reach other galaxies than other stars.) In the more common dystopias, man won’t be able to compete with AI, or the elite will but the commoners won’t (this is Harari’s version). In either case, the Valley’s Kool-Aid is that technology will revolutionize human life and endow some—depending on the narrative: Silicon Valley, tech firms, AIs, the rich, all humans, or AI and humans—with god-like powers. Needless to say, this technology will come out of Silicon Valley.

In reality, a small oligopoly of what Farhad Manjoo calls the Frightful Five (Facebook, Google, Apple, Microsoft, and Amazon) has cornered unprecedented market power; and stashed its oligopolistic supernormal profits overseas, just to rub it in your face. Apple alone has an untaxed $216 billion parked offshore. Far from obeying the motto “data wants to be free,” these oligopolistic firms hoard your data and sell it to the highest bidder. The dream of tech start-ups is no longer a unicorn IPO. Rather, it is a buyout by one of the oligopolists. If you are a truly successful firm in the Valley, you have either benefited from network externalities (like the Frightful Five, which are all platforms with natural economies of scale), or you have managed to shed costs onto the shoulders of people who would hitherto have been your employees or customers (like Airbnb, Uber, and so on). Silicon Valley is, in fact, more neoliberal than Wall Street. While the Street has managed to shed risks and costs onto the state, the Valley has managed to shed risks and costs onto employees and customers. That’s basically the Valley’s business model.

Alongside its hoard of financial resources, the Valley has also cornered an impressive amount of goodwill in the popular consciousness. Who does not admire Google and Apple? This goodwill is the result of the industry’s actual accomplishments; some of them genuine, some thrust upon them by fate. In the popular imaginary, the Valley is the source of innovation and dynamism; to be celebrated, not decried. Yet the concentration of power in the industry has started to worry the best-informed. If mass technological unemployment does come to pass, the Valley should not be surprised to find itself a pariah and a target of virulent populism, in the manner of Wall Street in 2009.
