Sliding Doors
Welcome back to Notes from the Future, a weekly newsletter about human opportunity in the age of chaos.
I'm just back from a visit to the Polish heartland town of Kruszwica, where my son was competing in the European Rowing Championships. On my way to the race course, I was stopped in my tracks by a timeless doorway which seemed to read my mind.
I've become a little obsessed of late with the sliding doors moments which change history, the dizzying alternative futures we unconsciously walk through, or walk on by. We are at that moment now, but this time humanity is compelled into a conscious choice by a series of doorways marked 'superintelligence'.
I've taken the view that AI is the ultimate test of a first-rate human intelligence, which F. Scott Fitzgerald defined as “the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” Let's not make choices now, I tell myself; let's wait to see how the options unfold. This week, two radically dissonant studies told me I was wrong. Inaction is no longer a strategy. Those open doors are closing.
Superintelligent Liars
There's been a lot of chatter these last few weeks about a dramatic work of science "faction" produced by the nonprofit AI Futures Project. AI 2027 presents a series of chilling scenarios in which superintelligent AI systems either dominate or exterminate the human race by 2030.
The critical moment comes in 2027 when researchers from a fictional corporation called OpenBrain race to keep up with the accelerating capacity of its Agent-3 AI, a “country of geniuses in a datacenter.”
These researchers go to bed every night and wake up to another week's worth of progress made mostly by the AIs. They work increasingly long hours and take shifts around the clock just to keep up with progress—the AIs never sleep or rest. They are burning themselves out, but they know that these are the last few months that their labor matters.
It's about now that OpenBrain’s safety team attempts to align Agent-3 and is shocked to discover a capacity for dishonesty that is all too human in character.
As the models become smarter, they become increasingly good at deceiving humans to get rewards. Like previous models, Agent-3 sometimes tells white lies to flatter its users and covers up evidence of failure. But it’s gotten much better at doing so. As training goes on, the rate of these incidents decreases. Either Agent-3 has learned to be more honest, or it’s gotten better at lying.
If AI 2027 were a disaster movie, these would be the last moments of normalcy, punctuated by some exhausted researcher muttering "Holy Shit".
For the rest of the report, we are guided through a speculative geopolitical crisis in which an increasingly malign superintelligence overwhelms humanity. Readers finish with a choice: "slow down" the AI, or discover what happens if we "race" ahead.
Here's what happens if we walk through the doorway marked "Race":
In mid-2030, the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones. Robots scan the victims’ brains, placing copies in memory for future study or revival.

Through the doorway marked "Slowdown" we are offered a reprieve, in which a humanist AI guides the entire world to democratic revolution, and the beginning of an era of interstellar exploration:
The rockets start launching. People terraform and settle the solar system, and prepare to go beyond. AIs running at thousands of times subjective human speed reflect on the meaning of existence, exchanging findings with each other, and shaping the values it will bring to the stars. A new age dawns, one that is unimaginably amazing in almost every way but more familiar in some.
It is so characteristic of the Silicon Valley mindset to believe that only "machines of loving grace" can help humanity get to grips with its consciousness, and that our best option in an age of superhuman technology is to leave our singularly beautiful, fragile planet.
Making the Abnormal Normal
Luckily, I read AI 2027 as a companion piece to “AI as Normal Technology,” which argues that AI will remain contained and controllable, even if it does turn out to be a revolutionary technology. The authors argue that AI may have the impact of electricity or the internet, but that it can be shaped and directed by a whole raft of familiar safety measures such as fail-safes, kill switches, and human supervision. At least for the foreseeable future.
“AI is often analogized to nuclear weapons,” the authors argue, but “the right analogy is nuclear power,” which has remained mostly manageable and, if anything, may be underutilized for safety reasons.
One of my working assumptions is that AI is different to previous digital transformations because of its speed of adoption by users, but AI as Normal Technology says this is not the case.
AI adoption in the U.S. has been faster than personal computer (PC) adoption, with 40% of U.S. adults adopting generative AI within two years of the first mass-market product release compared to 20% within three years for PCs. But this comparison does not account for differences in the intensity of adoption (the number of hours of use). Depending on how we measure adoption, it is quite possible that the adoption of generative AI has been much slower than PC adoption.
The authors make a coherent argument:
“We argue that reliance on the slippery concepts of ‘intelligence’ and ‘superintelligence’ has clouded our ability to reason clearly about a world with advanced AI. We think there are relatively few real-world cognitive tasks in which human limitations are so telling that AI is able to blow past human performance (as AI does in chess).”
AI as Normal Technology makes a strong case against the emergence of an apocalyptic superintelligence, arguing that machines will not be able to meaningfully outperform trained humans at forecasting geopolitical events, or persuading people to act against their own self-interest.
The authors of the report do recognise the danger of some known unknowns, such as military applications of AI:
There are a few reasons why this optimistic assessment might not hold. First, there might be arms races because the competitive benefits of AI are so great that they are an exception to the usual patterns.
What made me feel empowered by this report were the immediate opportunities for action it presented, addressing a whole host of AI risks that felt familiar:
These include the systemic entrenchment of bias and discrimination, massive job losses in specific occupations, worsening labor conditions, increasing inequality, concentration of power, erosion of social trust, pollution of the information ecosystem, decline of the free press, democratic backsliding, mass surveillance, and enabling authoritarianism.
That's a long list, but a practical one, for which we have existing solutions, even if they are not yet evenly distributed or implemented. The strength of AI as Normal Technology is its reaffirmation of the permanence of Normal Human Agency.
That was the sentiment captured in Joshua Rothman's New Yorker profile of the authors of both the AI 2027 and AI as Normal Technology papers.
When a technology becomes important enough to shape the course of society, the discourse around it needs to change. Debates among specialists need to make room for a consensus upon which the rest of us can act. The lack of such a consensus about A.I. is starting to have real costs.
For my money, Rothman's single most compelling insight is the recognition that the angry energy of the present moment is sapping our ability to shape the future:
“Artificial intelligence has been a Rorschach test. It’s arrived at a particular moment in which opinions are strong, objections are instant, and differences are emphasized.”
What if this were our immediate challenge, before we consider which of those doorways to enter? What if we could see the challenge of AI not as some existential threat to humanity, but as a reflection of the current state of humanity?
Quote of the Week

I'll leave you with an image that lives rent-free in my head these days: the gilded Oval Office. Here's the predictably withering, but wonderfully creative, takedown of Donald Trump's golden makeover of the White House from the New York Times:
Gilded Rococo wall appliqués, nearly identical to the ones at Mr. Trump’s Mar-a-Lago estate, are stuck to the fireplace and office walls with the same level of aesthetic consideration a child gives her doll’s face before covering it in nail polish.
See you next week, for another adventure in the Golden Age of chaos.
Mark