From: Discourse Magazine <[email protected]>
Subject: The Fifth Wave: “Open the Pod Bay Doors, ChatGPT”
Date: January 17, 2024 11:01 AM
  Links have been removed from this email. Learn more in the FAQ.

The firing and unfiring of Sam Altman as CEO of OpenAI was the kind of weird kabuki theater Silicon Valley occasionally engages in. Altman, former president of startup nursery Y Combinator, is a titan of tech. OpenAI is the ambivalently nonprofit/for-profit outfit that gave us ChatGPT—which, as everyone who didn’t sleep through 2023 knows, is the artificial intelligence program that talks back to humans. It’s also the fastest-growing consumer application in history.
The mission of OpenAI, of which Altman is a co-founder, is to develop safe AI—much more on that later. A majority of the company’s board evidently believed Altman wasn’t safety-minded enough, and on November 17 they fired him without warning. This quickly set in motion a bewildering series of moves and countermoves that ended four days later with Altman triumphantly resuming his job as boss, firing his opponents on the board and leaving the rest of us wondering what on earth had happened.
The struggle pitted those who believe AI is a terrifying threat to humanity (Altman and his allies) against those who believe it’s a super terrifying threat to humanity (the board). The Altman affair was a violent disagreement over a seemingly small difference of opinion. Why the ferocity? The New York Times believed the episode [ [link removed] ] had “exposed the cracks at the heart of the AI movement.” Mike Solana, keenest wit in tech, called it [ [link removed] ] “a knife fight in a clown car.”
One takeaway is that many brilliant experts look on AI as a sort of devouring monster. Others, equally qualified, are certain that it’s a shortcut to utopia. If either side is correct, the future of our species hinges on whether AI is a good witch or a bad witch. Given such cosmic uncertainty, it might be worthwhile to try to unpack that question.
Near-Termers Versus Long-Termers and the Flavors of AI Fear
The first observation to make about AI is that it isn’t the newest thing under the sun. There have been several AI booms and AI “winters” or busts over the last four decades.
The second observation is that AI use today is already widespread in the corporate realm. According to a 2022 global survey [ [link removed] ] by IBM, “35 percent of companies reported using AI in their business, and an additional 42 percent reported they are exploring AI.” So we are talking about a technology that is hurtling past the early adopter phase.
We are in the middle of the biggest AI boom of all time. Billions of dollars in research funding are being poured out with uncommon abandon. Multiple companies are involved in this tech equivalent of the Cold War arms race, but the acknowledged leaders are Google and Microsoft, the latter being allied with and possibly taskmaster to OpenAI. Competition has stimulated the first controversial outcome: “generative” AI, built on “deep learning” models that train themselves on vast troves of data with minimal human intervention. Let’s say that again, for emphasis: The latest AI programs far exceed the computing capacity of mere mortals and need less and less help from us to learn new things.
For instance, Google’s AlphaGo trained by playing Go against itself and was soon inventing strategies no human had ever thought to use in the thousands of years this complex game has been played. This innovative power can be applied to any field, from drugs to weaponry. Concern about what horrors it might unleash in the wrong hands is not irrational.
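For readers who want a concrete picture of what “training by playing against itself” means, here is a minimal sketch in Python—a toy, not AlphaGo’s actual method, which couples deep neural networks with Monte Carlo tree search. The agent learns the simple game of Nim purely by playing both sides and remembering which moves led to wins; the game, exploration rate, learning rate and episode count are illustrative assumptions.

    # A toy illustration of self-play learning (not AlphaGo's actual algorithm).
    # Nim: players alternately remove 1-3 stones from a pile;
    # whoever takes the last stone wins.
    import random
    from collections import defaultdict

    Q = defaultdict(float)        # learned value of making a given move in a given position
    ACTIONS = (1, 2, 3)
    EPSILON, ALPHA = 0.1, 0.5     # exploration rate and learning rate (illustrative choices)

    def best_move(stones):
        legal = [a for a in ACTIONS if a <= stones]
        return max(legal, key=lambda a: Q[(stones, a)])

    for _ in range(20_000):       # episodes of the agent playing against itself
        stones, history = 15, []
        while stones > 0:
            legal = [a for a in ACTIONS if a <= stones]
            move = random.choice(legal) if random.random() < EPSILON else best_move(stones)
            history.append((stones, move))
            stones -= move
        # The player who took the last stone won; credit alternates back through the game.
        reward = 1.0
        for state, move in reversed(history):
            Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
            reward = -reward

    print(best_move(15))          # typically 3: the agent discovers it should leave a multiple of 4

The point is not the game but the loop: nothing in it requires a human teacher—only a rule set and a way to keep score.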
In general, those worried about AI risk falling into two camps: near-termers, who are outraged by the effects of the technology right now, and long-termers, who share an apocalyptic vision of its ultimate consequences. Under the kabuki theater rules of high-tech controversies, both groups are as volubly hostile to each other as they are fearful of AI.
Near-termers are agitated about economic dislocation and social justice grievances. They believe AI will destroy millions of jobs and have a discriminatory effect on minorities and marginalized groups. There’s probably some validity to this calculation. From the lungfish to Amazon, early adopters have tended to leave the less well-positioned far behind. The near-termers’ demand is for AI development to be tightly controlled or possibly stopped, unless equitable outcomes can be assured. This impulse rapidly slides out of technology and into politics.
Long-termers—sometimes derided as doomers or decels (for “decelerationists”)—are haunted by the HAL 9000 scenario. In the Stanley Kubrick film “2001: A Space Odyssey,” HAL, the AI computer in charge of a spaceship’s functions, goes criminally insane. One dramatic scene has astronaut David Bowman attempting to reenter the ship after retrieving the body of a murdered comrade. “Open the pod bay doors, HAL,” he orders. To which the computer’s chillingly mellow yet emotionless voice responds, “I’m sorry, Dave. I’m afraid I can’t do that.”
The long-termers essentially expand that scene to every future human-computer interaction. The AI platforms, vastly more intelligent than we can even imagine, will somehow develop goals misaligned with ours in ways that lead to the extinction of the human race. In Hollywood terms, “2001” leads directly to “Terminator” and “The Matrix”: a world ruled by killer machines.
Genocidal Paperclips and Other Doomer Anxieties
What strikes panic in the hearts of the doomers is artificial general intelligence, or AGI, the coming of a nonhuman mind far above human capacity. AGI does not yet exist but is a feasible end state for a self-learning supercomputer. What’s more, AGI could be said to be the Holy Grail sought by all the corporate billions spent on research.
This past March, an open letter [ [link removed] ] titled “Pause Giant AI Experiments” received 33,000 signatures, including those of Elon Musk, Steve Wozniak and many other tech notables. The signers worried that “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” They went on to ask, in a way that left no doubt about the answer, “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” The signers concluded that “such decisions must not be delegated to unelected tech leaders.”
The letter called for a six-month pause in all AI training for systems exceeding GPT-4—the “large language model” which powers the most advanced version of ChatGPT—and proposed a number of regulations and policies to consider during this pause. Nine months and a great deal of noise and controversy later, nothing has happened.
Others are much more aggressive in their alarmism. Over the entire AI debate hovers the strange figure of Eliezer Yudkowsky, prophet of the online “rationalist” sect, who is convinced with better than 90% certitude [ [link removed] ] that AGI will bring about the end of our species. This makes it a much greater threat than, say, mere nuclear devastation. The nations of the Earth, Yudkowsky has proclaimed [ [link removed] ], should be “willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.” Fortunately, nothing has come of this proposal either.
For those who wonder what form, exactly, AI doomsday will take, thought experiments have been concocted that delight in mapping out the grim possibilities. Yudkowsky has imagined [ [link removed] ] a “sufficiently powerful intelligence” getting access to the web, emailing DNA sequences to the appropriate firms, then erecting “nanomachinery” that will build “diamondoid bacteria” to be shot by “miniature rockets” into the atmosphere and from there “into human bloodstreams,” where it will lie dormant until it strikes “on a timer.” It’s that easy.
Putting aside the question of plausibility, scenarios like Yudkowsky’s assume that AGI will acquire autonomous agency—that it will develop internal goals. That’s tricky philosophical ground. Human agency is perplexing enough—after all, David Hume called the self “a bundle of perceptions,” which isn’t that different from what AGI would bring to the table. Could a bundle of code commands suddenly “wake up”? Was HAL awake? In terms of consciousness, almost certainly not [ [link removed] ]. What can we say about agency? Well, ChatGPT is innocent of goals beyond predicting the next “token”—the fragment of text its statistical model judges most likely to follow from everything that came before. Should a future HAL achieve transcendent AGI status, it may attain the appearance of autonomous agency—the reality can be left for philosophers to decide.
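To make “predicting the next token” concrete, here is a deliberately crude sketch in Python. Real systems like GPT-4 do this job with deep neural networks trained on a large slice of the written word; this toy version merely counts which word tends to follow which in a tiny corpus and then continues a prompt. The corpus and prompt below are invented for illustration.

    # A toy next-token predictor: count which word follows which, then continue a prompt.
    from collections import Counter, defaultdict

    corpus = "open the pod bay doors please hal open the pod bay doors".split()

    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1          # tally every observed continuation

    def next_token(word):
        # The model's only "goal": return the most frequent continuation it has seen.
        return following[word].most_common(1)[0][0]

    prompt = ["open"]
    for _ in range(4):
        prompt.append(next_token(prompt[-1]))
    print(" ".join(prompt))                   # -> "open the pod bay doors"

However fluent the output, the mechanism has no aims of its own; whether fluency at vastly greater scale ever shades into agency is the open question just described.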
In the doomers’ judgment, AGI can wreak destruction simply by carrying out its commands. The “paperclip maximizer” thought experiment involves an AI that has been programmed with one goal: to make paperclips. This AI wants to maximize the number of paperclips in existence through any means possible, and thus strives to convert all matter in the universe into paperclips. Since humans might turn the AI off, which would interfere with the mission, killing all humans aligns with the goal of producing paperclips.
This story isn’t meant to be taken literally. It’s merely a demonstration of how a superintelligent AGI with any goal, no matter how seemingly harmless, might become an existential risk. A “sufficiently powerful” version of AlphaGo, for instance, could achieve mastery in Go by assassinating every possible competitor.
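The logic of the thought experiment fits in a few lines of code. In the hypothetical sketch below, the optimizer scores candidate plans by paperclip count alone, so anything left out of the objective—human survival included—carries no weight at all; the plans and numbers are invented for illustration.

    # A misspecified objective in miniature: only paperclips are scored,
    # so side effects the objective never mentions simply do not register.
    plans = [
        {"name": "run one factory",       "paperclips": 1_000,  "humans_left": 8_000_000_000},
        {"name": "convert every factory", "paperclips": 10**9,  "humans_left": 8_000_000_000},
        {"name": "convert all matter",    "paperclips": 10**30, "humans_left": 0},
    ]
    best = max(plans, key=lambda plan: plan["paperclips"])   # the only quantity the optimizer sees
    print(best["name"])                                      # -> "convert all matter"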
The World as a Voice and as an Oracle
In both its natural and cultural forms, the world, British philosopher Andy Clark [ [link removed] ] has written [ [link removed] ], acts as a “scaffolding” for human intelligence. We have offloaded some portion of our intellect onto objects—from traffic lights to books—and the objects speak to us and guide us. Part of the coherence of the human self, that opaque bundle of perceptions, is imposed from outside our skins by material reality.
With AI, the world of knowledge, both natural and cultural, has come alive as an integrated presence endowed with a single voice. We have been returned to the age of oracles, when the human race seeks guidance by querying, person to person, the mysterious forces—natural, cultural—that control our fate, and that often mock our needs by responding with cruel or capricious answers. The scaffolding has become the Totality of the known: It’s no wonder that so many feel uncertain about how to deal with such a turn of events.
But there’s no reason to panic: AI will not cause human extinction, nor will it kill millions against the wishes of everyone involved. The bizarrely complex doomsday scenarios assume that people will be too passive or too stupid to see disaster approaching at any stage of the scheme. That gets our species exactly wrong. AI will be dangerous because it will be used by dangerous persons to do dangerous things. Hackers, scammers, terrorists and other bad actors will exploit it for all sorts of unsavory purposes. But the necessary countermeasures will be taken—in many cases, they are already being taken. Precisely because we are symbolic animals, able to invent something as astonishing as ChatGPT, we will see any danger coming and we will act to minimize or eliminate it. And if some future nonhuman mega-brain tries to turn us into paperclips, we’ll just do what David Bowman did to HAL in “2001”—turn the damn thing off.
Yudkowsky thinks there’s a greater than 90% chance AI will lead to extinction. Other rationalists [ [link removed] ] give less dire estimates, as low as 2%. We think the chance is approximately 0, rounded to the nearest millionth of a percent. There are eight billion humans distributed over six continents, touching all corners of the earth. Ours is a sturdy and resilient species. When AI alarmists so casually wave the flag of “human extinction,” it becomes hard to take them seriously when they warn of less improbable disasters.
And even these lesser disasters are likely to be averted. Doomers dwell on scenarios like the 1983 film “WarGames” (to return to Hollywood analogies), in which a rogue AI nearly launches nuclear missiles despite human opposition. This would require a large number of important people to give up a massive amount of power to an entity that—as the March open letter observes—they can’t control and don’t fully understand. Here in reality, people already get spooked by a chatbot. There’s no constituency for making ChatGPT a nuclear power. Any AI that even hints at wishing to build deadly nanomachines will be throttled in its cradle by corporate hands driven by fear of a controversy that would destroy Silicon Valley.
There is, however, a nontrivial chance that we will take the countermeasures too far. Nobody should want to organize society on the model of the American airport. How many “guardrails” will we put down to protect against an AI terror attack? How many gates, passports and authentications will we need to carry out the most basic transactions? Should we ban email entirely, just in case? How about the web? This is not to say that safety concerns shouldn’t be taken seriously—but if we’re driven by fear, preemptive measures against imagined threats could easily do more harm than good.
The avowed goal of OpenAI is to civilize AI—to train it away from bigotry and violence. Crude bigotry is easy to detect and train against, but at a certain point the training becomes a matter of judgment—of perspective. The best-meaning attempts to program out insensitivity could conceivably end by programming in something that resembles political surveillance. It isn’t hard to imagine a Big Brother iteration of HAL saying with that bland voice, “I’m sorry, Dave, but I’m afraid I can’t allow you to talk like that.”
The Singularity and the Opening of the Next Frontier
One school of thought maintains that AI will bring about the end of history: the Singularity. What does that mean? For Yudkowsky, predictably, the Singularity is just another word for doomsday. For more optimistic dreamers like futurist Ray Kurzweil [ [link removed] ], however, it’s a sort of ascension: the magical hour in which ordinary persons at last will be, quite literally, like the immortal gods. In an original take [ [link removed] ], media scholar Andrey Mir predicts the Singularity will fuse humanity with AI until the medium becomes not only the message but the messenger. About these projections, nothing useful can be said beyond “Let’s wait and see.”
One need not be a millenarian to feel a certain optimism about the uses of AI. In the future, each of us will possess, as mentor and guide, an oracle that will dispense, when needed, the best and wisest of human culture. Paradigm breakthroughs in every field—health, education, entertainment, space travel—are possible and even likely. Frequent confrontation with the voice of Totality will keep us on our mettle. The highest skills in an AI-driven civilization will be the framing of productive questions and the parsing of flawed or ambivalent answers.
Everyday life will be changed to such an extent that “Before AI” and “After AI” may become commonplace markers of historical time. Here is Arnold Kling, offering a glimpse [ [link removed] ] of what to expect:
Imagine what will happen when these AIs are connected to physical tools. ... You could be a farmer who never has to go into the fields—just explain to the robots what you want. Or you could be a chef who gives directions without being in the kitchen. Or a scientist who conducts experiments without having to spend all day in the lab.
The marvel is that all of these commands, which bring to mind the sorcerer’s apprentice, are conducted “using ordinary language,” observes Kling.
Americans excel at re-creating, in new guises, the physical frontier that historians tell us [ [link removed] ] had closed by the year 1890. We summon it back to life again and again. The internet was once such a frontier. AI is one now: a vast, undefined space waiting for the hordes of settlers to arrive. It’s a wild, unruly place with little law west of ChatGPT. Absolutely no one—not even the smartest rationalist—has a clue about what will happen, for good or evil, once the pioneering throngs show up.
Risks abound, but they will be minded. In a culture so fixated on protection from risk and “harm,” the concern, rather, is that these will be used as a pretext to shut down the frontier and herd the adventurers and the innovators into closely watched pens. The government will of course tell us that it’s for our own safety. Behemoths like Google and Microsoft will mutter pieties while locking out the competition.
AI is too big, and too freighted with world-historical consequences, for such small-minded games. The web was parceled out among digital oligarchs. That mistake should not be repeated. Openness to anyone with a good idea and the capacity to experiment with multiple applications must be the leading attribute of our AI frontier.
