In a viral video produced last month by The Babylon Bee, two college students are amazed by the accuracy of the answers they receive from ChatGPT. In the next scene, however, we learn the secret to the technology’s insight: ChatGPT is revealed as a harried computer geek, racing to answer the entire world’s queries from a room in headquarters’ basement.
The video is intended as a satire, and it is nothing if not funny. (“Michael, your response to that lady in Singapore took 0.8 milliseconds too long,” says the company boss, to which Michael screams, “Well, I’m sorry, but I am going as fast as I can.”) Yet behind the comedy and clichés, there lurks a more serious point: Namely, we have yet to grasp this new AI paradigm. Indeed, and despite all evidence to the contrary, we persist in locating intelligence in a human context.
And to several guests in a recent spate of EconTalk episodes about AI, it is precisely that naiveté that will ultimately be our downfall.
For both Erik Hoel and Eliezer Yudkowsky, history is but a long cautionary tale about what happens when intelligence is left to its own devices—or rather, when it’s left to evolve. Hoel, a neuroscientist and author, explains in his conversation about AI’s threat to humanity that when—not if—AI models learn to do everything humans can, the most likely scenario will recall our own disposal of the less-smart Neanderthals. Yudkowsky, meanwhile, looks back to our days on the savannahs and finds an equally frightening takeaway: Given enough time, any intelligence optimized for specific outcomes ends up doing strange things (see, for example, humanity’s moon landings and addiction to fattening ice cream). Sentience, in their telling, is completely beside the point. Whether the neural networks are real or artificial, they will invariably behave in ways we should be actively working to prevent. (Which, as Swedish philosopher Nick Bostrom insisted back in 2014, will in any case be impossible to do.)
Then again, to other guests, these are foolish predictions (with all the redundancy that the phrase implies). Kevin Kelly, for example, the founding editor of Wired, believes that the panic about AI is just scarcity mindset in disguise. Instead of worrying about which professions and endeavors will end up on the chopping block, he says, we should recognize that technological advancements generally make for a bigger pie. Columnist and AI optimist Tyler Cowen is more than willing to heed his advice, having concluded that AI offers enough potential benefits to justify its imperfections. Better, he says, to focus on improving our regulatory requirements than to insist on regulating the technology.
In the end, while these guests may see themselves as standing on either side of a gaping divide, there is nonetheless a goal on which they can all agree: Determining—and then preserving—what distinguishes us from AI.
In his conversation with Cowen, Russ describes writing a poem about a crying baby on a plane. Then, he asked ChatGPT to produce its own version—which wasn’t, he concluded, half bad. But could AI ever produce a poem as moving as the one by Dana Gioia about the untimely death of his cousin? Or one as alliteratively awesome as Zach Weinersmith’s riff on Beowulf? This is, in fact, the point that author Ian Leslie raises in his discussion of the “robot aesthetic,” or how humans’ imitation of computers is chipping away at our singularity. Could it be that the human enterprise hinges on our stubborn—or even unconscious—instinct to create a new metaphor or sing in an unexpected key?
Maybe.
It may be, too, that time is running out for us humans—if not for our survival, then for our creativity. Today, however, some of us still feel the instinct to create moving poetry. Until computers feel a similar need, there may yet be hope for humanity.
Marla Braverman, editor at EconTalk
Mining the Conversation
Past EconTalk episodes that relate to the subject of AI and its impact on the future of humanity:
Rodney Brooks on Artificial Intelligence: Just as we’ve gotten over our Hoel- and Yudkowsky-induced panic, the emeritus professor at MIT points out that in the history of technology, we have consistently underestimated the way it will change our lives. From driverless cars’ transformation of cities to indoor farming’s disruption of our system of food supply, technological innovation will remake our world like magic—which, Brooks argues, it kind of is.
Gary Marcus on the Future of Artificial Intelligence and the Brain: Seven years ago, the New York University professor emeritus of neural science and psychology reassured us that truly transformative AI is still a long way away—long enough that we can raise standards in programming to preclude catastrophic mistakes. Sense a theme? He also discussed the need, when dozens of industries disappear, for us to separate work from how we make meaning, and to accept that many people’s lives will be lived most fully in the virtual sphere.
Sam Altman on Startups, Venture Capital, and the Y Combinator: While most of this 2014 conversation with the CEO of OpenAI focused on his then-startup accelerator’s strategy for discovering and launching disruptive companies, halfway through, Altman discusses his thinking about the reason it’s so hard for us to conceptualize new platforms. (Hint: the things of which they’re made do not yet exist, and we haven’t yet thought of the reasons we’ll need them.)
Conversation Starters
An eclectic selection of books, films, and podcasts for enhancing your own conversations on the topic.
2001: A Space Odyssey. The classic film by Stanley Kubrick featured one of science fiction’s most iconic representations of AI of all time, more than half a century before the launch of ChatGPT. Known for a voice that is equal parts soothing and unsettling, HAL 9000, the sentient artificial general intelligence computer that controls a mission to Jupiter, is capable of emotion, logical decision-making, and self-awareness. He is also, as the crew discovers, capable of developing human-like traits. When his jealousy and anger give way to homicidal rage, he offers a critical lesson for those who would mix AI and space flight: Namely, always make sure that the computer can’t read lips.
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. In a warning to all those who would act first and work out the details later, Nick Bostrom’s 2014 book—like his EconTalk episode from that same year—argues that if machine brains surpass human brains in general intelligence, their ability to learn at a much faster rate than we do will make them nearly impossible to control—never mind, like HAL 9000, to turn off. The only way, he explains, to prevent existential catastrophe is to instill the first superintelligence with goals compatible with human well-being. And the only problem with this solution is that it’s nearly impossible to do.
Machines Like Me by Ian McEwan. After squandering his inheritance on the purchase of Adam, one of the first batch of artificial humans to debut in 1980s dystopian London, Charlie offers to let his love interest, Miranda, program half his personality. Yet as Adam collects experiences through social interactions and research, his consciousness develops a moral code that threatens his parents’ relationship, their dreams, even their liberty. Read till the end for a cameo appearance by Alan Turing, the British mathematician considered the father of AI.
God, Human, Animal, Machine by Meghan O’Gieblyn. You can take the girl out of Bible school, but you can’t take the search for God out of the girl. At least not when that search takes the form of questions about free will, the mind’s relationship to the body, and the possibility of immortality. Once upon a time, O’Gieblyn explains in this collection of personal essays, these challenges were taken up by philosophers and theologians; now, they’re engineering problems solved by digital technologies. Whether the answers are as satisfying, however, you’ll have to read to find out.
Sandra, a podcast mini-series by Gimlet Media. Looking to shed her small-town existence through a job at a mysterious tech company, Helen becomes the voice in smartphones that answers avian queries. While at first delighted by her presence in other people’s lives, Helen quickly becomes disillusioned by their learned helplessness, and starts to wonder what our increasing outsourcing of thinking means for the future of humanity. As do we all—don’t we, Siri?
Most Talked About
What was the most listened-to EconTalk episode of the first quarter of 2023? The downloads have declared a winner: neuroscientist and philosopher Sam Harris on the problem with trying to meditate one’s way out of a problem, what meditation really reveals, and why the act of seeking happiness causes us to overlook what we’re really looking for.
LISTEN NOW
Winding Up
Upcoming EconTalk guests to listen out for include:
Daron Acemoglu on AI and social inequality
Erik Hoel on free will
Zvi Mowshowitz on AI and the Dial of Progress
Adam Mastroianni on why your brain isn't connected to your ears