The “doomer” narrative about artificial intelligence has sunk its claws into every corner of online discourse: The robots are coming to kill us all—or so the story goes. But before accepting the apocalypse as inevitable, ask yourself the following: Why exactly is this tale being told so often?
To unravel the “why,” we must first define the “what.” The doomer narrative envisions artificial intelligence (AI) as a runaway threat, an uncontrollable force destined to obliterate humanity. For most of us, “I, Robot [ [link removed] ]” was a sci-fi thriller; for doomers, it’s a blueprint of our coming demise.
Fact-Free Fearmongering
This fear didn’t materialize out of thin air. Its roots lie with the Singularity Institute, now called the Machine Intelligence Research Institute [ [link removed] ]. Once focused on advancing AI, the institute shifted gears to preventing what it saw as existential risks from intelligent machines. This pivot mirrored the evolution of its founder, Eliezer Yudkowsky, who transformed from an AI enthusiast into its fiercest critic. The more Yudkowsky studied the technology, and the further he ventured down the virtual rabbit hole, the more paranoid and terrified he became. Today, his warnings resound with the conviction of a modern-day Cassandra, a curious blend of Alex Jones’ alarmism and Billy Graham’s fanaticism. Earlier this year, he somberly told The Guardian [ [link removed] ] that homicidal machines will soon slaughter everyone we love. He wasn’t joking. (Let’s hope he’s wrong.)
Yudkowsky’s feverish prophecies have shaped today’s tech discourse, influencing prominent figures like Sam Altman of OpenAI and Elon Musk. Both have adopted his apocalyptic tone, arguing for tighter controls over AI. With his quiet urgency, Altman speaks like a man standing on the brink of disaster, while Musk regularly decries AI’s existential dangers. Their shared dread stems from Yudkowsky’s questionable groundwork.
But fear has a cost: It clouds judgment. This doom-ridden rhetoric has fueled growing calls for government regulation, with proponents arguing that strict oversight of AI’s development is necessary to protect humanity. Yet this partnership between Big Tech and Big Government is hardly reassuring. Once a rebellious upstart, tech has morphed into an extension of state power, crafting policies [ [link removed] ] that encroach on personal and digital freedoms [ [link removed] ]. The result is an unsettling loss of autonomy as technology becomes a tool for control. Do not be fooled by promises of grand AI regulation. Big Government and Big Tech, the very entities championing these regulations, have long trampled on our rights with impunity. Why should we expect them to change their ways now?
I, Rational
Amid the relentless AI doomerism that dominates public discourse, there are some refreshing voices of reason—like Amjad Masad [ [link removed] ], the founder of Replit, an online platform designed to make coding accessible and collaborative for everyone. It allows users to write, test and share code in dozens of programming languages, all from their web browser. Essentially, it’s like a virtual coding workspace where you can work on projects alone or with others, without needing to set up complicated tools on your computer. Rather than fanning the flames of fear with sensationalist rhetoric, Masad offers thoughtful, clearheaded analyses of AI’s risks and rewards. As a key figure in the AI community, he challenges doomers with grounded insights.
He subscribes less to the “I, Robot” narrative and more to the “I, Rational” perspective—choosing logic and levelheadedness over apocalyptic theatrics. Masad’s journey to becoming one of Silicon Valley’s most influential voices is nothing short of extraordinary. Born and raised in Jordan, Masad grew up in a modest household far removed from the United States’ gleaming tech hubs. His early life was marked by curiosity and an unrelenting drive to learn, despite limited access to resources. As a child, he developed an interest in programming, often spending hours experimenting with code on outdated computers. His fascination with technology was more than a hobby; it became a lifeline, a way to imagine a life beyond the constraints of his environment.
Masad’s hunger for opportunity eventually led him to the United States, where he arrived just over a decade ago with little more than a dream and a remarkable work ethic. Immersing himself in the tech world, he quickly rose through the ranks, earning a reputation as a brilliant engineer and forward-thinking innovator. His big break came when he founded Replit—and what started as a passion project blossomed into a powerhouse company, empowering millions of users worldwide and earning Masad a spot among the tech elite. Today, Replit is worth well over $1 billion [ [link removed] ].
In many ways, Masad’s story is living proof that the American Dream, despite the naysayers, is far from dead. He arrived in a country that he believes stands apart from all others—a land where opportunity still thrives. “It’s truly the land of the free,” he says. “Right now, if you look at what’s happening in the U.K., where people are getting arrested for tweets and Facebook posts—it’s comical and almost unthinkable for an American.”
For Masad, America is more than a place for personal freedom to flourish; it’s where cutting-edge ideas become a reality. “America is also the world’s innovation engine. Since the Wright brothers flew their plane, America has been central to almost every major innovation,” he explains. Yet what defines the nation most is something intangible but undeniable: its boundless energy. “I think Americans are unique in their optimism; there’s a youthful energy to this country that’s hard to find elsewhere,” he notes.
A Peculiar Paradox
Masad explicitly rejects AI alarmism, urging a more balanced conversation. He points out that much of the fear stems from narrative convenience, not evidence. “The Singularity Institute was supposed to build AI,” he notes dryly, “but then the founder changed his mind and started selling a doom narrative. Maybe that seemed like an easier job.” Fear is profitable, to be sure, and today, the line between tech and politics has blurred to the point of erasure. Politicians exploit AI anxieties to push agendas, framing themselves as protectors against dystopian futures, while tech companies amplify those same fears to cement their grip on power and control.
Masad is skeptical of the idea that AI will become humanity’s executioner. “I’ve yet to see someone actually explain how AI will kill everyone,” he says. “Because it plays on Hollywood tropes, people take it seriously—even without real evidence or arguments.”
The paradox lies in how the irrational can sometimes appear rational. Humans are, after all, meaning-making machines, desperate to impose order on chaos. Myths, absurd as they may seem, are our tools for navigating the unknowable. It is both ridiculous and understandable that we cling to these narratives—our fear of uncertainty demands it. The dystopian trope thrives not because it’s well argued but because it satisfies a deep-seated need for a story, a framework to articulate our unease about technologies we barely comprehend.
Masad underscores AI’s transformative potential, highlighting its ability to revolutionize society in ways that uplift rather than destroy. “I had a chronic medical issue that was very hard to diagnose,” he explains, “and I spent an inordinate amount of hours and money trying to get an answer. Ultimately, I decided to write a little bit of software, collect all the data and tests, and run it through an AI to get the right diagnosis on the first try.”
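Masad doesn’t detail how his script actually worked, but the pattern he describes is simple: gather every symptom, lab result and prior test into one structured record, then ask a capable model to reason over the whole picture at once. Purely as an illustration, here is a minimal sketch of that kind of workflow; the patient data, model name and use of the OpenAI Python client are assumptions for the example, not details from Masad’s account.

```python
# Illustrative sketch only: the symptoms, lab values and model below are made up,
# and the script assumes the OpenAI Python client (pip install openai) with an
# OPENAI_API_KEY set in the environment.
import json
from openai import OpenAI

# Step 1: pull the scattered results into one structured record.
patient_record = {
    "symptoms": ["chronic fatigue", "intermittent joint pain"],  # hypothetical
    "labs": {"CRP": 12.4, "TSH": 2.1, "ferritin": 15},           # hypothetical values
    "history": "18 months of symptoms; prior workups inconclusive",
}

# Step 2: hand the full record to the model and ask for a differential diagnosis.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # any capable model would serve
    messages=[
        {
            "role": "system",
            "content": (
                "You are assisting with a differential diagnosis. "
                "List the most likely causes and the tests that would "
                "confirm or rule out each one."
            ),
        },
        {"role": "user", "content": json.dumps(patient_record)},
    ],
)

print(response.choices[0].message.content)
```

The point is the pattern, not the particular API: aggregate everything first, then let the model weigh the entire record at once rather than one test at a time.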
In education, he envisions a future where AI-powered tutoring provides personalized learning for all, a game-changer in a world plagued by educational inequality. “For the first time,” Masad notes, “we have the technology to scale one-on-one tutoring to everyone in the world.” The impact could be greatest in underserved regions where access to quality education is limited: Students struggling in traditional classrooms could receive tailored guidance, unlocking potential that might otherwise be overlooked.
A technology this powerful, of course, comes with potential downsides. Overreliance on AI in education, for instance, could erode human connection, an essential component of teaching. Additionally, the data-driven nature of AI raises significant concerns about privacy and the commodification of student information. AI systems require vast amounts of data to function effectively, often needing access to sensitive details such as personal identifiers, learning habits and even behavioral patterns. In the education sector, this creates a troubling dynamic where student data can be harvested under the guise of personalization or progress tracking. This information, while initially intended to enhance learning experiences, is increasingly at risk of being repurposed or sold to third parties, turning students into products rather than beneficiaries.
Positivity Meets Pragmatism
While undeniably an AI optimist, Masad doesn’t shy away from acknowledging the risks. His perspective offers a much-needed dose of nuance in a debate often dominated by extremes. He doesn’t completely rule out disaster but shifts the focus to more plausible threats.
The worst-case scenario, he warns, isn’t a murderous machine uprising but a major cyberattack on critical infrastructure, one that AI could automate. He stresses that this “could totally happen without AI, but AI could help it run autonomously and work like the worms that were infamous in the ’90s.” These worms, self-replicating pieces of malicious software, wreaked havoc by exploiting vulnerabilities to spread across networks with alarming speed. Famous examples like the Melissa [ [link removed] ] and ILOVEYOU [ [link removed] ] worms caused widespread damage, crashing systems, corrupting files and disrupting businesses globally.
Despite these risks, Masad remains hopeful, imagining a future where AI elevates humanity. “In ten years, every human will summon an expert on demand, raising the world’s collective IQ,” he predicts. For Masad, AI’s true promise lies in democratizing opportunity, breaking down barriers and enabling anyone with ideas and ambition to thrive.
While doomers warn of AI’s destructive power, Masad envisions it as the next great leap forward—a tool for progress in the hands of an optimistic, enterprising society. Perhaps he’s right. Perhaps the future isn’t a foregone descent into darkness but a choice waiting to be made. And perhaps, as Masad believes, America remains the nation most capable of making the right one.
The United States can do this by fostering a balanced approach to AI development and regulation—one that prioritizes innovation while safeguarding individual freedoms. This requires transparent policies, robust privacy protections and ethical frameworks that hold both government and private entities accountable. Moreover, the U.S. must invest in education to create a workforce equipped for an AI-driven economy, while encouraging public-private partnerships that emphasize collaboration over monopolistic control. Whether the U.S. actually makes the right decisions is, of course, a question only time will answer.