Wisdom in the age of AI
One hundred and thirty years ago, Rudyard Kipling wrote “If—” as a father's counsel to his son. It opens with a challenge:
If you can keep your head when all about you
Are losing theirs and blaming it on you,
If you can trust yourself when all men doubt you,
But make allowance for their doubting too;
The poem unfolds as a series of moral tests, each asking what it takes to remain steady when judgment is tested and certainty is in short supply.
“If—” has endured because it makes a demanding case about character. Kipling suggests that judgment is forged under pressure, and that integrity requires resisting easy currents.
By the poem's end, the reward Kipling offers is not success or recognition, but agency: the ability to move through the world without surrendering one’s judgment, to stay measured in extremes, and to use time deliberately rather than reactively.
If Kipling had written this poem today, how might it have been different?
Perhaps there would have been lines like:
- If you can turn from the glowing crowd and keep your own counsel.
- If you can lay the phone aside before sleep and quiet the storm it brings.
- If you can meet an artificial mind without surrendering the human one you carry.
Kipling’s first line, “If you can keep your head…”, is needed more than ever today. Algorithms are designed to addict us. Deepfakes are designed to manipulate us. New AI chatbots and companions are designed to make us emotionally dependent. How can we “keep our head” and stay grounded amidst the swirl and siren of the digital world?
To approach that question, we need to reopen a distinction that modern life often collapses: the difference between intelligence and wisdom. And in the final newsletter of 2025 (thank you for following along this year!), we’re taking a more reflective approach to explore what wisdom looks like in the age of AI.
// The race for intelligence
As AI has moved from the margins to the mainstream, the drive to embed intelligence everywhere has accelerated.
- Across the economy, AI is framed as an accelerant. Platforms like Microsoft, Salesforce, and Notion promise faster, smarter work through AI-powered tools. Millions now rely on chatbots to draft essays, analyze data, and deploy agents that compress time, reduce friction, and deliver instant answers.
- AI has the potential to transform research. Applied to science, it can accelerate progress and open the door to new discoveries.
- AI could transform education and care. “Intelligent” systems are heralded as a way to personalize learning, expand access to mental health support, and address isolation and loneliness at scale.
This surge of innovation is built upon the simple conviction that more intelligence is inherently better—that increasingly intelligent machines will turbocharge productivity, unlock economic growth, and usher in new forms of self-actualization. Smarter, faster, more powerful: the logic feels self-evident.
This simple premise has inspired new startups, led to pivots by the Magnificent Seven in Big Tech, and fundamentally altered the geopolitical landscape.
Geopolitics is now inseparable from AI. Nations are seeking to establish digital sovereignty and control over their AI companies, chips, and supply chains. In the United States, President Trump has advanced a brand of state-influenced capitalism in which the government influences where, what, and how an AI company like Nvidia sells its chips.
AI companies are in fierce competition with one another, pushing new model releases, brandishing new features, and making big claims about the intelligence of their models. In 2025, Elon Musk said that Grok is “better than PhD level in everything.” Sam Altman wrote that “ChatGPT is already more powerful than any human who has ever lived.”
The latest generation of AI models is undeniably powerful. Capabilities that would have seemed implausible just a few years ago are now available to anyone with a browser.
But speed and scale are not the same as direction.
Amid the spectacle of rapid advances, a deeper set of questions surfaces. Is human flourishing the natural consequence of intelligence that is faster, stronger, and more expansive? Do technological progress and human well-being reliably rise together? Or has our fixation on intelligence crowded out something quieter, harder to measure, but still essential: wisdom?
// A turn to wisdom
Most definitions of wisdom treat it as distinct from intelligence. Where intelligence accumulates information and capability, wisdom concerns itself with the exercise of judgment, drawing on knowledge and experience in the service of all that is worthy and good.
At its core, wisdom is rooted in discernment: in a world saturated with information, ideas, and options, it is the ability to recognize what matters most, and to act deliberately in accordance with that understanding.
If intelligence had a shape, it might be spiky, soaring, fast. Wisdom’s shape is different: rounded, rooted, interconnected. To cultivate wisdom is not to add more information, but rather to enter a relationship with other human capacities: judgment shaped by experience and expertise, moral awareness, purpose, introspection, and humility in the face of uncertainty.
For thousands of years, thinkers have wrestled with wisdom:
- Confucius said, “By three methods we may learn wisdom: First, by reflection, which is noblest; second, by imitation, which is easiest; and third by experience, which is the bitterest.”
- Immanuel Kant said, “Science is organized knowledge. Wisdom is organized life.”
- bell hooks said, “Wisdom is not knowing everything; it is knowing what to do with what you know.”
In a recent newsletter, Nicolas Michaelsen, who has written about the birth of the “wisdom economy” in the age of AI, described the distinction like this: “Intelligence breaks problems into parts. Wisdom connects them. Intelligence seeks mastery over systems. Wisdom seeks harmony within them.”
// The wisdom gap
And yet, immersed in algorithmically engineered feeds, endless content, and surveillance-based advertising, our capacity for discernment is steadily dulled. The very systems optimized to amplify intelligence and efficiency quietly erode the habits required for everyday wisdom.
The Center for Humane Technology (CHT), a Project Liberty Alliance member, describes this dynamic as “the Wisdom Gap”: a widening mismatch between the scale of the challenges we face and our capacity to respond thoughtfully to them. CHT argues that persuasive technology deepens this gap in two ways:
First, it thickens the informational environment. Deepfakes and other forms of synthetic media make it increasingly difficult to distinguish fact from fiction.
Second, it decreases our ability to make sense of all that complexity because our technologies push us into narrower, more polarizing worldviews.
Hannah Arendt, the German-born American historian and philosopher who became one of the 20th century’s most influential political thinkers, argued in her 1961 anthology Between Past and Future that humans need to rediscover the capacity to think for themselves. Thinking was not inherited; it had to be actively practiced. To think, for Arendt, was not simply a private act. It was a condition of freedom itself.
One of the dangers of the AI era is not necessarily that machines may become intelligent, but that humans may become passive. As we fixate on measuring and marveling at AI, we risk neglecting the cultivation of our own judgment.
Arendt saw the erosion of independent thinking as a precondition for totalitarian power because when people stop exercising judgment, they begin outsourcing agency. Persuasive technologies may widen the gap between intelligence and wisdom, but they do not inherently rob a human of their ability to choose. A healthy society depends on citizens who continue to think, discern, and act for themselves.
To summon our wisdom in the age of AI, we might need to grant machines less power. Jaron Lanier, a contemporary computer scientist and philosopher, outlines a simple step:
“The most pragmatic position is to think of A.I. as a tool, not a creature… Mythologizing the technology only makes it more likely that we’ll fail to operate it well—and this kind of thinking limits our imaginations, tying them to yesterday’s dreams. We can work better under the assumption that there is no such thing as A.I. The sooner we understand this, the sooner we’ll start managing our new technology intelligently.”
The more we anthropomorphize and mythologize AI, the more power we give it. And the more we make AI the protagonist in our story, the more we lose practice in recognizing our own power and cultivating our own wisdom.
Norbert Wiener, the mathematician and philosopher who helped lay the foundations of modern computing, once warned that “the machine’s danger to society is not from the machine itself but from what man makes of it.”
That distinction matters. It shifts the question from what AI will become to how we choose to relate to it. Opportunities to exercise wisdom abound: deciding where intelligence should be applied, where it should be constrained, and where human judgment should not be outsourced.
In the age of AI, wisdom looks like resisting the temptation to mythologize machines or surrender authorship of our collective story. It means cultivating discernment in systems optimized for reaction, moral judgment in environments built for scale, and agency in a world where choice is increasingly automated.
It is a daunting task. But for those who can still “keep their head,” as Kipling put it, the age of AI is not only a test of intelligence, but a foundational ground for wisdom.