From xxxxxx <[email protected]>
Subject The Age of De-Skilling
Date October 28, 2025 12:05 AM

THE AGE OF DE-SKILLING  


 

Kwame Anthony Appiah
October 26, 2025
The Atlantic



_Will AI stretch our minds—or stunt them?_


 

The fretting has swelled from a murmur to a clamor, all variations on
the same foreboding theme: “Your Brain on ChatGPT.” “AI Is Making You
Dumber.” “AI Is Killing Critical Thinking.”
Once, the fear was of a runaway intelligence that would wipe us out,
maybe while turning the planet into a paper-clip factory. Now that
chatbots are going the way of Google—moving from the miraculous to
the taken-for-granted—the anxiety has shifted, too, from apocalypse
to atrophy. Teachers, especially, say they’re beginning to see the
rot. The term for it is unlovely but not inapt: _de-skilling_.

The worry is far from fanciful. Kids who turn to Gemini to
summarize _Twelfth Night_ may never learn to wrestle with
Shakespeare on their own. Aspiring lawyers who use Harvey AI for legal
analysis may fail to develop the interpretive muscle their
predecessors took for granted. In a recent study, several hundred U.K.
participants were given a standard critical-thinking test and were
interviewed about their AI use for finding information or making
decisions. Younger users leaned more on the technology, and scored
lower on the test. _Use it or lose it_ was the basic takeaway.
Another study looked at physicians performing colonoscopies: After
three months of using an AI system to help flag polyps, they became
less adept at spotting them unaided.

But the real puzzle isn’t whether de-skilling exists—it plainly
does—but rather what kind of thing it is. Are all forms of
de-skilling corrosive? Or are there kinds that we can live with, that
might even be welcome? _De-skilling_ is a catchall term for losses
of very different kinds: some costly, some trivial, some oddly
generative. To grasp what’s at stake, we have to look closely at the
ways that skill frays, fades, or mutates when new technologies arrive.

Our chatbots are new: The “transformer” architecture they rely on
was invented in 2017, and ChatGPT made its public debut just five
years later. But the fear that a new technology might blunt the mind
is ancient. In the _Phaedrus_, which dates to the fourth century
B.C.E., Socrates recounts a myth in which the Egyptian god Thoth
offers King Thamus the gift of writing—“a recipe for memory and
for wisdom.” Thamus is unmoved. Writing, he warns, will do the
opposite: It will breed forgetfulness, letting people trade the labor
of recollection for marks on papyrus, mistaking the appearance of
understanding for the thing itself. Socrates sides with Thamus.
Written words, he complains, never answer your particular questions;
reply to everyone the same way, sage and fool alike; and are helpless
when they’re misunderstood.

Of course, the reason we know all this—the reason the episode keeps
turning up in Whiggish histories of technology—is that Plato wrote
it down. Yet the critics of writing weren’t entirely wrong. In oral
cultures, bards carried epics in their heads; griots could reel off
centuries of genealogy on demand. Writing made such prowess
unnecessary. You could now take in ideas without wrestling with them.
Dialogue demands replies: clarification, objection, revision.
(Sometimes “Very true, Socrates” did the trick, but still.)
Reading, by contrast, lets you bask in another’s brilliance, nodding
along without ever testing yourself against it.

What looks like a loss from one angle, though, can look like a gain
from another. Writing opened new mental territories: commentary,
jurisprudence, reliable history, science. Walter J. Ong, the scholar
of orality and literacy, put it crisply: “Writing is a technology
that restructures thought.” The pattern is familiar. When sailors
began using sextants, they left behind the seafarer’s skycraft, the
detailed reading of stars that once steered them safely home. Later,
satellite navigation brought an end to sextant skills. Owning a Model
T once meant moonlighting as a mechanic—knowing how to patch tubes,
set ignition timing by ear, coax the car’s engine back to life after
a stall. Today’s highly reliable engines seal off their secrets.
Slide rules yielded to calculators, calculators to computers. Each
time, individual virtuosity waned, but overall performance advanced.

It’s a reassuring pattern—something let go, something else
acquired. But some gains come with deeper costs. They unsettle not
only what people can do but also who they feel themselves to be.

In the 1980s, the social psychologist Shoshana Zuboff spent time at
pulp mills in the southern United States as they shifted from manual
to computerized control. Operators who had once judged pulp by touch
(“Is it slick? Is it sticky?”) now sat in air-conditioned rooms
watching numbers scroll across screens, their old skills unexercised
and unvalued. “Doing my job through the computer, it feels
different,” one told Zuboff. “It is like you’re riding a big,
powerful horse, but someone is sitting behind you on the saddle
holding the reins.” The new system was faster, cleaner, safer; it
also drained the work of its meaning.

The sociologist Richard Sennett recorded a similar transformation at a
Boston bakery. In the 1970s, the workers there were Greek men who used
their noses and eyes to judge when the bread was ready and took pride
in their craft; in the 1990s, their successors interacted with a touch
screen on a Windows-style controller. Bread became a screen icon—its
color inferred from data, its variety chosen from a digital menu. The
thinning of skills brought a thinning of identity. The bread was still
good, but the kitchen workers knew they weren’t really bakers
anymore. One told Sennett, half-joking, “Baking, shoemaking,
printing—you name it, I’ve got the skills.” She meant that she
didn’t really need any.

The cultural realm, certainly, has had a long retreat from touch. In
the middle-class homes of 19th-century Europe, to love music usually
meant to _play_ it. Symphonies reached the parlor not by stereo but
by piano reduction—four hands, one keyboard, Brahms’s _Symphony
No. 1_ conjured as best the household could manage. It took skill:
reading notation, mastering technique, evoking an orchestra through
your fingers. To hear the music you wanted, you had to practice.

Then the gramophone took off, and the parlor pianos started to gather
dust. The gains were obvious: You could summon the orchestra itself
into your living room, expand your ear from salon trifles to Debussy,
Strauss, Sibelius. The modern music lover may have been less of a
performer but, in a sense, more of a listener. Still, breadth came at
the expense of depth. Practicing a piece left you with an intimate
feel for its seams and contours. Did your kid with the shiny Victrola
get that?

That sense of estrangement—of being a step removed from the real
thing—shows up whenever a powerful new tool arrives. The slide rule,
starting in the 17th century, reduced the need for expertise at mental
math; centuries later, the pocket calculator stirred unease among some
engineers, who feared the fading of number sense. Such worries
weren’t groundless. Pressing “Cos” on a keypad got you a number,
but the meaning behind it could slip away. Even in more rarefied
precincts, the worry persisted. The MIT physicist Victor Weisskopf
was troubled by
his colleagues’ growing reliance on computer simulations. “The
computer understands the answer,” he told them when they handed him
their printouts, “but I don’t think you understand the answer.”
It was the disquiet of an Egyptian king, digital edition, convinced
that output was being mistaken for insight.

In what Zuboff called “the age of the smart machine,” automation
was mainly confined to the workplace—the mill, the industrial
bakery, the cockpit. In the age of the PC and then the web, technology
escaped into the home, becoming general purpose, woven into everyday
life. By the 2000s, researchers were already asking what search
engines were doing to us. You’d see headlines such as “This Is
Your Brain on Google.”
Although the panic was overplayed, some effects were real. A widely
cited study found
that, in certain circumstances, people would remember _where_ a fact
could be found rather than the fact itself.

In truth, human cognition has always leaked beyond the skull—into
instruments, symbols, and one another. (Think of the couples you know:
One person remembers birthdays, the other where the passports live.)
From the time of tally bones to the era of clay tablets, we’ve
been storing thought in the world for tens of millennia. Plenty of
creatures use tools, but their know-how dies with them; ours
accumulates as culture—a relay system for intelligence. We inherit
it, extend it, and build upon it, so that each generation can climb
higher than the last: moving from pressure-flaked blades to bone
needles, to printing presses, to quantum computing. This compounding
of insight—externalized, preserved, shared—is what sets _Homo
sapiens_ apart. Bonobos live in the ecological present. We live in
history.

Accumulation, meanwhile, has a critical consequence: It drives
specialization. As knowledge expands, it no longer resides equally in
every head. In small bands, anyone could track game, gather plants,
and make fire. But as societies scaled up after the agrarian
revolution, crafts and guilds proliferated—toolmakers who could
forge an edge that held, masons who knew how to keep a vault from
collapsing, glassblowers who refined closely guarded recipes and
techniques. Skills once lodged in the body moved into tools and rose
into institutions. Over time, the division of labor became,
inevitably, a division of _cognitive_ labor.

The philosopher Hilary Putnam once remarked that he could use the
word _elm_ even though he couldn’t tell an elm from a beech.
Reference is social: You can talk about elms because others in your
language community—botanists, gardeners, foresters—can identify
them. What’s true of language is true of knowledge. Human capability
resides not solely in individuals but in the networks they form, each
of us depending on others to fill in what we can’t supply ourselves.
Scale turned social exchange into systemic interdependence.

The result is a world in which, in a classic example, nobody knows how to make a
pencil. An individual would need the skills of foresters, saw millers,
miners, chemists, lacquerers—an invisible network of crafts behind
even the simplest object. Mark Twain, in _A Connecticut Yankee in
King Arthur’s Court_, imagined a 19th-century engineer dropped into
Camelot dazzling his hosts with modern wonders. Readers went with it.
But drop his 21st-century counterpart into the same setting, and
he’d be helpless. Manufacture insulated wire? Mix a batch of
dynamite? Build a telegraph from scratch? Most of us would be stymied
once we failed to get onto the Wi-Fi.

The cognitive division of labor is now so advanced that two
physicists may barely understand each other—one modeling dark
matter, the other building quantum sensors. Scientific mastery now
means knowing more and more about less and less. This concentration
yields astonishing progress, but it also means grasping how limited
our competence is: Specialists inherit conceptual tools they can use
but can no longer make. Even mathematics, long romanticized as the
realm of the solitary genius, now works like this. When Andrew Wiles
proved Fermat’s Last Theorem, he didn’t re-derive every lemma
himself; he assembled results that he trusted but didn’t personally
reproduce, building a structure he could see whole even if he hadn’t
cut each beam.

The widening of collaboration has changed what it means to know
something. Knowledge, once imagined as a possession, has become a
relation—a matter of how well we can locate, interpret, and
synthesize what others know. We live inside a web of distributed
intelligence, dependent on specialists, databases, and instruments to
extend our reach. The scale tells the story: The _Nature_ paper that
announced the structure of DNA had two authors; a _Nature_ paper in
genomics today might have 40. The two papers announcing the Higgs
boson? Thousands. Big science is big for a reason. It was only a
matter of time before the network acquired a new participant—one
that could not just store information but imitate understanding
itself.

The old distinction between information and skill, between
“knowing _that_” and “knowing _how_,” has grown blurry in
the era of large language models. In one sense, these models are
static: a frozen matrix of weights you could download to your laptop.
In another, they’re dynamic; once running, they generate responses
on the fly. They do what Socrates complained writing could not: They
answer questions, adjust to an interlocutor, carry on a conversation.
(Sometimes even with themselves; when they feed their own outputs back
as inputs, AI researchers call it “reasoning.”) It wasn’t hard
to imagine Google as an extension of memory; a large language model
feels, to many, more like a stand-in for the mind itself. In
harnessing new forms of artificial intelligence, is our own
intelligence being amplified—or is it the artificial kind that, on
little cat feet, is coming into its own?

We can’t put the genie back in the bottle; we _can_ decide what
spells to have it cast. When people talk about de-skilling, they
usually picture an individual who’s lost a knack for something—the
pilot whose hand-flying gets rusty, the doctor who misses tumors
without an AI assist. But most modern work is collaborative, and the
arrival of AI hasn’t changed that. The issue isn’t how humans
compare to bots but how humans who use bots compare to those who
don’t.

Some people fear that reliance on AI will make us worse in ways that
will swamp its promised benefits. Whereas Dario Amodei, the CEO of
Anthropic, sanguinely imagines a “country of geniuses,” they
foresee a country of idiots. It’s an echo of the old debate over
“risk compensation”: Add seatbelts or antilock brakes, some social
scientists argued a few decades ago, and people will simply drive more
recklessly, their tech-boosted confidence leading them to spend the
safety margin. Research eventually showed a more encouraging result:
People do adjust, but only partially, so that substantial benefits
remain.

Something similar has seemed to hold for the clinical use of AI, which
has been common in hospitals for more than a decade. Think back to
that colonoscopy study: After performing AI-assisted procedures,
gastroenterologists saw their unaided rate of polyp detection drop by
six percentage points. But when another study pooled
data from 24,000 patients, a fuller picture emerged: AI assistance
raised overall detection rates by roughly 20 percent. (The AI here was
an expert system—a narrow, reliable form of machine learning, not
the generative kind that powers chatbots.) Because higher detection
rates mean fewer missed cancers, this “centaur” approach was
plainly beneficial, regardless of whether individual clinicians became
fractionally less sharp. If the collaboration is saving lives,
gastroenterologists would be irresponsible to insist on flying solo
out of pride.

In other domains, the more skillful the person, the more skillful the
collaboration—or so some recent studies suggest. One of them found
that humans outperformed bots when sorting images of two kinds of
wrens and two kinds of woodpeckers. But when the task was spotting
fake hotel reviews, the bots won. (Game recognizes game, I guess.)
Then the researchers paired people with the bots, letting the humans
make judgments informed by the machine’s suggestions. The outcome
depended on the task. Where human intuition was weak, as with the
hotel reviews, people second-guessed the bot too much and dragged the
results down. Where their intuitions were good, they seemed to work in
concert with the machine, trusting their own judgment when they were
sure of it and realizing when the system had caught something they’d
missed. With the birds, the duo of human and bot beat either alone.

The same logic holds elsewhere: Once a machine enters the workflow,
mastery may shift from production to appraisal. A 2024 study of coders
using GitHub Copilot found that AI use seemed to redirect human skill
rather than obviate it. Coders spent less time generating code and
more time assessing it—checking for logic errors, catching edge
cases, cleaning up the script. The skill migrated from composition to
supervision.

That, more and more, is what “humans in the loop” has to mean.
Expertise shifts from producing the first draft to editing it, from
speed to judgment. Generative AI is a probabilistic system, not a
deterministic one; it returns likelihoods, not truth. When the stakes
are real, skilled human agents have to remain accountable for the
call—noticing when the model has drifted from reality, and treating
its output as a hypothesis to test, not an answer to obey. It’s an
emergent skill, and a critical one. The future of expertise will
depend not just on how good our tools are but on how well we think
alongside them.

But collaboration presupposes competence. A centaur goes in circles if
the human half doesn’t know what it’s doing. That’s where the
panic over pedagogy comes in. You can’t become de-skilled if you
were never skilled in the first place. And how do you inculcate basic
competencies in an age when the world’s best homework machine
snuggles into every student’s pocket?

Those of us who teach have a lot of homework of our own to do. Our old
standbys need a rebuild; in the past couple of years, too many college
kids have, in an unsettling phrase, ended up “majoring in
ChatGPT.” Yet it’s too soon to pronounce confidently what the
overall pedagogical effect of AI will be. Yes, AI can dull some edges.
Used well, it can also sharpen them.

Consider a recent randomized trial in a large Harvard physics course.
Half of the students learned two lessons in the traditional “best
practice” mode: an active, hands-on class led by a skilled
instructor. The other half used a custom-built AI tutor. Then they
switched. In both rounds, the AI-tutored students came out ahead—by
a lot. They didn’t just learn more. They worked faster, too, and
reported feeling more motivated and engaged. The system had been
designed to behave like a good coach: showing you how to break big
problems into smaller ones, offering hints instead of blurting out
answers, titrating feedback and adjusting to each student’s pace.

That’s what made the old-style tutorial system powerful: attention.
I remember my first weeks at Cambridge University, sitting one-on-one
with my biochemistry tutor. When I said, “I sort of get it,” he
pressed until we were both sure that I did. That targeted focus was
the essence of a Cambridge supervision. If custom-fitted in the right
way, large language models promise to mass-produce that kind of
attention—not the cardigan, not the burnished briar, not the pensive
moue, but the steady, responsive pressure that turns confusion into
competence.

Machines won’t replace mentors. What they promise to do is handle
the routine parts of tutoring—checking algebra, drilling lemmas,
reminding students to write the units, and making sure they grasp how
membrane channels work. This, in theory, can free the teacher to focus
on other things that matter: explaining the big ideas, pushing for
elegance, talking about careers, noticing when a student is burning
out.

That’s the upbeat scenario, anyway. We should be cautious about
generalizing from one study. (A study of
Turkish high-school students found no real gains from the use of a
tutor bot.) And we should be mindful that those physics students put
their tutor bot to good use because they had in-class exams to
face—a proctor, a stopwatch, a grader’s cold eye.

We should also be mindful that what works for STEM courses may not
work for the humanities. The term paper, for all its tedium, teaches a
discipline that’s hard to reproduce in conversation: building an
argument step by step, weighing evidence, organizing material, honing
a voice. Some of us who teach undergrads have started telling
ambitious students that if they write a paper, we’ll read and
discuss it with them, but it won’t count toward their grade.
That’s a salve, not a solution. In a curious cultural rewind,
orality may have to carry more of the load. Will Socrates,
dialogue’s great defender, have the last word after all?

Erosive de-skilling remains a prospect that can’t be wished away:
the steady atrophy of basic cognitive or perceptual capacities through
overreliance on tools, with no compensating gain. Such deficits can
deplete a system’s reserves—abilities you seldom need but must
have when things go wrong. Without them, resilience falters and
fragility creeps in. Think of the airline pilot who spends thousands
of hours supervising the autopilot but freezes when the system does.
Some automation theorists distinguish between “humans in the
loop,” who stay actively engaged, and “humans _on_ the loop,”
who merely sign off after a machine has done the work. The second,
poorly managed, produces what the industrial psychologist Lisanne
Bainbridge long ago warned of: role confusion, diminished awareness,
fading readiness. Like a lifeguard who spends most days watching
capable swimmers in calm water, such human supervisors rarely need to
act—but when they do, they must act fast, and deftly.

The same dynamic shadows office work of every kind. When lawyers,
project managers, and analysts spend months approving what the system
has already drafted or inferred, they become “on the loop,” and
out of practice. It’s the paradox of partial automation—the better
the system performs, the less people have to stay sharp, and the less
prepared they are for the rare moments when performance fails. The
remedy probably lies in institutional design. For example, a workplace
could stage regular drills—akin to a pilot’s recurrent
flight-simulator training—in which people must challenge the machine
and ensure that their capacities for genuine judgment haven’t
decayed in the long stretches of smooth flight.

Reserve skills, in many cases, don’t need to be universal; they just
need to exist somewhere in the system, as with those elm experts.
That’s why the Naval Academy, alarmed by the prospect of GPS
jamming, brought back basic celestial-navigation training after years
of neglect. Most sailors will never touch a sextant on the high seas,
but if a few of them acquire proficiency, they may be enough to steady
a fleet if the satellites go dark. The goal is to ensure that at least
some embodied competence survives, so that when a system stumbles, the
human can still stand—or at least stay afloat.

The most troubling prospect of all is what might be
called _constitutive_ de-skilling: the erosion of the capacities
that make us human in the first place. Judgment, imagination, empathy,
the feel for meaning and proportion—these aren’t backups;
they’re daily practices. If, in Jean-Paul Sartre’s fearful
formulation, we were to become “the machine’s machine,” the loss
would show up in the texture of ordinary life. What might vanish is
the tacit, embodied knowledge that underwrites our everyday
discernment. If people were to learn to frame questions the way the
system prefers them, to choose from its menu of plausible replies, the
damage wouldn’t take the form of spectacular failures of judgment so
much as a gradual attenuation of our character: shallower
conversation, a reduced appetite for ambiguity, a drift toward
automatic phrasing where once we would have searched for the right
word, the quiet substitution of fluency for understanding. To offload
those faculties would be, in effect, to offload ourselves. Losing them
wouldn’t simply change how we work; it would change who we are.

Most forms of de-skilling, if you take the long view, are benign. Some
skills became obsolete because the infrastructure that sustained them
also had. Telegraphy required fluency in dots and dashes; linotype, a
deft hand at a molten-metal keyboard; flatbed film editing, the touch
of a grease pencil and splicing tape, plus a mental map of where
scenes lived across reels and soundtracks. When the telegraph lines,
hot-metal presses, and celluloid reels disappeared, so did the crafts
they supported.

Another kind of de-skilling represents the elimination of drudgery.
Few of us mourn the loss of hand-scrubbing laundry, or grinding
through long division on paper. A neuroscientist I know swears by LLMs
for speeding the boilerplate-heavy business of drafting grant
proposals. He’s still responsible for the content, but if his
grant-writing chops decline, he’s unbothered. That’s not science,
in his view; it’s a performance demanded by the research economy.
Offloading some of it gives him back time for discovery.

Occupational de-skilling can, in fact, be democratizing, widening the
circle of who gets to do a job. For scientists who struggle with
English, chatbots can smooth the drafting of
institutional-review-board statements, clearing a linguistic hurdle
that has little to do with the quality of their research. De-skilling
here broadens access. Or think of Sennett’s bakery and the Greek men
who used to staff the kitchen floor. The ovens burned their arms, the
old-fashioned dough beaters pulled their muscles, and heavy trays of
loaves strained their backs. By the ’90s, when the system ran on a
Windows controller, the workforce looked different: A multiethnic mix
of men and women stood at the screens, tapping icons. The craft had
shrunk; the eligible workforce had grown. (And yes, their labor had
grown cheaper: a wider gate, a lower wage.)

We often lose skills simply because tech lets us put our time to
better use and develop skills further up the proverbial value chain.
At one of Zuboff’s pulp mills, operators who were freed from manual
activity could spend more time anticipating and forestalling problems.
“Sitting in this room and just thinking has become part of my
job,” one said. Zuboff called this _reskilling_: action skills
giving way to abstraction and procedural reasoning, or what she termed
“intellective skills.” Something similar happened with accountants
after the arrival of spreadsheet programs such as VisiCalc; no longer
tasked with totting up columns of numbers, they could spend more time
on tax strategy and risk analysis.

More radically, new technologies can summon new skills into being.
Before the microscope, there were naturalists but no microscopists:
Robert Hooke and Antonie van Leeuwenhoek had to invent the practice of
seeing and interpreting the invisible. Filmmaking didn’t merely
borrow from theater; it brought forth cinematographers and editors
whose crafts had no real precedent. Each leap enlarged the field of
the possible. The same may prove true now. Working with large language
models, my younger colleagues insist, is already teaching a new kind
of craftsmanship—prompting, probing, catching bias and
hallucination, and, yes, learning to think in tandem with the machine.
These are emergent skills, born of entanglement with a digital
architecture that isn’t going anywhere. Important technologies, by
their nature, will usher forth crafts and callings we don’t yet have
names for.

The hard part is deciding, without nostalgia or inertia, which skills
are keepers and which are castoffs. None of us likes to see hard-won
abilities discarded as obsolete, which is why we have to resist the
tug of sentimentality. Every advance has cost something. Literacy
dulled feats of memory but created new powers of analysis. Calculators
did a number on mental arithmetic; they also enabled more people to
“do the math.” Recorded sound weakened everyday musical competence
but changed how we listen. And today? Surely we have some say in
whether LLMs expand our minds or shrink them.

Throughout human history, our capabilities have never stayed put.
Know-how has always flowed outward—from hand to tool to system.
Individual acumen has diffused into collective, coordinated
intelligence, propelled by our age-old habit of externalizing thought:
stowing memory in marks, logic in machines, judgment in institutions,
and, lately, prediction in algorithms. The specialization that once
produced guilds now produces research consortia; what once passed
among masters and apprentices now circulates through networks and
digital matrices. Generative AI—a statistical condensation of human
knowledge—is simply the latest chapter in our long apprenticeship to
our own inventions.

The most pressing question, then, is how to keep our agency intact:
how to remain the authors of the systems that are now poised to take
on so much of our thinking. Each generation has had to learn how to
work with its newly acquired cognitive prostheses, whether stylus,
scroll, or smartphone. What’s new is the speed and intimacy of the
exchange: tools that learn from us as we learn from them. Stewardship
now means ensuring that the capacities in which our humanity
resides—judgment, imagination, understanding—stay alive in us. If
there’s one skill we can’t afford to lose, it’s the skill of
knowing which of them matter.

About the Author

_Kwame Anthony Appiah is a
professor of philosophy and law at New York University and the author
of Captive Gods: Religion and the Rise of Social Science._


* artificial intelligence
* mental skills
* skilled labor
* Large language models


 

 

 


 

 
