From xxxxxx <[email protected]>
Subject Sunday Science: For Some Patients, the ‘Inner Voice’ May Soon Be Audible
Date August 18, 2025 5:00 AM

SUNDAY SCIENCE: FOR SOME PATIENTS, THE ‘INNER VOICE’ MAY SOON BE AUDIBLE


 

Carl Zimmer
August 14, 2025
The New York Times



_In a recent study, scientists successfully decoded not only the words people tried to say but the words they merely imagined saying._


Casey Harrell, who has amyotrophic lateral sclerosis, is a volunteer in the long-running clinical trial BrainGate2 and uses a brain-machine interface to hold conversations with his family and friends. Ian C. Bates for The New York Times

 

For decades,
neuroengineers have dreamed of helping people who have been cut off
from the world of language.

A disease like amyotrophic lateral sclerosis, or A.L.S., weakens the
muscles in the airway. A stroke can kill neurons that normally relay
commands for speaking. Perhaps, by implanting electrodes, scientists
could instead record the brain’s electric activity and translate
that into spoken words.

Now a team of researchers has made an important advance toward that
goal. Previously they succeeded in decoding the signals produced when
people tried to speak. In the new study, published on Thursday in the
journal Cell, their
computer often made correct guesses when the subjects simply imagined
saying words.

Christian Herff, a neuroscientist at Maastricht University in the
Netherlands who was not involved in the research, said the result went
beyond the merely technological and shed light on the mystery of
language. “It’s a fantastic advance,” Dr. Herff said.

The new study is the latest result in a long-running clinical trial,
called BrainGate2,
that has already seen some remarkable successes. One
participant, Casey Harrell,
now uses his brain-machine interface to hold conversations with his
family and friends.

In 2023, after A.L.S. had made his voice unintelligible, Mr. Harrell
agreed to have electrodes implanted in his brain. Surgeons placed four
arrays of tiny needles on the left side, in a patch of tissue called
the motor cortex. The region becomes active when the brain creates
commands for muscles to produce speech.

A computer recorded the electrical activity from the implants as Mr.
Harrell attempted to say different words. Over time, with the help of
artificial intelligence, the computer learned to predict almost
6,000 words with an accuracy of 97.5 percent. It could then
synthesize those words using Mr. Harrell’s voice, based on
recordings made before he developed A.L.S.

But successes like this one raised a troubling question: Could a
computer accidentally record more than patients actually wanted to
say? Could it eavesdrop on their inner voice?

“We wanted to investigate if there was a risk of the system decoding
words that weren’t meant to be said aloud,” said Erin Kunz, a
neuroscientist at Stanford University and an author of the new study.
She and her colleagues also wondered if patients might actually prefer
using inner speech. They noticed that Mr. Harrell and other
participants became fatigued when they tried to speak; could simply
imagining a sentence be easier for them, and allow the system to work
faster?

“If we could decode that, then that could bypass the physical
effort,” Dr. Kunz said. “It would be less tiring, so they could
use the system for longer.”

But it wasn’t clear if the researchers could actually decode inner
speech. In fact, scientists don’t even agree
on
what “inner speech” is.

Mr. Harrell reads a screen connected to his brain-computer interface.
Neuroscientists now wonder if patients might prefer using “inner
speech” to attempting to speak out loud. Ian C. Bates for The New
York Times

Our brains produce language, picking out words and organizing them
into sentences, using a constellation of regions that, together, are
the size of a large strawberry.

We can use the signals from the language network to issue commands to
our muscles to speak, or use sign language, or type a text message.
But many people also have the feeling that they use language to
perform the very act of thinking. After all, they can hear their
thoughts as an inner voice.

Some researchers have indeed argued that language is essential for
thought. But others, pointing to recent studies, maintain that much of
our thinking does not involve language at all, and that people who
hear an inner voice are just perceiving a kind of sporadic commentary
in their heads.

“Many people have no idea what you’re talking about when you say
you have an inner voice,” said Evelina Fedorenko, a cognitive
neuroscientist at M.I.T. “They’re like, ‘You know, maybe you
should go see a doctor if you’re hearing words in your head.’”
(Dr. Fedorenko said she has an inner voice, while her husband does
not.)

Dr. Kunz and her colleagues decided to investigate the mystery for
themselves. The scientists gave participants seven different words,
including “kite” and “day,” then compared the brain signals
when participants attempted to say the words and when they only
imagined saying them.

As it turned out, imagining a word produced a pattern of activity
similar to that of trying to say it, but the signal was weaker. The
computer did a pretty good job of predicting which of the seven words
the participants were thinking. For Mr. Harrell, it didn’t do much
better than random guesses would have, but for another participant
it picked the right word more than 70 percent of the time.
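
As a rough illustration of that pattern (and not the study's actual pipeline), the sketch below simulates feature vectors in which each word has its own neural "template," weakens that template for imagined speech, and trains an off-the-shelf classifier on both conditions. The feature size, trial counts, signal strengths and every word other than "kite" and "day" are assumptions made for the example.

    # Hypothetical sketch, not the BrainGate2 code: a simple classifier on
    # simulated neural features, where imagined words carry the same pattern
    # as attempted words but at lower amplitude.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    WORDS = ["kite", "day", "up", "no", "yes", "go", "stop"]  # only "kite" and "day" appear in the article
    N_FEATURES = 256        # assumed size of the neural feature vector
    TRIALS_PER_WORD = 60    # assumed number of trials per word

    # Give each word its own fixed "template" of motor-cortex activity.
    templates = rng.normal(0.0, 1.0, size=(len(WORDS), N_FEATURES))

    def simulate(gain):
        """Simulate trials as template * gain + noise; a smaller gain mimics
        the weaker signal the researchers saw for imagined speech."""
        X, y = [], []
        for label, template in enumerate(templates):
            noise = rng.normal(0.0, 1.0, size=(TRIALS_PER_WORD, N_FEATURES))
            X.append(gain * template + noise)
            y.append(np.full(TRIALS_PER_WORD, label))
        return np.vstack(X), np.concatenate(y)

    for condition, gain in [("attempted", 1.0), ("imagined", 0.4)]:
        X, y = simulate(gain)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        print(condition, "accuracy:", round(model.score(X_te, y_te), 2),
              "(chance is about", round(1 / len(WORDS), 2), ")")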

The researchers put the computer through more training, this time
specifically on inner speech. Its performance improved significantly,
including for Mr. Harrell. Now when the participants imagined saying
entire sentences, such as “I don’t know how long you’ve been
here,” the computer could accurately decode most or all of the
words.

Dr. Herff, who has done his own studies on inner speech, was surprised
that the experiment succeeded. Before, he would have said that inner
speech is fundamentally different from the motor cortex signals that
produce actual speech. “But in this study, they show that, for some
people, it really isn’t that different,” he said.

A volunteer looks at a sentence on a monitor and then imagines saying
it. The computer decodes her brain signals and displays the result
below the colored square. Emory BrainGate Team

Dr. Kunz emphasized that the computer’s current performance
involving inner speech would not be good enough to let people hold
conversations. “The results are an initial proof of concept more
than anything,” she said.

But she is optimistic that decoding inner speech could become the new
standard for brain-computer interfaces. In more recent trials, the
results of which have yet to be published, she and her colleagues have
improved the computer’s accuracy and speed. “We haven’t hit the
ceiling yet,” she said.

As for mental privacy, Dr. Kunz and her colleagues found some reason
for concern: On occasion, the researchers were able to detect words
that the participants weren’t imagining out loud.

In one trial, the participants were shown a screen full of 100 pink
and green rectangles and circles. They then had to determine the
number of shapes of one particular color — green circles, for
instance. As the participants worked on the problem, the computer
sometimes decoded the word for a number. In effect, the participants
were silently counting the shapes, and the computer was hearing them.

“These experiments are the most exciting to me,” Dr. Herff said,
because they suggest that language may play a role in many different
forms of thought beyond just communicating. “Some people really seem
to think this way,” he said.

Dr. Kunz and her colleagues explored ways to prevent the computer from
eavesdropping on private thoughts. They came up with two possible
solutions.

One would be to only decode attempted speech, while blocking inner
speech. The new study suggests this strategy could work. Even though
the two kinds of thought are similar, they are different enough that a
computer can learn to tell them apart. In one trial, the participants
mixed attempted sentences with imagined ones, and the computer was
able to ignore the imagined speech.
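
One way such a filter might work in software is sketched below, under the assumption that attempted speech simply produces a stronger signal than imagined speech; the models, feature sizes and data here are placeholders rather than the study's implementation.

    # Hypothetical sketch: a binary "gate" classifier labels each neural
    # segment as attempted or imagined speech, and only attempted segments
    # are passed on to the word decoder. All data and models are placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    N_FEATURES = 256  # assumed feature size

    def fake_segment(kind):
        # Assumption for the demo: attempted speech carries a stronger signal.
        gain = 1.0 if kind == "attempted" else 0.4
        return gain * rng.normal(1.0, 1.0, N_FEATURES)

    # Train the gate on labeled examples of both kinds of speech.
    X = np.array([fake_segment(k) for k in ["attempted", "imagined"] * 200])
    y = np.array([1, 0] * 200)  # 1 = attempted, 0 = imagined
    gate = LogisticRegression(max_iter=1000).fit(X, y)

    def decode(segment, word_decoder):
        """Run the word decoder only when the gate says the speech was attempted."""
        if gate.predict(segment.reshape(1, -1))[0] == 1:
            return word_decoder(segment)
        return None  # imagined speech is deliberately left undecoded

    print(decode(fake_segment("imagined"), word_decoder=lambda s: "hello"))   # usually None
    print(decode(fake_segment("attempted"), word_decoder=lambda s: "hello"))  # usually "hello"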

For people who would prefer to communicate with inner speech, Dr. Kunz
and her colleagues came up with a second strategy: an inner password
to turn the decoding on and off. The password would have to be a long,
unusual phrase, they decided, so they chose “Chitty Chitty Bang
Bang,” the name of a 1964 novel by Ian Fleming as well as a 1968
movie starring Dick Van Dyke.

For people who might prefer to use inner speech to talk, Dr. Kunz and
her colleagues came up with a strategy to turn computer decoding on
and off with an internal password. They chose “Chitty Chitty Bang
Bang,” the name of the 1964 novel by Ian Fleming and 1968 movie
starring Dick Van Dyke. Silver Screen Collection/Getty Images

 One of the participants, a 68-year-old woman with A.L.S., imagined
saying “Chitty Chitty Bang Bang” along with an assortment of other
words. The computer eventually learned to recognize the password with
98.75 percent accuracy, and decoded her inner speech only after
detecting the password.
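
A minimal sketch of how such a password gate could sit on top of a decoder is shown below, assuming the decoder already turns brain signals into candidate phrases; the toggling behavior and exact text matching are illustrative assumptions, not the study's implementation.

    # Hypothetical sketch: decoded inner speech is discarded until the imagined
    # password phrase is detected; detecting it again switches decoding back off.
    PASSWORD = "chitty chitty bang bang"

    class PasswordGate:
        def __init__(self, password=PASSWORD):
            self.password = password
            self.active = False  # decoding starts switched off

        def feed(self, decoded_phrase):
            """Pass one decoded phrase through the gate; return it only if
            decoding is currently switched on."""
            if decoded_phrase.strip().lower() == self.password:
                self.active = not self.active  # toggle decoding on or off
                return None                    # never output the password itself
            return decoded_phrase if self.active else None

    gate = PasswordGate()
    for phrase in ["what time is it", "chitty chitty bang bang",
                   "i am thirsty", "chitty chitty bang bang", "a private thought"]:
        print(phrase, "->", gate.feed(phrase))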

“This study represents a step in the right direction, ethically
speaking,” said Cohen Marcus Lionel Brown, a bioethicist at the
University of Wollongong in Australia. “If implemented faithfully,
it would give patients even greater power to decide what information
they share and when.”

Dr. Fedorenko, who was not involved in the new study, called it a
“methodological tour de force.” But she questioned whether an
implant could eavesdrop on many of our thoughts. Unlike Dr. Herff, she
doesn’t see a role for language in much of our thinking.

Although the BrainGate2 computer successfully decoded words that
patients consciously imagined saying, Dr. Fedorenko noted, it
performed much worse when people responded to open-ended commands. For
example, in some trials, the participants were asked to think about
their favorite hobby when they were children.

“What they’re recording is mostly garbage,” she said. “I think
a lot of spontaneous thought is just not well-formed linguistic
sentences.”

CARL ZIMMER writes the “Origins” column for The New York Times. He
has written about neuroengineering since 1993.
