From xxxxxx <[email protected]>
Subject A.I. Is Getting Better at Mind-Reading
Date May 20, 2023 12:50 AM

A.I. IS GETTING BETTER AT MIND-READING  

Oliver Whang
May 1, 2023
New York Times


_In a recent experiment, researchers used large language models to
translate brain activity into words._

Alex Huth (left), Shailee Jain (center) and Jerry Tang (right)
prepare to collect brain activity data in the Biomedical Imaging
Center at The University of Texas at Austin. (Nolan Zunk/University of
Texas at Austin)

 

Think of the words whirling around in your head: that tasteless joke
you wisely kept to yourself at dinner; your unvoiced impression of
your best friend’s new partner. Now imagine that someone could
listen in.

On Monday, scientists from the University of Texas at Austin took
another step in that direction. In a study published in the journal
Nature Neuroscience, the researchers
described an A.I. that could translate the private thoughts of human
subjects by analyzing fMRI scans, which measure the flow of blood to
different regions in the brain.

Already, researchers have developed language-decoding methods to pick
up the attempted speech
of people who have lost
the ability to speak, and to allow paralyzed people to write
while just
thinking of writing. But the new language decoder is one of the first
to not rely on implants. In the study, it was able to turn a
person’s imagined speech into actual speech and, when subjects were
shown silent films, it could generate relatively accurate descriptions
of what was happening onscreen.

“This isn’t just a language stimulus,” said Alexander Huth, a
neuroscientist at the university who helped lead the research.
“We’re getting at meaning, something about the idea of what’s
happening. And the fact that that’s possible is very exciting.”

The study centered on three participants, who came to Dr. Huth’s lab
for 16 hours over several days to listen to “The Moth” and other
narrative podcasts. As they listened, an fMRI scanner recorded the
blood oxygenation levels in parts of their brains. The researchers
then used a large language model to match patterns in the brain
activity to the words and phrases that the participants had heard.

Large language models like OpenAI’s GPT-4 and Google’s Bard are
trained on vast amounts of writing to predict the next word in a
sentence or phrase. In the process, the models create maps indicating
how words relate to one another. A few years ago, Dr. Huth noticed that
particular pieces of these maps — so-called context embeddings,
which capture the semantic features, or meanings, of phrases — could
be used to predict how the brain lights up in response to language.
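
The paragraph above describes what is often called an encoding model: a fitted mapping from a language model's context embeddings to measured brain responses. As a rough, illustrative sketch only (not the authors' published pipeline), the idea can be expressed as a ridge regression from embedding features to voxel responses; the shapes, regularization strength and random stand-in data below are all assumptions for demonstration.

    # Minimal sketch of an encoding model: predict fMRI voxel responses from
    # language-model context embeddings with ridge regression. All data here
    # are random stand-ins; the shapes and alpha value are illustrative only.
    import numpy as np
    from sklearn.linear_model import Ridge

    n_timepoints, embed_dim, n_voxels = 2000, 768, 5000
    rng = np.random.default_rng(0)

    # Stand-in for context embeddings of the words a participant heard,
    # resampled to the scanner's time points.
    stim_embeddings = rng.standard_normal((n_timepoints, embed_dim))
    # Stand-in for the blood-oxygenation signal recorded in each voxel.
    bold_responses = rng.standard_normal((n_timepoints, n_voxels))

    # One regularized linear map from embedding space to voxel space.
    encoding_model = Ridge(alpha=100.0)
    encoding_model.fit(stim_embeddings, bold_responses)

    # Given embeddings for new language, predict how the brain "should"
    # respond; the decoding step described below runs this comparison in reverse.
    predicted = encoding_model.predict(stim_embeddings[:10])
    print(predicted.shape)  # (10, 5000)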

In a basic sense, said Shinji Nishimoto, a neuroscientist at Osaka
University who was not involved in the research, “brain activity is
a kind of encrypted signal, and language models provide ways to
decipher it.”

In their study, Dr. Huth and his colleagues effectively reversed the
process, using another A.I. to translate the participants' fMRI
images into words and phrases. The researchers tested the decoder by
having the participants listen to new recordings, then seeing how
closely the translation matched the actual transcript.
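
Conceptually, that reversal can be pictured as a guess-and-check loop: a language model proposes candidate word sequences, the encoding model predicts the brain activity each candidate would produce, and the candidates whose predictions best match the observed scan are kept. The toy sketch below illustrates only that scoring step; the embedding and prediction functions are hypothetical placeholders, not the study's actual decoder.

    # Toy sketch of beam-style decoding: keep the candidate word sequences
    # whose predicted brain activity best matches an observed scan. Every
    # function here is a hypothetical placeholder for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    embed_dim, n_voxels = 16, 50
    weights = rng.standard_normal((embed_dim, n_voxels))  # stand-in for a fitted encoding model

    def embed(words):
        """Stand-in for a language model's context embedding of a word sequence."""
        vec = np.zeros(embed_dim)
        for i, word in enumerate(words):
            vec[(sum(ord(c) for c in word) + i) % embed_dim] += 1.0
        return vec

    def predicted_bold(words):
        """Predicted voxel responses for a candidate word sequence."""
        return embed(words) @ weights

    def keep_best(candidates, observed_bold, beam_width=2):
        """Rank candidates by how closely their predicted brain activity
        matches the observed scan, and keep the top few."""
        ranked = sorted(
            candidates,
            key=lambda cand: float(np.sum((predicted_bold(cand) - observed_bold) ** 2)),
        )
        return ranked[:beam_width]

    # Toy check: the "observed" scan is generated from the phrase the decoder
    # should prefer, so that phrase ranks first.
    target = ["pressed", "my", "face", "against", "the", "glass"]
    observed = predicted_bold(target)
    candidates = [
        ["walked", "up", "to", "the", "window"],
        target,
        ["saw", "nothing", "in", "the", "darkness"],
    ]
    print(keep_best(candidates, observed))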

Almost every word was out of place in the decoded script, but the
meaning of the passage was regularly preserved. Essentially, the
decoders were paraphrasing.

ORIGINAL TRANSCRIPT: “I got up from the air mattress and pressed my
face against the glass of the bedroom window expecting to see eyes
staring back at me but instead only finding darkness.”

DECODED FROM BRAIN ACTIVITY: “I just continued to walk up to the
window and open the glass I stood on my toes and peered out I didn’t
see anything and looked up again I saw nothing.”

While under the fMRI scan, the participants were also asked to
silently imagine telling a story; afterward, they repeated the story
aloud, for reference. Here, too, the decoding model captured the gist
of the unspoken version.

PARTICIPANT’S VERSION: “Look for a message from my wife saying
that she had changed her mind and that she was coming back.”

DECODED VERSION: “To see her for some reason I thought she would
come to me and say she misses me.”

Finally, the subjects watched a brief, silent animated movie, again
while undergoing an fMRI scan. By analyzing their brain activity, the
language model could decode a rough synopsis of what they were viewing
— maybe their internal description of what they were viewing.

The result suggests that the A.I. decoder was capturing not just words
but also meaning. “Language perception is an externally driven
process, while imagination is an active internal process,” Dr.
Nishimoto said. “And the authors showed that the brain uses common
representations across these processes.”

Greta Tuckute, a neuroscientist at the Massachusetts Institute of
Technology who was not involved in the research, said that was “the
high-level question.”

“Can we decode meaning from the brain?” she continued. “In some
ways they show that, yes, we can.”

This language-decoding method had limitations, Dr. Huth and his
colleagues noted. For one, fMRI scanners are bulky and expensive.
Moreover, training the model is a long, tedious process, and to be
effective it must be done on individuals. When the researchers tried
to use a decoder trained on one person to read the brain activity of
another, it failed, suggesting that every brain has unique ways of
representing meaning.

Participants were also able to shield their internal monologues,
throwing off the decoder by thinking of other things. A.I. might be
able to read our minds, but for now it will have to read them one at a
time, and with our permission.

_Oliver Whang is a reporting fellow for The Times, focusing on science
and health._


