From xxxxxx <[email protected]>
Subject: AIs Encode Language Like Brains Do
Date: August 3, 2024 12:10 AM

AIS ENCODE LANGUAGE LIKE BRAINS DO  


 

Zaid Zada
August 2, 2024
The Conversation

_The similarity between neural networks and human brains opens a
window on human conversations. When two people converse, their brain
activity becomes coupled, transferring thoughts from the speaker to
the listener._


Language enables people to transmit thoughts to each other because
each person’s brain responds similarly to the meaning of words. In
our newly published research, my colleagues and I developed a
framework to model the brain activity of speakers as they engaged in
face-to-face conversations.

We recorded the electrical activity of two people’s brains as they
engaged in unscripted conversations. Previous research has shown
that when two people converse, their brain activity becomes coupled,
or aligned, and that
the degree of neural coupling is associated with better understanding
of the speaker’s message.

A neural code refers to particular patterns of brain activity
associated with distinct words in their contexts. We found that the
speakers’ brains are aligned on a shared neural code. Importantly,
the brain’s neural code resembled the artificial neural code of
large language models, or LLMs.

The neural patterns of words

A large language model is a machine learning program that can generate
text by predicting what words most likely follow others. Large
language models excel at learning the structure of language,
generating humanlike text and holding conversations. They can even
pass the Turing test, making it difficult for someone to discern
whether they are interacting with a machine or a human. Like humans,
LLMs learn how to speak by reading or listening to text produced by
other humans.

By giving the LLM a transcript of the conversation, we were able to
extract its “neural activations,” or how it translates words into
numbers, as it “reads” the script. Then, we correlated the
speaker’s brain activity with both the LLM’s activations and with
the listener’s brain activity. We found that the LLM’s activations
could predict the speaker’s and listener’s shared brain activity.
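The correlation step described above can be sketched in a few lines. This is a minimal illustration with simulated data, since the study's actual electrode recordings and model activations are not available here; every array, shape, and variable name below is invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one LLM activation vector per spoken word, and one
# brain-activity vector per word for the speaker and the listener.
n_words, emb_dim, n_electrodes = 200, 16, 8

llm_activations = rng.normal(size=(n_words, emb_dim))

# Simulate brain signals that share a linear component of the LLM code,
# plus noise -- a stand-in for real electrocorticography recordings.
shared_map = rng.normal(size=(emb_dim, n_electrodes))
speaker_brain = llm_activations @ shared_map + rng.normal(scale=2.0, size=(n_words, n_electrodes))
listener_brain = llm_activations @ shared_map + rng.normal(scale=2.0, size=(n_words, n_electrodes))

def column_correlations(a, b):
    """Pearson correlation between matching columns of a and b."""
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

# Speaker-listener coupling, electrode by electrode: because both signals
# carry the same word-level component, they correlate word by word.
coupling = column_correlations(speaker_brain, listener_brain)
print("mean speaker-listener coupling:", coupling.mean().round(2))
```

In this toy setup the coupling is positive only because both simulated brains share a component driven by the same word-level code; scrambling the word order in one signal would destroy it.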

To be able to understand each other, people have a shared agreement on
the grammatical rules and the meaning of words in context.
For instance, we know to use the past tense form of a verb to talk
about past actions, as in the sentence: “He visited the museum
yesterday.” Additionally, we intuitively understand that the same
word can have different meanings in different situations. For
instance, the word “cold” in the sentence “you are cold as ice” can
refer either to a person’s body temperature or to their personality,
depending on the context. Because of the complexity and richness of
natural language, we lacked a precise mathematical model to describe
it until the recent success of large language models.

Our study found that large language models can predict how linguistic
information is encoded in the human brain, providing a new tool to
interpret human brain activity. The similarity between the human
brain’s and the large language model’s linguistic code has enabled
us, for the first time, to track how information in the speaker’s
brain is encoded into words and transferred, word by word, to the
listener’s brain during face-to-face conversations. For example, we
found that brain activity associated with the meaning of a word
emerges in the speaker’s brain before articulating a word, and the
same activity rapidly reemerges in the listener’s brain after
hearing the word.
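The timing result above can be illustrated with a toy cross-correlation analysis, in which a simulated "meaning" signal appears in a speaker channel before word onset and in a listener channel after it. All signals, lags, and noise levels below are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# A stand-in for a word-aligned "meaning" time course.
n_samples = 500
onset_signal = rng.normal(size=n_samples)

def shift(x, lag):
    """Shift a signal by `lag` samples, padding with zeros."""
    n = x.shape[0]
    y = np.zeros_like(x)
    if lag >= 0:
        y[lag:] = x[:n - lag]
    else:
        y[:lag] = x[-lag:]
    return y

# Speaker activity carries the meaning *before* onset (negative lag);
# listener activity carries it *after* onset (positive lag).
speaker = shift(onset_signal, -30) + rng.normal(scale=0.5, size=n_samples)
listener = shift(onset_signal, +25) + rng.normal(scale=0.5, size=n_samples)

def peak_lag(brain, reference, max_lag=60):
    """Lag (in samples) at which the brain signal best matches the reference."""
    lags = list(range(-max_lag, max_lag + 1))
    corrs = [np.corrcoef(shift(reference, k), brain)[0, 1] for k in lags]
    return lags[int(np.argmax(corrs))]

print("speaker peak lag:", peak_lag(speaker, onset_signal))   # negative: before onset
print("listener peak lag:", peak_lag(listener, onset_signal)) # positive: after onset
```

The sign of the recovered peak lag is what matters: a negative speaker lag and a positive listener lag reproduce, in miniature, the word-by-word transfer pattern described above.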

Powerful new tool

Our study has provided insights into the neural code for language
processing in the human brain and how both humans and machines can use
this code to communicate. We found that large language models were
better able to predict shared brain activity compared with different
features of language, such as syntax, or the order in which words
connect to form phrases and sentences. This is partly due to the
LLM’s ability to incorporate the contextual meaning of words, as
well as integrate multiple levels of the linguistic hierarchy into one
model: from words to sentences to conceptual meaning. This suggests
important similarities between the brain and artificial neural
networks.
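The comparison between rich contextual features and a narrower feature set can be sketched with a simple ridge-regression encoding model on simulated data. This illustrates the general technique only, not the study's actual analysis; the features, weights, split sizes, and the choice to treat "syntax" as a small subset of the information are all invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated features: a rich "contextual" set, and a smaller set standing
# in for a sparser description of language such as syntax.
n_words, n_context, n_syntax = 300, 20, 4

context_feats = rng.normal(size=(n_words, n_context))
syntax_feats = context_feats[:, :n_syntax]  # carries only part of the information

# Simulated brain activity driven by the full contextual code plus noise.
weights = rng.normal(size=(n_context, 1))
brain = context_feats @ weights + rng.normal(scale=1.0, size=(n_words, 1))

def ridge_r(X, y, alpha=1.0, n_train=200):
    """Fit closed-form ridge on a training split; return held-out correlation."""
    Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]
    W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ ytr)
    return np.corrcoef((Xte @ W).ravel(), yte.ravel())[0, 1]

print("contextual features r:", round(ridge_r(context_feats, brain), 2))
print("syntax-only features r:", round(ridge_r(syntax_feats, brain), 2))
```

Because the simulated brain signal is driven by the full contextual code, the richer feature set predicts held-out activity better than the reduced one, mirroring the comparison reported in the paragraph above.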

[Seen from above, three kids arrange big paper speech bubbles on the
ground. Language is a fundamental tool of communication. Malte
Mueller/Royalty Free via Getty Images]

An important aspect of our research is using everyday recordings of
natural conversations to ensure that our findings capture the
brain’s processing in real life. This is called ecological validity.
In contrast to experiments in which participants are told what to say,
we relinquish
control of the study and let the participants converse as naturally as
possible. This loss of control makes it difficult to analyze the data
because each conversation is unique and involves two interacting
individuals who are spontaneously speaking. Our ability to model
neural activity as people engage in everyday conversations attests to
the power of large language models.

Other dimensions

Now that we’ve developed a framework to assess the shared neural
code between brains during everyday conversations, we’re interested
in what factors drive or inhibit this coupling. For example, does
linguistic coupling increase if a listener better understands the
speaker’s intent? Or perhaps complex language, like jargon, may
reduce neural coupling.

Another factor that can influence linguistic coupling may be the
relationship between the speakers. For example, you may be able to
convey a lot of information with a few words to a good friend but not
to a stranger. Or you may be better neurally coupled to political
allies than to rivals. This is because differences in the way we use
words across groups may make it easier to align and be coupled with
people within rather than outside our social groups.

_Zaid Zada, Ph.D. Candidate in Psychology, Princeton University_

_This article is republished from The Conversation under a Creative
Commons license. Read the original article._

* Science
* artificial intelligence
* psychology
* language

Message Analysis

  • Sender: Portside
  • Political Party: n/a
  • Country: United States
  • State/Locality: n/a
  • Office: n/a
  • Email Providers:
    • L-Soft LISTSERV