More Than Words: Using AI to Map How the Brain Understands Sentences


Summary: Combining neuroimaging data with artificial intelligence technology, researchers have identified a complex network within the brain that comprehends the meaning of spoken sentences.

Source: University of Rochester Medical Center

Have you ever wondered why you are able to hear a sentence and understand its meaning – given that the same words in a different order would have an entirely different meaning?

New research combining neuroimaging and A.I. describes the complex network within the brain that comprehends the meaning of a spoken sentence.

“It has been unclear whether the integration of this meaning is represented in a particular site in the brain, such as the anterior temporal lobes, or reflects a more network-level operation that engages multiple brain regions,” said Andrew Anderson, Ph.D., research assistant professor in the University of Rochester Del Monte Institute for Neuroscience and lead author of the study, which was published in the Journal of Neuroscience.

“The meaning of a sentence is more than the sum of its parts. Take a very simple example – ‘the car ran over the cat’ and ‘the cat ran over the car’ – each sentence has exactly the same words, but those words have a totally different meaning when reordered.”
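The authors’ example is easy to make concrete in code. A minimal sketch (illustrative only, not from the study) shows that a bag-of-words representation, which counts tokens while discarding their order, literally cannot tell the two sentences apart:

```python
from collections import Counter

# Two sentences with identical words but opposite meanings.
s1 = "the car ran over the cat"
s2 = "the cat ran over the car"

# A bag-of-words representation only counts tokens, discarding order.
bow1 = Counter(s1.split())
bow2 = Counter(s2.split())

print(bow1 == bow2)  # True: the two sentences are indistinguishable.
```

Any model that represents sentences this way is blind to exactly the kind of meaning that reordering changes.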

The study is an example of how the application of artificial neural networks, or A.I., is enabling researchers to unlock the extremely complex signaling in the brain that underlies functions such as processing language. The researchers gathered brain activity data from study participants who read sentences while undergoing fMRI. These scans showed activity spanning a network of different brain regions – the anterior and posterior temporal lobes, inferior parietal cortex, and inferior frontal cortex.

Using the computational model InferSent – an A.I. model developed by Facebook and trained to produce unified semantic representations of sentences – the researchers were able to predict patterns of fMRI activity reflecting the encoding of sentence meaning across those brain regions.
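For readers curious what this looks like in practice, the sketch below follows the public facebookresearch/InferSent repository; the hyperparameters come from that repository’s README and the file paths are placeholders – none of these details are specified in the article itself:

```python
import torch
from models import InferSent  # module from the facebookresearch/InferSent repo

# Hyperparameters as in the repo README (an assumption, not from this article).
params_model = {'bsize': 64, 'word_emb_dim': 300, 'enc_lstm_dim': 2048,
                'pool_type': 'max', 'dpout_model': 0.0, 'version': 2}
model = InferSent(params_model)
model.load_state_dict(torch.load('encoder/infersent2.pkl'))  # placeholder path
model.set_w2v_path('fastText/crawl-300d-2M.vec')             # placeholder path

sentences = ["the car ran over the cat", "the cat ran over the car"]
model.build_vocab(sentences, tokenize=True)

# Each sentence becomes one fixed-length vector (4096-dimensional here).
embeddings = model.encode(sentences, tokenize=True)
print(embeddings.shape)  # (2, 4096)
```

Because InferSent’s recurrent encoder is sensitive to word order, the two embeddings differ – unlike the bag-of-words counts above.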

“It’s the first time that we’ve applied this model to predict brain activity within these regions, and that provides new evidence that contextualized semantic representations are encoded throughout a distributed language network, rather than at a single site in the brain,” Anderson said.

Anderson and his team believe the findings could be helpful in understanding clinical conditions. “We’re deploying similar methods to try to understand how language comprehension breaks down in early Alzheimer’s disease. We are also interested in moving the models forward to predict brain activity elicited as language is produced. The current study had people read sentences; in the future, we’re interested in predicting brain activity as people speak sentences.”

Additional co-authors include Edmund Lalor, Ph.D., Rajeev Raizada, Ph.D., and Scott Grimm, Ph.D., with the University of Rochester, Douwe Kiela with Facebook A.I. Research, and Jeffrey Binder, M.D., Leonardo Fernandino, Ph.D., Colin Humphries, Ph.D., and Lisa Conant, Ph.D. with the Medical College of Wisconsin.

Funding: The research was supported with funding from the Del Monte Institute for Neuroscience’s Schmitt Program on Integrative Neuroscience and the Intelligence Advanced Research Projects Activity.

About this AI research news

Source: University of Rochester Medical Center
Contact: Kelsie Smith Hayduk – University of Rochester Medical Center
Image: The image is in the public domain

Original Research: Closed access.
“Deep artificial neural networks reveal a distributed cortical network encoding propositional sentence-level meaning” by Andrew James Anderson, Douwe Kiela, Jeffrey R. Binder, Leonardo Fernandino, Colin J. Humphries, Lisa L. Conant, Rajeev D. S. Raizada, Scott Grimm and Edmund C. Lalor. Journal of Neuroscience.


Abstract

Deep artificial neural networks reveal a distributed cortical network encoding propositional sentence-level meaning

Understanding how and where in the brain sentence-level meaning is constructed from words presents a major scientific challenge. Recent advances have begun to explain brain activation elicited by sentences using vector models of word meaning derived from patterns of word co-occurrence in text corpora. These studies have helped map out semantic representation across a distributed brain network spanning temporal, parietal and frontal cortex.

However, it remains unclear whether activation patterns within regions reflect unified representations of sentence-level meaning, as opposed to superpositions of context-independent component words. This is because models have typically represented sentences as “bags-of-words” that neglect sentence-level structure.
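The same limitation holds for continuous word vectors: if a sentence is represented as a superposition (e.g., the average) of context-independent word vectors, reordering its words leaves the representation unchanged. A minimal sketch with random stand-in vectors (an illustration, not the paper’s actual models):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in word vectors; in practice these would be corpus-derived embeddings.
vocab = {w: rng.standard_normal(300) for w in ["the", "car", "ran", "over", "cat"]}

def average_embedding(sentence):
    """Average the component word vectors, discarding all order information."""
    return np.mean([vocab[w] for w in sentence.split()], axis=0)

v1 = average_embedding("the car ran over the cat")
v2 = average_embedding("the cat ran over the car")
print(np.allclose(v1, v2))  # True: same superposition, same vector.
```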

To address this issue, we interrogated fMRI activation elicited as 240 sentences were read by 14 participants (9F, 5M), using sentences encoded by a recurrent deep artificial neural network trained on a sentence inference task (InferSent).

Recurrent connections and non-linear filters enable InferSent to transform sequences of word vectors into unified “propositional” sentence representations suitable for evaluating inter-sentence entailment relations. Using voxel-wise encoding modeling, we demonstrate that InferSent predicts elements of fMRI activation that cannot be predicted by bag-of-words models and sentence models using grammatical rules to assemble word vectors. This effect occurs throughout a distributed network, which suggests that propositional sentence-level meaning is represented within and across multiple cortical regions rather than at any single site.
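To unpack “voxel-wise encoding modeling”: in broad strokes, it means fitting a regularized linear map from sentence features to each voxel’s response and scoring predictions on held-out sentences. The sketch below uses synthetic data in place of real InferSent embeddings and fMRI responses; the ridge penalty and correlation scoring are illustrative assumptions, not the paper’s exact pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_sentences, n_features, n_voxels = 240, 300, 100  # 240 sentences, as in the study

X = rng.standard_normal((n_sentences, n_features))  # stand-in sentence embeddings
Y = rng.standard_normal((n_sentences, n_voxels))    # stand-in voxel responses

# Cross-validated ridge regression: learn a linear map from sentence features
# to each voxel, then correlate predictions with held-out responses.
scores = np.zeros(n_voxels)
kf = KFold(n_splits=5)
for train, test in kf.split(X):
    pred = Ridge(alpha=1.0).fit(X[train], Y[train]).predict(X[test])
    scores += [np.corrcoef(pred[:, v], Y[test, v])[0, 1] for v in range(n_voxels)]
scores /= kf.get_n_splits()

print("mean held-out correlation per voxel:", scores.mean())
```

Voxels where one feature space (say, InferSent) predicts held-out activity reliably better than another (say, bag-of-words) are evidence that the extra information carried by that feature space is encoded there.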

In follow-up analyses, we place results in the context of other deep network approaches (ELMo and BERT) and estimate the degree of unpredicted neural signal using an “experiential” semantic model and cross-participant encoding.
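As a point of comparison, contextual sentence embeddings of the BERT family can be obtained today with the Hugging Face transformers library – an assumption about tooling for illustration, not necessarily how the authors ran their follow-up analyses:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

sentences = ["the car ran over the cat", "the cat ran over the car"]
batch = tok(sentences, return_tensors="pt", padding=True)
with torch.no_grad():
    hidden = bert(**batch).last_hidden_state  # (2, seq_len, 768)

# Mean-pool token states into one vector per sentence (a common convention;
# the paper may have pooled differently).
mask = batch["attention_mask"].unsqueeze(-1).float()
emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Unlike bag-of-words vectors, these differ when word order changes.
print(torch.allclose(emb[0], emb[1]))  # False
```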

Significance Statement

A modern-day scientific challenge is to understand how the human brain transforms word sequences into representations of sentence meaning.

A recent approach, emerging from advances in functional neuroimaging, big data, and machine learning, is to computationally model meaning and use the models to predict brain activity.

Such models have helped map a cortical semantic information-processing network. However, how unified sentence-level information – as opposed to word-level units – is represented throughout this network remains unclear. This is because models have typically represented sentences as unordered “bags-of-words”.

Using a deep artificial neural network that recurrently and non-linearly combines word representations into unified propositional sentence representations, we provide evidence that sentence-level information is encoded throughout a cortical network, rather than in a single region.
