Brain-constrained neural network shows how words enable learning of abstract concepts

You can pet a dog, see the sun and use a hammer, but you cannot see democracy or touch peace. Despite this, you can identify an action as democratic, just like you can identify a poodle as a type of dog. How does your brain learn these different concrete and abstract concepts? And does having a word for a concept help with learning it? In a recent publication from the Brain Language Laboratory at Freie Universität Berlin, a brain-constrained neural network was used to investigate the neuronal mechanisms behind these processes.

News from May 22, 2024

Concepts are built from the ground up

In real life, concepts are usually acquired over time, by encountering many different exemplars of the concept. For example, dogs come in many shapes and sizes – like poodles, huskies and chihuahuas – but they all share some features that allow us to identify them as dogs. We can think about abstract concepts in the same way: exemplars of democracy might be raising your hand to make your opinion known in a poll, casting your vote on a specific issue in a direct democracy, or electing a representative in a representative democracy. These exemplars differ from dog breeds: they do not all share features that allow us to identify them as democratic, even though subsets of them might share some features. For example, the motor actions behind casting your vote in a direct and a representative democracy are the same, but what you vote on is different. Likewise, raising your hand in the poll and casting your vote in a direct democracy are different actions, but what your vote expresses is the same. Still, you can identify all these actions as democratic.
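
This difference between concrete and abstract concepts can be sketched in a few lines of code. The feature names below are invented for illustration and are not taken from the study:

```python
# Illustrative only: exemplars of a concrete concept ("dog") all share a
# common core of features, while exemplars of an abstract concept
# ("democracy") overlap only in subsets, with no feature shared by all.
dog_exemplars = {
    "poodle":    {"four_legs", "fur", "barks", "curly_coat"},
    "husky":     {"four_legs", "fur", "barks", "thick_coat"},
    "chihuahua": {"four_legs", "fur", "barks", "tiny_body"},
}

democracy_exemplars = {
    "hand_raise_in_poll":    {"express_opinion", "raise_hand"},
    "vote_direct_democracy": {"express_opinion", "cast_ballot"},
    "elect_representative":  {"cast_ballot", "choose_delegate"},
}

def shared_core(exemplars):
    """Features present in every exemplar of a concept."""
    return set.intersection(*exemplars.values())

print(shared_core(dog_exemplars))        # {'four_legs', 'fur', 'barks'}
print(shared_core(democracy_exemplars))  # set() - no feature common to all
```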

The brain-constrained neural network was trained to learn concepts following these assumptions. When simulating the experience of concept exemplars, overlapping sets of neurons were always activated for concrete concepts, while only partially overlapping sets of neurons were activated for the exemplars of an abstract concept. Learning was implemented in a biologically realistic way: neurons that fired at the same time strengthened their synaptic connections, whereas the connections between neurons that fired asynchronously weakened.
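
A minimal sketch of this learning principle might look as follows; all numbers (neuron counts, overlap sizes, learning rates) are illustrative assumptions rather than parameters of the published network:

```python
# Sketch of Hebbian correlation learning on binary activity patterns.
# All quantities here are assumptions chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100

def exemplar_patterns(n_exemplars, n_shared, n_unique):
    """Binary patterns: 'n_shared' neurons are active in every exemplar,
    'n_unique' further neurons are specific to a single exemplar."""
    ids = rng.permutation(n_neurons)
    core, rest = ids[:n_shared], ids[n_shared:]
    patterns = []
    for i in range(n_exemplars):
        p = np.zeros(n_neurons)
        p[core] = 1
        p[rest[i * n_unique:(i + 1) * n_unique]] = 1
        patterns.append(p)
    return patterns

concrete = exemplar_patterns(3, n_shared=15, n_unique=5)   # large common core
abstract = exemplar_patterns(3, n_shared=3,  n_unique=17)  # little overlap

def hebbian_train(patterns, lr=0.01, decay=0.001, steps=300):
    """Co-active neuron pairs strengthen their connection; pairs that are
    not co-active during a presentation are weakened slightly."""
    w = np.zeros((n_neurons, n_neurons))
    for _ in range(steps):
        p = patterns[rng.integers(len(patterns))]
        coactive = np.outer(p, p)
        w += lr * coactive - decay * (1 - coactive)
        np.fill_diagonal(w, 0)
        np.clip(w, 0, 1, out=w)
    return w

w_concrete = hebbian_train(concrete)
w_abstract = hebbian_train(abstract)

def circuit_density(w, patterns):
    """Mean connection strength among all neurons that take part in any
    exemplar of the concept - a rough proxy for how strongly the learned
    circuit is bound together."""
    members = np.where(np.maximum.reduce(patterns) > 0)[0]
    sub = w[np.ix_(members, members)]
    return sub[~np.eye(len(members), dtype=bool)].mean()

print("circuit density, concrete:", circuit_density(w_concrete, concrete))
print("circuit density, abstract:", circuit_density(w_abstract, abstract))
```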

Different layers of the network represented different areas of the human cortex. This allowed for modelling visual stimuli and hand-motor actions as well as auditory stimuli and articulatory actions. With this design, it was possible either to present the network only with visual and motor exemplars of concepts, or to additionally provide the same spoken word for all exemplars of the same concept.
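
One way to picture this stimulation scheme is the following sketch; the layer names and pattern sizes are simplified assumptions and do not reproduce the architecture or connectivity of the published network:

```python
# Hedged sketch of presenting grounding patterns with or without a word.
# Layer names and sizes are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(1)
LAYER_SIZES = {"visual": 200, "hand_motor": 200, "auditory": 200, "articulatory": 200}

def random_pattern(layer, n_active=20):
    """A sparse binary activity pattern for one modality layer."""
    p = np.zeros(LAYER_SIZES[layer])
    p[rng.choice(LAYER_SIZES[layer], n_active, replace=False)] = 1
    return p

def make_trial(visual_exemplar, motor_exemplar, word=None):
    """One training presentation: grounding patterns in the visual and
    hand-motor layers, optionally paired with the concept's spoken word,
    which drives the auditory and articulatory layers."""
    trial = {"visual": visual_exemplar,
             "hand_motor": motor_exemplar,
             "auditory": np.zeros(LAYER_SIZES["auditory"]),
             "articulatory": np.zeros(LAYER_SIZES["articulatory"])}
    if word is not None:
        trial["auditory"] = word["auditory"]
        trial["articulatory"] = word["articulatory"]
    return trial

# The same word pattern accompanies every exemplar of a given concept:
dog_word = {"auditory": random_pattern("auditory"),
            "articulatory": random_pattern("articulatory")}
with_word = make_trial(random_pattern("visual"), random_pattern("hand_motor"), dog_word)
without_word = make_trial(random_pattern("visual"), random_pattern("hand_motor"))
```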

Words are necessary to learn abstract concepts

Model activity after training showed the following results: when the network learned to associate exemplars of concepts with a word, the resulting neuronal circuits were larger and remained in working memory for longer. This effect was especially strong for abstract concepts. Without an associated word, the neural network learned neuronal circuits that were only weakly connected and did not maintain their activity over time – the network correlate of working memory. Only when verbal correlates of an abstract concept were learned did the emerging representation show significant working memory activity. These results indicate that concrete concepts can be learned through interaction with the world, but abstract concepts require words to bind them together.
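
For readers who want to picture how such measures could be quantified, here is a toy sketch; the thresholds and data are made up, and this is not the analysis pipeline of the study:

```python
# Toy illustration of the two measures mentioned above: circuit size and
# maintained activity after stimulus offset. Thresholds and data are
# arbitrary assumptions.
import numpy as np

def circuit_size(activity, threshold=0.5):
    """Number of units that respond above threshold at any point in time."""
    return int((activity.max(axis=1) > threshold).sum())

def maintained_steps(activity, threshold=0.5):
    """Count of post-offset time steps in which at least one unit is still
    active above threshold - a simple stand-in for working memory activity."""
    return int((activity > threshold).any(axis=0).sum())

# activity: units x time steps, recorded after the input was switched off
activity = np.random.default_rng(2).random((50, 30)) * np.linspace(1, 0, 30)
print(circuit_size(activity), maintained_steps(activity))
```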

Verbal Symbols Support Concrete but Enable Abstract Concept Formation: Evidence From Brain-Constrained Deep Neural Networks
By Fynn R. Dobler, Malte R. Henningsen-Schomers and Friedemann Pulvermüller
Language Learning, 2024
