What ChatGPT cannot do for you: Explanation of human symbol processing based on brain-constrained networks
News from 19.07.2023
In a recent publication from the Brain Language Laboratory at Freie Universität Berlin, cutting-edge neural network research is critically reviewed in light of the question of how such models can help us understand the material, brain-internal basis of cognitive and language processing. Powerful popular network models, such as autoencoders or generative pre-trained transformers, are successful at solving classification and completion problems, but do not address this issue. Their profound dissimilarities to brain structure and function, at multiple levels, make their results difficult to relate to real neurobiological processes.

A different strategy is brain-constrained neural modelling, in which multi-level similarities between brain and network are implemented so that tentative conclusions about the material, mechanistic basis of human cognition become possible. The newly published article reviews recent advances in understanding the brain basis of symbol representation, meaning acquisition and verbal working memory, along with neurobiologically founded models of different symbol types, including proper names, category terms, concrete concepts and abstract words. A general chapter contrasting the current mainstream approach of big-data neural network research with a brain-based modelling strategy rounds off the paper.
Pulvermüller, F. (2023). Neurobiological mechanisms for language, symbols and concepts: Clues from brain-constrained deep neural networks. Progress in Neurobiology, 230:102511. doi: 10.1016/j.pneurobio.2023.102511
Website of the ERC Advanced Grant project MatCo (Material Constraints enabling human cognition).