New research project “Sequence Computations in Humans, Models and Machines (CompSeq)” approved
How do biological and artificial systems process temporal sequences—such as language or action sequences—and what can they learn from each other? These questions are being addressed by the newly approved pilot project Sequence Computations in Humans, Models and Machines (CompSeq), which operates at the intersection of neuroscience, artificial intelligence (AI), and computational neuroscience.
At its core, the project compares modern AI architectures—such as Transformers, recurrent neural networks, and state-space models—with biological networks in the human and animal brain. The aim is to better understand the functional principles of sequence processing and to use this knowledge to develop AI models that are more efficient, less resource-intensive, and biologically inspired.
While Transformer models are currently the most powerful systems in AI, they are also highly demanding in terms of computation and memory. Alternatives that are more closely aligned with biological processes require far fewer resources. CompSeq brings together expertise from neuroscience and AI research to investigate, in comparative experiments involving humans, animals, and AI systems, how mechanisms such as memory, generalization, and one-shot learning arise and how they can be improved.
In the long term, the project aims to develop new pilot architectures for AI systems that combine high performance with low energy consumption, thereby contributing to a more comprehensible, sustainable, and secure advancement of artificial intelligence.
Project leads:
Prof. Dr. Thomas Brox, Prof. Dr. Ilka Diester, Prof. Dr. Christian Leibold, Prof. Dr. Monika Schönauer