We are a group of researchers investigating human language, computation, and cognition. We are based at Saarland University and Saarland Informatics Campus, Germany.
Our research focuses on the following topics:
Machine Learning and Language: How do neural language models comprehend language? What explains their success, and what are their limitations? For example, we have investigated the expressive power of transformer models (ACL 2024, TACL 2020) and probed the linguistic knowledge of language models (TACL 2019). More recently, we have studied state space models and mechanistic interpretability.
Computational Cognition: How does the human mind process information, and how does this shape language? In recent work, we have augmented GPT-2 with memory limitations to examine what makes recursion difficult for humans (PNAS 2022a), proposed that grammar reflects pressures towards efficient language use (PNAS 2020, PNAS 2022b), and developed a unifying theory of biases in human perception across domains (Nature Neuroscience 2024).