We are a group of researchers investigating human language, computation, and cognition. We are based at Saarland University and the Saarland Informatics Campus, Germany.
We study the following topics:
Understanding machine learning models: We have investigated the expressive power and learning biases of transformer models (ACL 2024, TACL 2020) and state space models (NeurIPS 2024), and developed methods for mechanistic interpretability (NeurIPS 2024, TACL 2019).
Computational Cognition and Neuroscience: How does the human mind process information, and how does this shape language? In recent work, we have augmented GPT-2 with memory limitations to examine what makes recursion difficult for humans (PNAS 2022a), proposed that grammar reflects pressures towards efficient language use (PNAS 2020, PNAS 2022b), and developed a unifying theory of biases in human perception across domains (Nature Neuroscience 2024).