We are a group of researchers investigating the intersection of natural language, computation, and human cognition. We are based at Saarland University and the Saarland Informatics Campus in Germany.
Our research focuses on the following topics:
Natural Language Processing: How do neural language models comprehend language? What explains their success, and what are their limitations? For example, we have investigated the expressive power of transformers for modeling recursive structure (TACL 2020) and have probed the linguistic knowledge of language models (TACL 2019).
Computational Cognitive Science: How do humans understand language, and how does this process shape language itself? In recent work, we have augmented GPT-2 with memory limitations to examine what makes recursion difficult for humans (PNAS 2022a) and have proposed that grammar reflects pressures toward efficient language use (PNAS 2020, PNAS 2022b).