Language, Computation and Cognition Lab (LaCoCo)
Michael Hahn
Latest
Lossy Context Surprisal Predicts Task-Dependent Patterns in Relative Clause Processing
A Formal Framework for Understanding Length Generalization in Transformers
InversionView: A General-Purpose Method for Reading Information from Neural Activations
Linguistic Structure from a Bottleneck on Sequential Information Processing
More frequent verbs are associated with more diverse valency frames: Efficient language design at the lexicon-grammar interface
Separations in the Representational Capabilities of Transformers and Recurrent Architectures
The Expressive Capacity of State Space Models: A Formal Language Perspective
Why are Sensitive Functions Hard for Transformers?
A unifying theory explains seemingly contradictory biases in perceptual estimation
A theory of emergent in-context learning as implicit structure induction
A Cross-Linguistic Pressure for Uniform Information Density in Word Order
Modeling task effects in human reading with neural network-based attention
A resource-rational model of human processing of recursive linguistic structure
Crosslinguistic word order variation reflects evolutionary pressures of dependency and information locality
Morpheme ordering across languages reflects optimization for processing efficiency
Explaining patterns of fusion in morphological paradigms using the memory–surprisal tradeoff
Modeling fixation behavior in reading with character-level neural attention
Information theory as a bridge between language function and language form
Modeling word and morpheme order in natural language as an efficient tradeoff of memory and surprisal
An Information-Theoretic Characterization of Morphological Fusion
Sensitivity as a complexity measure for sequence classification tasks
Universals of word order reflect optimization of grammars for efficient communication
RNNs can generate bounded hierarchical languages with optimal memory
Character-based surprisal as a model of human reading in the presence of errors
Estimating predictive rate-distortion curves via neural variational inference
Tabula nearly rasa: Probing the linguistic knowledge of character-level neural language models trained on unsegmented text
An information-theoretic explanation of adjective ordering preferences
Wreath Products of Distributive Forest Algebras
Modeling human reading with neural attention
Henkin Semantics for Reasoning with Natural Language
Visibly Counter Languages and the Structure of NC¹
On deriving semantic representations from dependencies: A practical approach for evaluating meaning in learner corpora
Predication and NP Structure in an Omnipredicative Language: The Case of Khoekhoe
CoMeT: Integrating different levels of linguistic modeling for meaning assessment
Word Order Variation in Khoekhoe
Arabic Relativization Patterns: A Unified HPSG Analysis
Evaluating the Meaning of Answers to Reading Comprehension Questions: A Semantics-Based Approach
Null Conjuncts and Bound Pronouns in Arabic