
ELLIS Pre-NeurIPS Session at Saarbrücken, 2025

Information

WHEN: November 26, 2025, 13:00-16:00

WHERE: Building C7.4 at Saarland University Campus, Conference Room 1.17

This event is part of the ELLIS Pre-NeurIPS Fest 2025 and is associated with our ELLIS unit.

Program

The following posters have been confirmed so far. If you would like to present your work but have not yet confirmed your participation, please let the organizers know at your earliest convenience.

Main Conference Presentations

Important note to presenters: Poster boards are 118.5 cm wide × 146 cm high. If your poster is larger than A0, please confirm space availability with the organizers as soon as possible.

| Title | Authors |
|---|---|
| Quantum-inspired Multi-dimensional Visual Fields with Learnable Energy Representations | Shuteng Wang, Christian Theobalt, Vladislav Golyanik |
| Attention (as Discrete-Time Markov) Chains | Yotam Erel, Olaf Dünkel, Rishabh Dabral, Vladislav Golyanik, Christian Theobalt, Amit Bermano |
| FaCT: Faithful Concept Traces for Explaining Neural Network Decisions | Amin Parchami-Araghi, Sukrut Rao, Jonas Fischer, Bernt Schiele |
| MIBP-Cert: Certified Training against Data Perturbations with Mixed Integer Bilinear Programs | Tobias Lorenz, Marta Kwiatkowska, Mario Fritz |
| Pay Attention to Small Weights | Chao Zhou, Advait Gadhikar, Tom Jacobs, Rebekka Burkholz |
| The Graphon Limit Hypothesis: Understanding Neural Network Pruning via Infinite Width Analysis | Hoang Pham, The Anh Ta, Tom Jacobs, Rebekka Burkholz, Long Tran-Thanh |
| BitMark for Infinity: Watermarking Bitwise Autoregressive Image Generative Models | Louis Kerner, Michel Meintz, Bihe Zhao, Franziska Boenisch, Adam Dziedzic |
| Neural Rule Lists: Learning Discretizations, Rules, and Order in One Go | Sascha Xu, Nils Philipp Walter, Jilles Vreeken |
| Finding and Reactivating Post-Trained LLMs’ Hidden Safety Mechanisms | Mingjie Li, Wai Man Si, Michael Backes, Yang Zhang, Yisen Wang |
| Adjacent Words, Divergent Intents: Jailbreaking Large Language Models via Task Concurrency | Yukun Jiang, Mingjie Li, Michael Backes, Yang Zhang |
| Large Language Models as Model Organisms for Human Associative Learning | Camila Kolling, Vy Vo, Mariya Toneva |
| Brain-tuning Improves Generalizability and Efficiency of Brain Alignment in Speech Models | Omer Moussa, Mariya Toneva |
| Born a Transformer – Always a Transformer? On the Effect of Pretraining on Architectural Abilities | Mayank Jobanputra, Yana Veitsman, Yash Sarrof, Aleksandra Bakalova, Vera Demberg, Ellie Pavlick, Michael Hahn |
| GASP: Efficient Black-Box Generation of Adversarial Suffixes for Jailbreaking LLMs | Advik Basani, Xiao Zhang |
| Sign-In to the Lottery: Reparameterizing Sparse Training | Advait Gadhikar, Tom Jacobs, Chao Zhou, Rebekka Burkholz |
| Curriculum Design for Trajectory-Constrained Agent: Compressing Chain-of-Thought Tokens in LLMs | Georgios Tzannetos, Parameswaran Kamalaruban, Adish Singla |
| Faster Generic Identification in Tree-Shaped Structural Causal Models | Yasmine Briefs, Markus Bläser |
| Post Hoc Regression Refinement via Pairwise Rankings | Kevin Tirta Wijaya, Michael Sun, Minghao Guo, Hans-Peter Seidel, Wojciech Matusik, Vahid Babaei |
| How Many Tokens Do 3D Point Cloud Transformer Architectures Really Need? | Tuan Anh Tran, Duy Minh Ho Nguyen, Hoai-Chau Tran, Michael Barz, Khoa D Doan, Roger Wattenhofer, Vien Anh Ngo, Mathias Niepert, Daniel Sonntag, Paul Swoboda |
| MaxSup: Overcoming Representation Collapse in Label Smoothing | Yuxuan Zhou, Heng Li, Zhi-Qi Cheng, Xudong Yan, Yifei Dong, Mario Fritz, Margret Keuper |

Workshop Presentations

Important note to presenters: Workshop posters should not exceed the size specified for the San Diego workshops on the conference website (24 in wide × 36 in high).

| Title | Authors | Workshop |
|---|---|---|
| Fixed Aggregation Features Can Rival GNNs | Celia Rubio-Madrigal, Rebekka Burkholz | Women in Machine Learning |
| AuditCopilot: Leveraging LLMs for Fraud Detection in Double-Entry Bookkeeping | Md. Abdul Kadir et al. | Generative AI in Finance |
| On Riemannian Gradient Descent Algorithm using gradient averaging | Saugata Purkayashta, Sukannya Purkayashta | Optimization in Machine Learning |

Organizing Committee