EPFL Pre-NeurIPS 2025 Regional Event

The EPFL ELLIS Lausanne Unit, hosted within the EPFL AI Center, is delighted to invite you to its 2025 pre-NeurIPS regional event on November 24, 2025, in auditorium CO1.
This event is part of the ELLIS Pre-NeurIPS Fest 2025: Celebrate, Connect, Collaborate, held in anticipation of the upcoming NeurIPS Main Conference, which will take place in San Diego, USA, from December 2 to 7, and in Mexico City, Mexico, from November 30 to December 5.
Although the conference is highly selective, 55 EPFL papers were accepted to this year's conference. The list of NeurIPS 2025 accepted papers with at least one EPFL author is available below.
Beyond celebrating EPFL’s NeurIPS contributions, this event aims to:
- Foster exchange within the EPFL ML/AI community, helping researchers discover what others are working on and encouraging new collaborations;
- Connect local and external researchers interested in machine learning through discussions and networking.
We therefore invite anyone with an accepted contribution at NeurIPS – papers, talks, posters, workshops – to apply for one of our talk and/or poster slots*.
Researchers from all institutions (not only EPFL) are welcome to apply and/or attend the event. Please note that travel costs cannot be reimbursed.
*Posters of workshop papers are accepted, and we welcome other published NeurIPS contributions (papers, talks, posters, workshops). Featured posters include, but are not limited to, accepted NeurIPS submissions. Space permitting, contributions to other major conferences can also be showcased.
Poster boards accommodate the standard A0 format (841 × 1189 mm) in portrait orientation.
Program
14:00 – 15:00 – Check-in & poster setup
15:00 – 15:15 – Welcome
15:15 – 16:15 – Talks Session 1/2 (4 talks, 10 min + 5 min Q&A each)
- With Limited Data for Multimodal Alignment, Let the STRUCTURE Guide You – by Shuo Wen & Fabian Gröger (MLBio)
- Flat Channels to Infinity in Neural Loss Landscapes – by Flavio Martinelli (LCN)
- FlashMD: Long-stride, Universal Prediction of Molecular Dynamics – by Filippo Bigi (COSMO)
- Inference-time Adaptive Tokenization via Online Compression – by Saibo Geng (DLab)
16:15 – 16:30 – Break
16:30 – 17:30 – Talks Session 2/2 (4 talks, 10 min + 5 min Q&A each)
- Flow based approach for Dynamic Temporal Causal models with non-Gaussian or Heteroscedastic Noises – by Abdellah Rahmani (LTS4)
- Quantile Reward Policy Optimization: Alignment with Pointwise Regression and Exact Partition Functions – by Skander Moalla & Semen Matrenok (CLAIRE)
- The Nuclear Route: Sharp Asymptotics of ERM in Overparameterized Quadratic Networks – by Vittorio Erba (SPOC)
- Which Algorithms Have Tight Generalization Bounds? – by Thomas Weinberger (LTHC)
17:30 – 19:00 – Poster session with apéro
Organizing Committee
Prof. Lenka Zdeborová, EPFL (Statistical Physics of Computation Lab) – ELLIS Fellow
Prof. Robert West, EPFL (Data Science Laboratory) – ELLIS Scholar
Prof. Volkan Cevher, EPFL (Information and Inference Systems Lab.) – ELLIS Fellow
Prof. Martin Schrimpf, EPFL (NeuroAI Lab) – ELLIS Scholar
Prof. Pascal Frossard, EPFL AI Center – ELLIS Fellow and Lausanne Unit Director
Coordination
Nicolas Machado, EPFL AI Center and ELLIS Lausanne Unit
Posters list
List updated on a rolling basis.
- With Limited Data for Multimodal Alignment, Let the STRUCTURE Guide You – Fabian Gröger*, Shuo Wen*, Huyen Le, Maria Brbić
- Flow based approach for Dynamic Temporal Causal models with non-Gaussian or Heteroscedastic Noises – Abdellah Rahmani, Pascal Frossard
- Incremental Learning of Sparse Attention Patterns in Transformers – Oguz Kaan Yuksel, Rodrigo Alvarez Lucendo, Nicolas Flammarion
- ReservoirTTA: Prolonged Test-time Adaptation for Evolving and Recurring Domain – Guillaume Vray*, Devavrat Tomar*, Xufeng Gao, Jean-Philippe Thiran, Evan Shelhamer, Behzad Bozorgtabar
- Double Momentum and Error Feedback for Clipping with Fast Rates and Differential Privacy – Rustem Islamov, Niccolò Ajroldi, Antonio Orvieto, Aurelien Lucchi
- ObjexMT: Objective Extraction and Metacognitive Calibration for LLM‑as‑a‑Judge under Multi‑Turn Jailbreaks – Hyunjun Kim, Junwoo Ha, Sangyoon Yu, Haon Park
- Context aware geometric deep learning for RNA sequence design – Parth Bibekar, Lucien F. Krapp, Matteo Dal Peraro
- Quantile Reward Policy Optimization: Alignment with Pointwise Regression and Exact Partition Functions – Simon Matrenok, Skander Moalla, Caglar Gulcehre
- Cross-dataset Multivariate Time-series Model for Parkinson’s Diagnosis via Keyboard Dynamics – Arianna Francesconi, Donato Cappetta, Fabio Rebecchi, Paolo Soda, Valerio Guarrasi, Rosa Sicilia
- Inference-Time Adaptive Tokenization via Online Compression – Saibo Geng, Nathan Ranchin, Yunzhen Yao, Maxime Peyrard, Chris Wendler, Michael Gastpar, Robert West
- Learning with Restricted Boltzmann Machines: Asymptotics of AMP and GD in High Dimensions – Yizhou Xu, Florent Krzakala, Lenka Zdeborová
- Flat Channels to Infinity in Neural Loss Landscapes – Flavio Martinelli, Alexander Van Meegen, Berfin Simsek, Wulfram Gerstner, Johanni Brea
- TokenSwap: A Lightweight Method to Disrupt Memorized Sequences in LLMs – Kaustubh Ponkshe, Parjanya Prashant, Babak Salimi
- Generating Directed Graphs with Dual Attention and Asymmetric Encoding – Alba Carballo-Castro, Manuel Madeira, Yiming Qin, Pascal Frossard
- FlashMD: long-stride, universal prediction of molecular dynamics – Filippo Bigi, Sanggyu Chong, Agustinus Kristiadi, Michele Ceriotti
- Ascent Fails to Forget – Ioannis Mavrothalassitis*, Pol Puigdemont*, Noam Itzhak Levi*, Volkan Cevher
- The Nuclear Route: Sharp Asymptotics of ERM in Overparameterized Quadratic Networks – Vittorio Erba, Emanuele Troiani, Lenka Zdeborová, Florent Krzakala
- Towards End-to-End Learning of Structure-based Protein Sequence Design – Julius Wenckstern, Bruno Correia
- Which Algorithms Have Tight Generalization Bounds? – Michael Gastpar, Ido Nachum, Jonathan Shafer, Thomas Weinberger
- RAT: Bridging RNN Efficiency and Attention Accuracy via Chunk-based Sequence Modeling – Xiuying Wei, Anunay Yadav, Razvan Pascanu, Caglar Gulcehre
- Neuro-Spectral Architectures for Causal Physics-Informed Networks – Arthur Bizzi et al.
- Learning to Make Friends: Coaching LLM Agents toward Emergent Social Ties – Philipp J. Schneider, Lin Tian, Marian-Andrei Rizoiu
- GeRaF: Neural Geometry Reconstruction from Radio Frequency Signals – Jiachen Lu, Hailan Shanbhag, Haitham Al Hassanieh
- Bayes optimal learning of attention-indexed models – Fabrizio Boncoraglio, Emanuele Troiani, Vittorio Erba, Lenka Zdeborová
- Not All LLM-Generated Data Are Equal: Rethinking Data Weighting in Text Classification – Hsun-Yu Kuo, Yin-Hsiang Liao, Yu-Chieh Chao, Wei-Yun Ma, Pu-Jen Cheng
- Unbiased and Sign Compression in Distributed Learning: Comparing Noise Resilience via SDEs – Enea Monzio Compagnoni, Rustem Islamov, Frank Norbert Proske, Aurelien Lucchi
- Return of ChebNet: Understanding and Improving an Overlooked GNN on Long-Range Tasks – Ali Hariri, Álvaro Arroyo, Alessio Gravina, Moshe Eliasof, Carola-Bibiane Schönlieb, Davide Bacciu, Kamyar Azizzadenesheli, Xiaowen Dong, Pierre Vandergheynst
- AugGen: Synthetic Augmentation using Diffusion Models Can Improve Recognition – Parsa Rahimi, Damien Teney, Sebastien Marcel
- Robustness in Both Domains: CLIP Needs a Robust Text Encoder – Elias Abad Rocamora, Christian Schlarmann, Naman Deep Singh, Yongtao Wu, Matthias Hein, Volkan Cevher
- MEMOIR: Lifelong Model Editing with Minimal Overwrite and Informed Retention for LLMs – Ke Wang, Yiming Qin, Nikolaos Dimitriadis, Alessandro Favero, Pascal Frossard
Practical information
- Getting to EPFL
- Location/auditorium: CO1
- For any questions regarding the event, please contact [email protected]
