EPFL Pre-NeurIPS 2025 Regional Event

A machine learning pre-event on November 24, 2025 – Auditorium CO1

The EPFL ELLIS Lausanne Unit, hosted within the EPFL AI Center, is delighted to invite you to its 2025 pre-NeurIPS regional event on November 24, 2025 in Auditorium CO1.

This event is part of the ELLIS Pre-NeurIPS Fest 2025: Celebrate, Connect, Collaborate, in anticipation of the upcoming NeurIPS Main Conference, which will be held in San Diego, USA, from December 2 to 7, and in Mexico City, Mexico, from November 30 to December 5.

While the conference is highly selective, 55 EPFL papers have been accepted to this year's edition. The list of NeurIPS 2025 accepted papers with at least one EPFL author is available below.

Beyond celebrating EPFL’s NeurIPS contributions, this event aims to:

  • Foster exchange within the EPFL ML/AI community, helping researchers discover what others are working on and encouraging new collaborations;
  • Connect local and external researchers interested in machine learning through discussions and networking.

Therefore, we invite anyone who had an accepted contribution at NeurIPS – papers, talks, posters, workshops – to apply for one of our talk and/or poster slots.*

Researchers from all institutions (not only EPFL) are welcome to apply and/or attend the event. Please note that travel costs cannot be reimbursed.

*Posters of workshop papers are accepted, and we welcome other published NeurIPS contributions (papers, talks, posters, workshops). Featured posters include, but are not limited to, accepted NeurIPS submissions. Space permitting, contributions to other major conferences can also be showcased.

Poster boards accommodate the standard A0 format (841 × 1189 mm) in portrait orientation.

Program

14:00 – 15:00 – Check-in & poster setup

15:00 – 15:15 Welcome

15:15 – 16:15 Talks Session 1/2 – (4 talks, 10 + 5 min each)

  • With Limited Data for Multimodal Alignment, Let the STRUCTURE Guide You – by Shuo Wen & Fabian Gröger (MLBio)

  • Flat Channels to Infinity in Neural Loss Landscapes – by Flavio Martinelli (LCN)

  • FlashMD: Long-stride, Universal Prediction of Molecular Dynamics – by Filippo Bigi (COSMO)

  • Inference-time Adaptive Tokenization via Online Compression – by Saibo Geng (DLab)

16:15 – 16:30 Break

16:30 – 17:30 Talks Session 2/2 – (4 talks, 10 + 5 min each)

  • Flow based approach for Dynamic Temporal Causal models with non-Gaussian or Heteroscedastic Noises – by Abdellah Rahmani (LTS4)
  • Quantile Reward Policy Optimization: Alignment with Pointwise Regression and Exact Partition Functions – by Skander Moalla & Semen Matrenok (CLAIRE)

  • The Nuclear Route: Sharp Asymptotics of ERM in Overparameterized Quadratic Networks – by Vittorio Erba (SPOC)
  • Which Algorithms Have Tight Generalization Bounds? – by Thomas Weinberger (LTHC)

17:30 – 19:00 Poster session with apéro

Organizing Committee

Prof. Lenka Zdeborová, EPFL (Statistical Physics of Computation Lab) – ELLIS Fellow

Prof. Robert West, EPFL (Data Science Laboratory) – ELLIS Scholar

Prof. Volkan Cevher, EPFL (Information and Inference Systems Lab.) – ELLIS Fellow

Prof. Martin Schrimpf, EPFL (NeuroAI Lab) – ELLIS Scholar

Prof. Pascal Frossard, EPFL AI Center – ELLIS Fellow and Lausanne Unit Director

Coordination

Nicolas Machado, EPFL AI Center and ELLIS Lausanne Unit

Posters list

List updated on a rolling basis.

  1. With Limited Data for Multimodal Alignment, Let the STRUCTURE Guide You – Fabian Gröger*, Shuo Wen*, Huyen Le, Maria Brbić

  2. Flow based approach for Dynamic Temporal Causal models with non-Gaussian or Heteroscedastic Noises – Abdellah Rahmani, Pascal Frossard

  3. Incremental Learning of Sparse Attention Patterns in Transformers – Oguz Kaan Yuksel, Rodrigo Alvarez Lucendo, Nicolas Flammarion

  4. ReservoirTTA: Prolonged Test-time Adaptation for Evolving and Recurring Domains – Guillaume Vray*, Devavrat Tomar*, Xufeng Gao, Jean-Philippe Thiran, Evan Shelhamer, Behzad Bozorgtabar

  5. Double Momentum and Error Feedback for Clipping with Fast Rates and Differential Privacy – Rustem Islamov, Niccolò Ajroldi, Antonio Orvieto, Aurelien Lucchi
  6. ObjexMT: Objective Extraction and Metacognitive Calibration for LLM‑as‑a‑Judge under Multi‑Turn Jailbreaks – Hyunjun Kim, Junwoo Ha, Sangyoon Yu, Haon Park

  7. Context aware geometric deep learning for RNA sequence design – Parth Bibekar, Lucien F. Krapp, Matteo Dal Peraro

  8. Quantile Reward Policy Optimization: Alignment with Pointwise Regression and Exact Partition Functions – Simon Matrenok, Skander Moalla, Caglar Gulcehre

  9. Cross-dataset Multivariate Time-series Model for Parkinson’s Diagnosis via Keyboard Dynamics – Arianna Francesconi, Donato Cappetta, Fabio Rebecchi, Paolo Soda, Valerio Guarrasi, Rosa Sicilia
  10. Inference-Time Adaptive Tokenization via Online Compression – Saibo Geng, Nathan Ranchin, Yunzhen Yao, Maxime Peyrard, Chris Wendler, Michael Gastpar, Robert West
  11. Learning with Restricted Boltzmann Machines: Asymptotics of AMP and GD in High Dimensions – Yizhou Xu, Florent Krzakala, Lenka Zdeborová
  12. Flat Channels to Infinity in Neural Loss Landscapes – Flavio Martinelli, Alexander Van Meegen, Berfin Simsek, Wulfram Gerstner, Johanni Brea
  13. TokenSwap: A Lightweight Method to Disrupt Memorized Sequences in LLMs – Kaustubh Ponkshe, Parjanya Prashant, Babak Salimi
  14. Generating Directed Graphs with Dual Attention and Asymmetric Encoding – Alba Carballo-Castro, Manuel Madeira, Yiming Qin, Pascal Frossard
  15. FlashMD: Long-stride, Universal Prediction of Molecular Dynamics – Filippo Bigi, Sanggyu Chong, Agustinus Kristiadi, Michele Ceriotti
  16. Ascent Fails to Forget – Ioannis Mavrothalassitis*, Pol Puigdemont*, Noam Itzhak Levi*, Volkan Cevher
  17. The Nuclear Route: Sharp Asymptotics of ERM in Overparameterized Quadratic Networks – Vittorio Erba, Emanuele Troiani, Lenka Zdeborová, Florent Krzakala
  18. Towards End-to-End Learning of Structure-based Protein Sequence Design – Julius Wenckstern, Bruno Correia
  19. Which Algorithms Have Tight Generalization Bounds? – Michael Gastpar, Ido Nachum, Jonathan Shafer, Thomas Weinberger
  20. RAT: Bridging RNN Efficiency and Attention Accuracy via Chunk-based Sequence Modeling – Xiuying Wei, Anunay Yadav, Razvan Pascanu, Caglar Gulcehre
  21. Neuro-Spectral Architectures for Causal Physics-Informed Networks – Arthur Bizzi et al.
  22. Learning to Make Friends: Coaching LLM Agents toward Emergent Social Ties – Philipp J. Schneider, Lin Tian, Marian-Andrei Rizoiu
  23. GeRaF: Neural Geometry Reconstruction from Radio Frequency Signals – Jiachen Lu, Hailan Shanbhag, Haitham Al Hassanieh
  24. Bayes optimal learning of attention-indexed models – Fabrizio Boncoraglio, Emanuele Troiani, Vittorio Erba, Lenka Zdeborová
  25. Not All LLM-Generated Data Are Equal: Rethinking Data Weighting in Text Classification – Hsun-Yu Kuo, Yin-Hsiang Liao, Yu-Chieh Chao, Wei-Yun Ma, Pu-Jen Cheng
  26. Unbiased and Sign Compression in Distributed Learning: Comparing Noise Resilience via SDEs – Enea Monzio Compagnoni, Rustem Islamov, Frank Norbert Proske, Aurelien Lucchi
  27. Return of ChebNet: Understanding and Improving an Overlooked GNN on Long-Range Tasks – Ali Hariri, Álvaro Arroyo, Alessio Gravina, Moshe Eliasof, Carola-Bibiane Schönlieb, Davide Bacciu, Kamyar Azizzadenesheli, Xiaowen Dong, Pierre Vandergheynst
  28. AugGen: Synthetic Augmentation using Diffusion Models Can Improve Recognition – Parsa Rahimi, Damien Teney, Sebastien Marcel
  29. Robustness in Both Domains: CLIP Needs a Robust Text Encoder – Elias Abad Rocamora, Christian Schlarmann, Naman Deep Singh, Yongtao Wu, Matthias Hein, Volkan Cevher
  30. MEMOIR: Lifelong Model Editing with Minimal Overwrite and Informed Retention for LLMs – Ke Wang, Yiming Qin, Nikolaos Dimitriadis, Alessandro Favero, Pascal Frossard

Practical information