EPFL CIS NeurIPS 2022 Regional Post-Event

The Conference on Neural Information Processing Systems (NeurIPS) is one of the leading machine learning conferences.
Although the conference is highly selective, EPFL researchers had 41 papers accepted for presentation at NeurIPS 2022.
Following the main conference in the US, we decided to organize a local mid-scale event on the EPFL campus and to invite anyone with an accepted contribution at NeurIPS to apply for one of our talk and/or poster slots*. Researchers from all institutions (not only EPFL) are welcome to apply and/or attend the event. Please note that travel costs cannot be reimbursed**.
The event will give all EPFL students and researchers with papers accepted at NeurIPS 2022 the opportunity to present their work, and will let all students and researchers interested in machine learning research connect and discuss science during this one-day event.
Organizing committee:
Prof. Florent Krzakala, EPFL (Information, Learning & Physics Lab.)
Prof. Martin Jaggi, EPFL (Machine Learning & Optimization Lab.)
Prof. Volkan Cevher, EPFL (Information and Inference Systems Lab.)
Prof. Philippe Schwaller, EPFL (Artificial Chemical Intelligence Lab.)
Dr. Jan Kerschgens, EPFL (CIS)
Prof. Pierre Vandergheynst, EPFL (Signal Processing Lab. 2)
Contact details:
[email protected]
*Posters of workshop papers are accepted as well.
**Travel grants for researchers from the EuroTech Universities Alliance are available. Please contact your EuroTech Universities Alliance Operational Board member directly with your request: https://eurotech-universities.eu/about-us/#tab-operations-board
Tentative program (subject to change)
13:00 – 13:30 Welcome coffee
13:30 – 14:05 Spotlight Talks Session 1/2: 7 spotlight talks (5 min each)
- PALMER: Perception-Action Loop with Memory for Long-Horizon Planning, Onur Beker
- Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs, Etienne Boursier
- Non-Gaussian Tensor Programs, Eugene Golikov
- Uniform Convergence and Generalization Bounds for Nonconvex Stochastic Minimax Optimization, Yifan Hu
- What You See Is What You Get: Principled Deep Learning Via Distributional Generalization, Bogdan Kulynych
- Stochastic Second-Order Methods Improve Best-Known Sample Complexity of SGD for Gradient Dominated Functions, Saeed Masiha
- Subspace clustering in high-dimensions: Phase transitions & Statistical-to-Computational gap, Luca Pesce
14:05 – 14:20 Short comfort break (15 minutes)
14:20 – 15:05 Spotlight Talks Session 2/2: 6 spotlight talks (5 min each)
- Natural image synthesis for the retina with variational information bottleneck representation, Babak Rahmani
- Predicting interaction partners among paralogs using masked language modeling, Damiano Sgarbossa
- Adaptive Stochastic Variance Reduction for Non-convex Finite Sum Minimization, Stratis Skoulakis
- Phase diagram of Stochastic Gradient Descent in high-dimensional two-layer neural networks, Ludovic Stephan
- Identifiability and Generalisability in Inverse Reinforcement Learning, Luca Viano
- Proximal Point Imitation Learning, Luca Viano
15:05 – 17:00 Coffee & poster session
Poster list
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation, Sina Sajadmanesh, Ali Shahin Shamsabadi, Aurélien Bellet, Daniel Gatica-Perez
- PALMER: Perception-Action Loop with Memory for Long-Horizon Planning, Onur Beker, Mohammad Mohammadi, Amir Zamir
- Constrained Efficient Global Optimization of Expensive Black-box Functions, Wenjie Xu, Yuning Jiang, Colin N. Jones
- Lower Bounds on the Worst-Case Complexity of Efficient Global Optimization, Wenjie Xu, Yuning Jiang, Emilio T. Maddalena, Colin N. Jones
- Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs, Etienne Boursier, Loucas Pillaud-Vivien, Nicolas Flammarion
- Phase diagram of Stochastic Gradient Descent in high-dimensional two-layer neural networks, Rodrigo Veiga, Ludovic Stephan, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová
- What You See Is What You Get: Principled Deep Learning Via Distributional Generalization, Bogdan Kulynych, Yao-Yuan Yang, Yaodong Yu, Jaroslaw Blasiok, Preetum Nakkiran
- Proximal Point Imitation Learning, Luca Viano, Angeliki Kamoutsi, Gergely Neu, Igor Krawczuk, Volkan Cevher
- Identifiability and Generalisability in Inverse Reinforcement Learning, Paul Rolland, Luca Viano, Norman Schuerhoff, Boris Nikolov, Volkan Cevher
- Trajectory Inference via Mean-field Langevin in Path Space, Lénaïc Chizat, Stephen Zhang, Matthieu Heitz, Geoffrey Schiebinger
- Task Discovery: Finding the Tasks that Neural Networks Generalize on, Andrei Atanov, Andrei Filatov, Teresa Yeo, Ajay Sohmshetty, Amir Zamir
- On the Double Descent of Random Features Models Trained with SGD, Fanghui Liu, Johan Suykens, Volkan Cevher
- Subspace clustering in high-dimensions: Phase transitions & Statistical-to-Computational gap, Luca Pesce, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová
- Stochastic Second-Order Methods Improve Best-Known Sample Complexity of SGD for Gradient Dominated Functions, Saeed Masiha, Saber Salehkaleybar, Niao He, Negar Kiyavash, Patrick Thiran
- Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization), Zhenyu Zhu, Fanghui Liu, Grigorios G Chrysos, Volkan Cevher
- Generalization Properties of NAS under Activation and Skip Connection Search, Zhenyu Zhu, Fanghui Liu, Grigorios G Chrysos, Volkan Cevher
- Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions, Nikolaos Karalias, Joshua Robinson, Andreas Loukas, Stefanie Jegelka
- Uniform Convergence and Generalization Bounds for Nonconvex Stochastic Minimax Optimization, Siqi Zhang, Yifan Hu, Liang Zhang, Niao He
- Generalised Implicit Neural Representations, Daniele Grattarola, Pierre Vandergheynst
- Mesoscopic modeling of hidden spiking neurons, Shuqi Wang, Valentin Schmutz, Guillaume Bellec, Wulfram Gerstner
- Learning features can lead to overfitting in neural networks, Leonardo Petrini, Francesco Cagnetta, Eric Vanden-Eijnden, Matthieu Wyart
- Predicting interaction partners among paralogs using masked language modeling, Umberto Lupo, Damiano Sgarbossa, Anne-Florence Bitbol
- DMAP: a Distributed Morphological Attention Policy for Learning to Locomote with a Changing Body, Alberto Silvio Chiappa, Alessandro Marin Vargas, Alexander Mathis
- Low-rank lottery tickets: finding efficient low-rank neural networks via matrix differential equations, S. Schotthöfer, E. Zangrando, J. Kusch, G. Ceruti, F. Tudisco
- SketchBoost: Fast Gradient Boosted Decision Tree for Multioutput Problems, Leonid Iosipoi, Anton Vakhrushev
- Modular Clinical Decision Support Networks
- Data-heterogeneity-aware Mixing for Decentralized Learning, Yatin Dandi, Anastasia Koloskova, Martin Jaggi, Sebastian U Stich
- Robust Testing in High-Dimensional Sparse Models, Anand Jerry George, Clement Canonne
- Adaptive Stochastic Variance Reduction for Non-convex Finite Sum Minimization, Ali Kavis, Stratis Skoulakis, Kimon Antonakopoulos, Leello Dadi, Volkan Cevher
- Standardization of chemical compounds using language modeling, Miruna T. Cretu, Alessandra Toniato, Alain C. Vaucher, Amol Thakkar, Amin Debabeche, Teodoro Laino
- Non-Gaussian Tensor Programs, Eugene Golikov, Greg Yang
- Understanding Deep Neural Function Approximation in Reinforcement Learning via ε-Greedy Exploration, Fanghui Liu, Luca Viano, Volkan Cevher
- Scalable Collaborative Learning via Representation Sharing, Frédéric Berdoz, Abhishek Singh, Martin Jaggi, Ramesh Raskar
- Title unknown, Frédéric Berdoz