Distribution Shifts

Machine learning models have achieved stunning successes in the IID setting. Beyond this setting, however, existing models still face two grand challenges: they are brittle under distribution shift and inefficient at knowledge transfer. Our recent research tackles these challenges with three complementary approaches: self-supervised learning, causal representation learning, and test-time adaptation. More specifically, we propose to incorporate prior knowledge of negative examples into representation learning [1], to promote causal invariance and structure by leveraging data from multiple domains [2], and to exploit information beyond the model parameters for effective test-time adaptation [3,4]. These techniques enable deep neural networks to generalize more robustly and adapt more efficiently to new environments in perception, prediction, and planning problems.
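
As an illustration of the first approach, the sketch below shows a generic InfoNCE-style contrastive loss, written in PyTorch, in which each anchor embedding is scored against one positive and a set of explicitly constructed negatives drawn from prior knowledge. The function name, tensor shapes, and temperature value are illustrative assumptions rather than the exact formulation of [1].

    import torch
    import torch.nn.functional as F

    def info_nce_loss(query, positive, negatives, temperature=0.1):
        """InfoNCE-style loss with explicit negatives.

        query:     (B, D) anchor embeddings
        positive:  (B, D) embeddings of matched positive samples
        negatives: (B, K, D) embeddings of K negatives per anchor,
                   e.g. constructed from prior knowledge of undesirable events
        """
        query = F.normalize(query, dim=-1)
        positive = F.normalize(positive, dim=-1)
        negatives = F.normalize(negatives, dim=-1)

        # Similarity of each anchor to its positive: (B, 1)
        pos_sim = (query * positive).sum(dim=-1, keepdim=True)
        # Similarity of each anchor to its K negatives: (B, K)
        neg_sim = torch.einsum('bd,bkd->bk', query, negatives)

        # The positive sits at index 0 among the (1 + K) candidates
        logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
        labels = torch.zeros(query.size(0), dtype=torch.long, device=query.device)
        return F.cross_entropy(logits, labels)

    # Toy usage with random embeddings standing in for real encoder outputs
    q, p, n = torch.randn(8, 16), torch.randn(8, 16), torch.randn(8, 20, 16)
    print(info_nce_loss(q, p, n))

Minimizing this loss pulls each anchor towards its positive while pushing it away from the injected negatives, which is how prior knowledge about what should not happen can be encoded in the learned representation.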

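For the third approach, the sketch below illustrates test-time feature alignment in the spirit of [4]: summary statistics of the training features (information kept alongside the model parameters) are stored offline, and the encoder is adapted at test time so that the statistics of incoming unlabeled test features match them. The function name, tensor shapes, and the simple squared-distance objective are assumptions for illustration, not the exact procedure of [4].

    import torch

    def feature_alignment_loss(test_features, source_mean, source_cov):
        """Match first- and second-order feature moments to stored source statistics.

        test_features: (B, D) encoder outputs on the current test batch
        source_mean:   (D,)   feature mean computed on the source/training data
        source_cov:    (D, D) feature covariance computed on the source/training data
        """
        mu = test_features.mean(dim=0)
        centered = test_features - mu
        cov = centered.t() @ centered / (test_features.size(0) - 1)

        # Squared distance between test-batch and source statistics
        return ((mu - source_mean) ** 2).sum() + ((cov - source_cov) ** 2).sum()

    # Toy usage with random features standing in for real encoder outputs
    feats = torch.randn(32, 8)
    print(feature_alignment_loss(feats, torch.zeros(8), torch.eye(8)))

In practice, one would take a few gradient steps on the encoder to minimize this loss (possibly together with a self-supervised objective) for each batch of unlabeled test data, without requiring any labels.
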
[1] Social NCE: Contrastive Learning of Socially-aware Motion Representations, ICCV, 2021.
[2] Towards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective, CVPR, 2022.
[3] Collaborative Sampling in Generative Adversarial Networks, AAAI, 2020.
[4] TTT++: When Does Self-Supervised Test-Time Training Fail or Thrive?, NeurIPS, 2021.