Meta learning

Machine learning techniques such as multi-task learning and meta learning enable effective learning from limited data by sharing structure across related tasks.

Multi-task learning

We study a linear low-dimensional shared representation model for multi-task learning. While prior work often yields weak estimation rates or requires many samples per task, we provide the first estimation error bound for the trace norm regularized estimator in the small-sample regime. This offers theoretical insight into the effectiveness of such shared representations in multi-task learning.

E. Boursier, M. Konobeev, N. Flammarion, Trace norm regularization for multi-task learning with scarce data, COLT 2022
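As a rough illustration of the estimator studied above (a generic proximal-gradient sketch, not the paper's analysis; all function names, hyperparameters, and synthetic data below are hypothetical), the trace norm regularized estimator stacks the per-task regressors as columns of a matrix W and penalizes its nuclear norm, which encourages a shared low-dimensional representation:

```python
import numpy as np

def svt(W, tau):
    # Singular value thresholding: proximal operator of tau * ||.||_* (trace norm)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def trace_norm_mtl(Xs, ys, lam, lr=0.05, iters=300):
    # Proximal gradient descent on
    #   sum_t (1 / 2 n_t) ||X_t w_t - y_t||^2 + lam * ||W||_*
    # where column t of W is the regressor of task t (few samples per task).
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    for _ in range(iters):
        G = np.zeros_like(W)
        for t in range(T):
            G[:, t] = Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) / len(ys[t])
        W = svt(W - lr * G, lr * lam)  # gradient step, then trace norm prox
    return W
```

The singular value thresholding step is what shrinks the effective rank of W, so information is pooled across tasks even when each task has only a handful of samples.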

Model-agnostic meta learning

Meta learning has shown strong empirical results in few-shot classification and reinforcement learning, with model-agnostic methods seeking initialization points from which a few gradient steps suffice to adapt to a new task. Although these methods appear to learn shared representations, theoretical support has been lacking. We address this by proving that first-order ANIL with a linear two-layer network successfully learns a linear shared representation, even when the architecture is misspecified (e.g. overparametrised), demonstrating the effectiveness of model-agnostic methods in low-data settings.

O.K. Yüksel, E. Boursier, N. Flammarion, First-order ANIL provably learns representations despite overparametrisation, ICLR 2024
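To make the setting concrete, here is a minimal sketch of first-order ANIL on a two-layer linear network f(x) = w · (Bx), assuming tasks whose regressors share a low-dimensional subspace (the function names, synthetic task model, and hyperparameters are illustrative choices, not those of the paper). Only the head w is adapted in the inner loop, and the meta-update uses gradients at the adapted head without differentiating through the adaptation step:

```python
import numpy as np

def sample_task(rng, U):
    # Task regressor lies in the column span of U: the shared low-dim structure
    a = rng.standard_normal(U.shape[1])
    return U @ a / np.sqrt(U.shape[1])

def sample_data(rng, theta, n, noise=0.1):
    X = rng.standard_normal((n, theta.size))
    return X, X @ theta + noise * rng.standard_normal(n)

def fo_anil(U, k=6, inner_lr=0.05, outer_lr=0.05, meta_steps=400,
            n_support=10, n_query=10, seed=0):
    # Two-layer linear net f(x) = w . (B x); only the head w is adapted
    # per task (ANIL), and second-order terms are dropped (first-order).
    rng = np.random.default_rng(seed)
    d = U.shape[0]
    B = rng.standard_normal((k, d)) / np.sqrt(d)  # shared representation
    w = np.zeros(k)                               # meta-initialized head
    for _ in range(meta_steps):
        theta = sample_task(rng, U)
        Xs, ys = sample_data(rng, theta, n_support)
        Xq, yq = sample_data(rng, theta, n_query)
        # Inner loop: one head-only gradient step on the support loss
        r_s = Xs @ B.T @ w - ys
        w_t = w - inner_lr * B @ Xs.T @ r_s / n_support
        # Outer loop: first-order gradient of the query loss at (B, w_t)
        r_q = Xq @ B.T @ w_t - yq
        B -= outer_lr * np.outer(w_t, Xq.T @ r_q) / n_query
        w -= outer_lr * B @ Xq.T @ r_q / n_query
    return B, w

def eval_meta(B, w, U, n_tasks=50, inner_lr=0.05, n=10, seed=1):
    # Average query loss after one step of head-only adaptation per task
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_tasks):
        theta = sample_task(rng, U)
        Xs, ys = sample_data(rng, theta, n)
        Xq, yq = sample_data(rng, theta, n)
        w_t = w - inner_lr * B @ Xs.T @ (Xs @ B.T @ w - ys) / n
        total += np.mean((Xq @ B.T @ w_t - yq) ** 2)
    return total / n_tasks
```

Note the width k of the representation can exceed the dimension of the shared subspace, which mirrors the overparametrised (misspecified) regime the result covers: meta-training still steers B toward the shared subspace, so a single head-only gradient step adapts well to new tasks.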