May 31st, 2021 at 16:30 CEST

On the Benefit of Using Differentiable Learning over Tangent Kernels
Eran Malach, Hebrew University

A popular line of research in recent years shows that, in some regimes, optimizing neural networks with gradient descent is equivalent to learning with the neural tangent kernel (NTK), a kernel induced by the network architecture and initialization. We study the relative power of learning with gradient descent on differentiable models, such as neural networks, versus using the corresponding tangent kernels. We show that under certain conditions, gradient descent achieves small error only if a related tangent kernel method achieves a non-trivial advantage over random guessing (a.k.a. weak learning). However, this advantage might be very small even when gradient descent can achieve arbitrarily high accuracy. Complementing this, we show that without these conditions, gradient descent can in fact learn with small error even when no kernel method, in particular one using the tangent kernel, can achieve a non-trivial advantage over random guessing.
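
For readers unfamiliar with the tangent kernel mentioned above: at initialization theta0, the (empirical) tangent kernel of a model f is K(x, x') = <grad_theta f(x; theta0), grad_theta f(x'; theta0)>. The following minimal JAX sketch computes this kernel for a small two-layer network. It is an illustration only, not code from the talk or the paper; the architecture, widths, and initialization scaling are arbitrary assumptions made for the example.

# Minimal sketch of the empirical tangent kernel at initialization:
# K(x, x') = <grad_theta f(x; theta0), grad_theta f(x'; theta0)>,
# i.e. the kernel induced by the architecture and its initialization.
import jax
import jax.numpy as jnp

def mlp(params, x):
    # Tiny two-layer network with scalar output; choices are illustrative only.
    w1, b1, w2, b2 = params
    h = jnp.tanh(x @ w1 + b1)
    return (h @ w2 + b2).squeeze()

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
d, m = 5, 16  # input dimension and hidden width (arbitrary)
params = (jax.random.normal(k1, (d, m)) / jnp.sqrt(d), jnp.zeros(m),
          jax.random.normal(k2, (m, 1)) / jnp.sqrt(m), jnp.zeros(1))

def tangent_features(x):
    # Gradient of the scalar output w.r.t. all parameters, flattened into one vector.
    grads = jax.grad(mlp)(params, x)
    return jnp.concatenate([g.ravel() for g in jax.tree_util.tree_leaves(grads)])

def tangent_kernel(x1, x2):
    # Inner product of tangent features at the fixed initialization.
    return tangent_features(x1) @ tangent_features(x2)

x1, x2 = jax.random.normal(k3, (2, d))
print(tangent_kernel(x1, x2))

Training a kernel method with this fixed K corresponds to the "tangent kernel" baseline discussed in the abstract, whereas gradient descent keeps updating the parameters (and hence the features grad_theta f) during training.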