Imaging Seminar: Imaging in the Age of Machine Learning

Date     25 October 2019 at 16:00 

Chairs     Prof. Suliana Manley (LEB)  and Prof. Michael Unser (BIG) 

Contact     [email protected]  

Prof. Florian JUG 

Max Planck Institute, Dresden, Germany

Content-Aware Image Restoration for Light and Electron Microscopy

In recent years, fluorescence light microscopy and cryo-electron microscopy have seen tremendous technological advances. Using light microscopes, we routinely image beyond the resolution limit, acquire large volumes at high temporal resolution, and capture many hours of video material showing processes of interest inside cells, in tissues, and in developing organisms. Cryo-electron microscopes, at the same time, can visualize cellular building blocks in their native environment at close to atomic resolution. Despite these possibilities, the analysis of raw images is usually non-trivial, error-prone, and cumbersome.

Here we show how machine learning, i.e., neural networks, can help tap the full potential of raw microscopy data by applying content-aware image restoration (CARE) techniques. Several examples in the context of light microscopy (LM) and cryo-electron microscopy (cryo-EM) illustrate how downstream analysis pipelines yield improved (automated) results when applied to content-aware restorations.
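
For illustration, the core of supervised CARE training can be sketched in a few lines: a convolutional network learns a mapping from low-quality to matching high-quality acquisitions of the same sample. The PyTorch sketch below is a minimal stand-in, not the published pipeline; the tiny network and the synthetic image pairs are assumptions made for readability.

import torch
import torch.nn as nn

# Hypothetical paired data: in real CARE training, "noisy" would be low-SNR
# acquisitions and "clean" the matching high-SNR ground-truth images.
clean = torch.rand(64, 1, 64, 64)
noisy = clean + 0.1 * torch.randn_like(clean)

# Deliberately tiny CNN; published CARE models use a U-Net instead.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                    # pixel-wise regression loss

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)   # predict restoration, compare to GT
    loss.backward()
    opt.step()

# After training, model(new_noisy_stack) yields a content-aware restoration.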

While our recently published results on LM data benefit from the fact that single high-quality, low-noise acquisitions can be recorded directly, in other settings this is not possible (e.g. for cryo-EM). Hence, we developed CARE variations that do not require the acquisition of high-quality examples but can be trained from noisy images alone. We strongly believe that these and similar approaches will continue to have a profound influence on how imaging-heavy scientific projects can and will go beyond the physical limitations of modern microscopy.
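
One published scheme of this kind is blind-spot training in the style of Noise2Void: randomly mask pixels, fill them with values from nearby positions, and penalize the network only at the masked locations, where it never sees the original value. The sketch below shows the idea under simplifying assumptions (the masking rate, the tiny network, and the shift-based neighbour substitution are all placeholders, not the published method).

import torch
import torch.nn as nn

# Only noisy acquisitions are available; no clean ground truth exists.
noisy = torch.rand(16, 1, 64, 64)

model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    # Select ~1% of pixels as blind spots for this step.
    mask = (torch.rand_like(noisy) < 0.01).float()
    # Cheap stand-in for "replace each blind spot with a random neighbour":
    # use the value of the pixel one position to the left.
    neighbours = torch.roll(noisy, shifts=1, dims=-1)
    inp = noisy * (1 - mask) + neighbours * mask
    pred = model(inp)
    # Penalize only at blind spots: the network never sees the original value
    # there, so it must infer it from context rather than copy its input.
    loss = ((pred - noisy) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
    opt.zero_grad()
    loss.backward()
    opt.step()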


Prof. Anders HANSEN 

University of Cambridge, United Kingdom 

On instabilities, paradoxes and barriers in deep learning

Deep learning has had unprecedented success; however, it seems to provide universally unstable methods in fields ranging from voice recognition via automated diagnosis in medicine to inverse problems and imaging. We will discuss this highly complex issue and provide mathematical explanations for the phenomenon.
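
To make the notion of instability concrete: for a trained network one can typically construct a perturbation that is imperceptibly small yet changes the output. Below is a toy sketch of the standard gradient-sign construction; the untrained linear classifier, random input, and eps value are placeholder assumptions, not a model from the talk.

import torch
import torch.nn as nn

# Placeholder classifier; in the settings discussed here it would be a
# trained network for, e.g., diagnosis or image reconstruction.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

x = torch.rand(1, 1, 28, 28, requires_grad=True)
label = torch.tensor([3])

loss = nn.functional.cross_entropy(model(x), label)
loss.backward()

# Gradient-sign perturbation: a worst-case nudge of at most eps per pixel.
eps = 0.05
x_adv = x + eps * x.grad.sign()

print(model(x).argmax().item(), model(x_adv).argmax().item())
# For trained networks, such tiny perturbations (here ||x_adv - x||_inf <= eps)
# routinely flip the prediction, which is exactly the instability in question.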

Intriguingly, the reasons for the instabilities vary depending on the application. However, a common phenomenon is that the instabilities are not caused by a lack of approximation power of neural networks. Indeed, it seems paradoxical that, despite the unstable trained networks, there typically exist other stable and accurate networks for the same applications. The problem is that the training process does not construct them.

Herein lies a fascinating barrier. Despite the rich collection of results from approximation theory regarding the existence of neural networks with powerful approximation and stability guarantees, it can be shown that many of these networks cannot be computed by a computer, regardless of computing power. Thus, theoretical results à la “there exists a neural network with the following properties” do not mean that such a network can ever be computed on a digital computer.

We are therefore left with the fundamental question: can stable and accurate neural networks be computed for the many problems where deep learning is currently used, or is instability a necessary artefact in modern artificial intelligence?