Past Talks

Prof. Artur Avila

 

University of Zurich & IMPA
December 5th, 2022
17:00 – CM 1 5

Renormalization, fractal geometry, and the Newhouse phenomenon

As discovered by Poincaré at the end of the 19th century, even small perturbations of very regular dynamical systems may display chaotic features, due to complicated interactions near a homoclinic point. In the 1960s, Smale attempted to understand such dynamics in terms of a stable model, the horseshoe, but this was too optimistic. Indeed, Newhouse showed that even in only two dimensions, a homoclinic bifurcation gives rise to particularly wild dynamics, such as the generic presence of infinitely many attractors. This Newhouse phenomenon is associated with a renormalization mechanism, and also with particular geometric properties of some fractal sets within a Smale horseshoe. When considering two-dimensional complex dynamics, those fractal sets become much more beautiful, but unfortunately also more difficult to handle.


Prof. Bryna Kra

 

Northwestern University
Tuesday, June 21st, 2022
17:15 – CM 1 4

The Erdős sumset conjecture and its generalizations

A striking example of the interactions between additive combinatorics and ergodic theory is Szemerédi's theorem that a set of integers with positive upper density contains arbitrarily long arithmetic progressions. Soon thereafter, Furstenberg used ergodic theory to give a new proof of this result, leading to the development of combinatorial ergodic theory. These tools have led to the uncovering of new patterns that must occur in sufficiently large sets of integers and to an understanding of what types of structures control these behaviors. Only recently have we been able to extend these methods to infinite patterns, and in recent work we show that any set of integers with positive upper density contains a k-fold sumset. This is joint work with Joel Moreira, Florian Richter, and Donald Robertson.
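
Stated symbolically (my paraphrase of the sumset result described above, with the sets B_i understood to be infinite):
\[
\overline{d}(A) > 0 \;\Longrightarrow\; \exists \text{ infinite } B_1,\dots,B_k \subseteq \mathbb{N} \text{ with } B_1+\cdots+B_k = \{\, b_1+\cdots+b_k : b_i \in B_i \,\} \subseteq A .
\]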


Prof. Claire Voisin

 

CNRS, Paris Jussieu
Thursday, June 16th, 2022
17:00 – CE 1 6

On the complex cobordism classes of hyper-Kähler manifolds

Hyper-Kähler manifolds are symplectic holomorphic compact Kähler manifolds, a particular class of complex manifolds with trivial canonical bundle. They exist only in even complex dimension, and there are two main series of known deformation classes of hyper-Kähler manifolds, with one model in each even dimension, that I will describe. I will discuss in this introductory talk a result obtained with Georg Oberdieck and Jieao Song on the complex cobordism classes of hyper-Kähler manifolds, and present a number of open questions concerning their Chern numbers.


Prof. Jean-Michel Coron

 

Université Pierre et Marie Curie
Thursday, June 2nd, 2022
17:15 – MA B1 11

Stabilization of control systems: from the clepsydra to river regulation

A control system is a dynamical system that can be acted upon by means of controls. For these systems, a fundamental problem is the question of stabilization: is it possible to stabilize a given unstable equilibrium using appropriate feedback laws? (Think of the classical experiment of balancing a broomstick on a fingertip.) On this problem, we present some old devices and pioneering works (Ctesibius, Watt, Maxwell, Lyapunov…), more recent results, and an application to the regulation of the rivers Sambre and Meuse. The focus is on the positive or negative effects of nonlinearities.
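
As a toy illustration of the stabilization question (my example, not one from the talk): the scalar control system
\[
\dot x = x + u
\]
is unstable with the control switched off, since u = 0 gives x(t) = x(0)e^{t}; but the feedback law u = -2x turns it into \dot x = -x, so that x(t) = x(0)e^{-t} \to 0 and the equilibrium x = 0 becomes exponentially stable.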


Dr. Laura Grigori

 

INRIA-Paris
Thursday, April 28th, 2022
16:15 – CM 1 4

Randomization techniques for large scale linear algebra

This talk will discuss several recent advances in using randomization and communication-avoiding techniques for large-scale linear algebra operations. It will focus in particular on solving linear systems of equations and eigenvalue problems, and on computing low-rank approximations of a large matrix. In the context of linear systems of equations, we discuss a randomized Gram-Schmidt process and show that it is as efficient as classical Gram-Schmidt and as numerically stable as modified Gram-Schmidt. We exploit mixed precision in this context and discuss its use in linear solvers. We then discuss a block orthogonalization method and its use for solving eigenvalue problems. We also address the problem of preconditioning. The use of these methods in challenging applications is further discussed.
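
For intuition, here is a minimal sketch of the randomized Gram-Schmidt idea: the expensive projections of classical Gram-Schmidt are computed from low-dimensional sketches of the vectors. This is an illustrative simplification under my own assumptions (Gaussian sketching operator, plain least squares in the sketch space), not the exact algorithm of the talk.

```python
# Minimal randomized Gram-Schmidt sketch (illustrative, not the talk's algorithm).
import numpy as np

def randomized_gram_schmidt(A, sketch_size, rng=np.random.default_rng(0)):
    """Factor A ~ Q R, computing projection coefficients in a sketched space."""
    n, m = A.shape
    Theta = rng.standard_normal((sketch_size, n)) / np.sqrt(sketch_size)  # sketching operator
    Q = np.zeros((n, m))            # approximately orthonormal basis
    S = np.zeros((sketch_size, m))  # sketched basis, S = Theta @ Q
    R = np.zeros((m, m))
    for j in range(m):
        w = A[:, j].copy()
        s = Theta @ w
        if j > 0:
            # coefficients from a small least-squares problem on the sketches
            r = np.linalg.lstsq(S[:, :j], s, rcond=None)[0]
            w -= Q[:, :j] @ r
            R[:j, j] = r
        sw = Theta @ w
        R[j, j] = np.linalg.norm(sw)
        Q[:, j] = w / R[j, j]
        S[:, j] = sw / R[j, j]
    return Q, R

# tiny usage example
A = np.random.default_rng(1).standard_normal((1000, 20))
Q, R = randomized_gram_schmidt(A, sketch_size=100)
print(np.linalg.norm(A - Q @ R) / np.linalg.norm(A))  # factorization residual (tiny)
print(np.linalg.norm(Q.T @ Q - np.eye(20)))           # deviation from orthonormality
```

Note that A = QR holds exactly by construction, while Q is only approximately orthonormal; the point is that the coefficient computations involve vectors of length sketch_size rather than n.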


Prof. Christian Lubich

 

University of Tübingen
Monday, March 7th, 2022
16:15 – CM 1 5

Robust time integration for dynamical low-rank approximation

The talk begins with a brief review of dynamical low-rank
approximation for matrix (and tensor) differential equations,
emphasizing the difficulties created by the ubiquitous appearance of
small singular values. These difficulties are overcome by the
projector-splitting integrator, which splits the projection onto the
tangent space of the low-rank manifold into an alternating sum of
subprojections. It has a surprising exactness property and is robust
to small singular values. A very recent robust integrator is the
“unconventional” low-rank integrator, which uses a basis-update and
Galerkin approach. It shares the robust error bounds with the
projector-splitting integrator but is more parallel and avoids the
backward substep that appears problematic for strongly dissipative
problems. Augmenting the Galerkin step to the larger subspace
generated by both the old and new bases and using a
tolerance-controlled rank truncation after the step yields a novel
rank-adaptive integrator with remarkable (near-)conservation
properties. Numerical experiments illustrate the theoretical results.
While the talk will be restricted to dynamical low-rank approximation
for matrix differential equations in order to highlight basic
construction principles, the methods discussed here can all be
nontrivially extended to robust time integration methods for general
tree tensor network approximations.
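
As a rough illustration of the basis-update and Galerkin ("unconventional") integrator described above, here is a minimal sketch of one step for a matrix ODE dY/dt = F(Y) with Y approximated as U S V^T. The inner substeps are solved with a single explicit Euler step purely for brevity (my simplification), and the toy right-hand side is a placeholder.

```python
# One step of a basis-update & Galerkin (BUG) low-rank integrator (sketch).
import numpy as np

def bug_step(F, U0, S0, V0, h):
    # K-step: update the left basis with the right basis frozen
    K0 = U0 @ S0
    K1 = K0 + h * F(K0 @ V0.T) @ V0
    U1, _ = np.linalg.qr(K1)
    M = U1.T @ U0
    # L-step: update the right basis with the left basis frozen
    L0 = V0 @ S0.T
    L1 = L0 + h * F(U0 @ L0.T).T @ U0
    V1, _ = np.linalg.qr(L1)
    N = V1.T @ V0
    # S-step: Galerkin update of the small core in the new bases
    S_init = M @ S0 @ N.T
    S1 = S_init + h * U1.T @ F(U1 @ S_init @ V1.T) @ V1
    return U1, S1, V1

# toy usage: Y' = A Y + Y B, approximated at rank 5
rng = np.random.default_rng(0)
n, r = 50, 5
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
F = lambda Y: A @ Y + Y @ B
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
S = np.diag(rng.uniform(0.1, 1.0, r))
for _ in range(100):
    U, S, V = bug_step(F, U, S, V, h=0.01)
```

Apart from evaluations of F, each step only touches n-by-r and r-by-r quantities, and the K- and L-steps are independent of each other, which is the source of the extra parallelism mentioned above.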

This talk is based on joint work with Ivan Oseledets and with Emil
Kieri and Hanna Walach (projector-splitting integrator), with Gianluca
Ceruti (unconventional integrator, rank-adaptive integrator) and Jonas
Kusch (rank-adaptive integrator). Recent work on the time integration of general tree tensor networks is joint work with Hanna Walach, Gianluca Ceruti and Dominik Sulz.


Prof. Maria Colombo

 

EPFL
Friday, June 25th, 2021
15:15 – zoom link

Irregular solutions of the transport and Navier-Stokes equations

The understanding of irregular solutions in fluid dynamics is a formidable challenge in the analysis of PDEs and has intriguing connections with the description of turbulent phenomena. For the transport equation, a PDE whose analysis lies at the basis of nonlinear PDEs such as Euler and Navier-Stokes, this is intertwined with a suitable notion of flow of the ODE associated with a nonsmooth vector field. I will introduce the topic, describe some of the main challenges of the theory related to uniqueness issues, and present a versatile method for the construction of irregular solutions, together with its consequences.

Code for the seminar : 827479
For any problem, please contact : [email protected]


Prof. Zsolt Patakfalvi

 

EPFL
Friday, April 16th, 2021
11:15

The classification theory of algebraic varieties

As in many parts of mathematics, developing a good classification theory of the main objects of the field is a principal goal of algebraic geometry. In the case of algebraic geometry these objects are called algebraic varieties. I will start with an explanation of the notion of an algebraic variety. I will continue by presenting a rough outline of the classification theory, including a selected list of my contributions. Finally, I will discuss some ideas leading to the proofs of the latter.


Journée Georges de Rham 2020

Wednesday, September 23rd – Geneva (Uni Bastions)

16:15 – 19:30

Prof. James Maynard (University of Oxford) and Prof. Corinna Ulcigrai (University of Zurich)

More information here


Campus Lecture

Prof. Alessio Figalli

 

ETHZ
Monday, November 11th, 2019
17:15 – Forum Rolex

From Optimal Transport to Soap Bubbles and Clouds: A Personal Journey

In this talk I’ll give a general overview, accessible also to non-specialists, of the optimal transport problem. Then I’ll show some applications of this theory to soap bubbles (isoperimetric inequalities) and clouds (semigeostrophic equations), problems on which I worked over the last 10 years. Finally, I will conclude with a brief description of some results that I recently obtained on the study of ice melting into water.


Journée Georges de Rham 2019

Wednesday, May 15th, 2019 – CO1

15:20 – 18:15

Prof. Claudia de Rham (Imperial College London) and Prof. Martin Hairer (Imperial College London)

More information here: https://www.epfl.ch/schools/sb/research/math/de-rham/


Prof. Weinan E

 

Princeton University
Thursday, November 15th, 2018
17:15 – CM4

Machine Learning and Multi-scale Modeling

Multi-scale modeling is an ambitious program that aims at unifying the different physical models at different scales for the practical purpose of developing accurate models and simulation protocols for properties of interest. Although the concept of multi-scale modeling is very powerful and very appealing, practical success on really challenging problems has been limited. One key difficulty has been our limited ability to represent complex models and complex functions.

In recent years, machine learning has emerged as a promising tool to overcome the difficulty of representing complex functions and complex models. In this talk, we will review some of the successes in applying machine learning to multi-scale modeling. These include molecular dynamics and model reduction for PDEs.

Another important issue is the mathematical foundation of modern machine learning, particularly in the over-parametrized regime where most of the deep learning models lie. I will also discuss our current understanding of this important issue.


Prof. Demetrios Christodoulou

 

ETHZ
Tuesday, October 2nd, 2018
17:15 – CM4

Gravitational Waves and Black Holes – the Legacy of General Relativity

While the Newtonian theory of gravity is similar to electrostatics, the field equations in both cases being elliptic, Einstein’s general relativity is more similar to Maxwell’s electromagnetic theory, the field equations in both cases being of hyperbolic character. A consequence of this fact is the presence in general relativity of gravitational waves, which are analogous to electromagnetic waves. Due however to the nonlinear nature of Einstein’s equations, a long time elapsed before the theory of gravitational waves reached a satisfactory level of understanding. Another fundamental feature of general relativity is that gravitational collapse leads to the formation of spacetime regions which are inaccessible to observation from infinity, the black holes. In the last two years, wave trains generated by binary black holes which inspiral and coalesce have been detected on several occasions. In my lecture, I shall describe the basic concepts, give an outline of their historical development, and discuss how the theory connects to the recent observations.


Prof. Ian H. Sloan

 

UNSW Australia – The University of New South Wales
Wednesday, September 26th, 2018
17:15 – CM4

How high is high dimensional?

What does high dimensional mean? How high is “high dimensional”? In this lecture I will give a rather personal view of how our ideas of high dimensionality have changed, with particular reference to recent developments in Quasi-Monte Carlo (QMC) methods for high-dimensional integration.


Prof. Charles L. Epstein

 

University of Pennsylvania
Tuesday, September 18th, 2018
17:15 – CM4

The Geometry of the Phase Retrieval Problem

One of the most powerful approaches to imaging at the nanometer or subnanometer length scale is coherent diffraction imaging using X-ray sources. For amorphous (non-crystalline) samples, the raw data can be interpreted as the modulus of the continuous Fourier transform of the unknown object. Making use of prior information about the sample (such as its support), a natural goal is to recover the phase through computational means, after which the unknown object can be visualized at high resolution. While many algorithms have been proposed for this phase retrieval problem, careful analysis of its well-posedness has received relatively little attention. In fact the problem is, in general, not well-posed, and we describe some of the underlying geometric issues that are responsible for the ill-posedness. We then show how this analysis can be used to develop experimental protocols that lead to better conditioned inverse problems.


Prof. Stefaan Vaes

 

KU Leuven
Wednesday, April 11th, 2018
17:15 – BI A0 448

Classification of von Neumann algebras

The theme of this talk is the dichotomy between amenability and non-amenability. Because the group of motions of the three-dimensional Euclidean space is non-amenable (as a group with the discrete topology), we have the Banach-Tarski paradox. In dimension two, the group of motions is amenable and there is therefore no paradoxical decomposition of the disk. This dichotomy is most apparent in the theory of von Neumann algebras: the amenable ones are completely classified by the work of Connes and Haagerup, while the non-amenable ones give rise to amazing rigidity theorems, especially within Sorin Popa’s deformation/rigidity theory. I will illustrate the gap between amenability and non-amenability for von Neumann algebras associated with countable groups, with locally compact groups, and with group actions on probability spaces.


Prof. Marie-France Vigneras

 

Paris 6
Thursday, February 1st, 2018
17:15 – CM4

Existence of supersingular representations over a field of characteristic p

Breuil's classification of these representations for GL(2,Qp) was the starting point of the p-adic Langlands correspondence, which relates representations of GL(2,Qp) to representations of the absolute Galois group Gal of Qp. They correspond to the irreducible 2-dimensional representations of Gal over a field of characteristic p. Why are these representations called supersingular? Which p-adic reductive groups possess them? The known examples concerned only groups of rank 1. I will present a method which makes it possible to construct such representations for “almost all” p-adic reductive groups.


Prof. Clément Hongler

 

EPFL
Wednesday, January 25th, 2018
15:00 – CM4

Statistical Field Theory and the Ising Model 

The developments of statistical mechanics and of quantum field theory are among the major achievements of 20th century’s science. In the second half of the century, these two subjects started to converge, resulting in some of the most remarkable successes of mathematical physics. At the heart of this convergence lies the conjecture that critical lattice models are connected, in the continuous limit, to conformally symmetric field theories. This conjecture has led to much insight into the nature of phase transitions and to beautiful formulae describing lattice models, which have remained unproven for decades.

In this talk, I will focus on the planar Ising model, perhaps the most studied lattice model, whose investigation has initiated much of the research in statistical mechanics. I will explain how, in the last ten years, we have developed tools to understand mathematically the emerging conformal symmetry of the model, and the connections with quantum field theory. This has led to the proof of celebrated conjectures for the Ising correlations and for the description of the emerging random geometry. I will then explain how these tools have yielded a rigorous formulation of the field theory describing this model, allowing one to make mathematical sense of the seminal ideas at the root of the subject of conformal field theory.


Prof. Sara Zahedi

 

KTH Royal Institute of Technology
Wednesday, November 1st, 2017
17:15 – CM5

Cut Finite Element Methods

In this talk I will present a new type of finite element methods that we refer to as Cut Finite Element Methods (CutFEM). CutFEM provides an efficient strategy for solving partial differential equations in evolving geometries. In CutFEM the dynamic geometry is allowed to cut through the background grid in an arbitrary fashion and remeshing processes are avoided. We will consider convection-diffusion equations on evolving surfaces (or interfaces) modeling the evolution of insoluble surfactants and a space-time cut finite element method based on quadrature in time. A stabilization term is added to the weak formulation which guarantees that the linear systems resulting from the method have bounded condition numbers independently of how the geometry cuts through the background mesh for linear as well as higher order elements.


Journée Georges de Rham 2017

Wednesday, March 8th, 2017 – CO2

15:20 – 18:15

Prof. Stéphane Mallat (ENS) and Prof. Sergei Tabachnikov (Penn State)

More information here: https://www.epfl.ch/schools/sb/research/math/de-rham/


Prof. Axel Munk

 

University of Goettingen and Max-Planck Institute for Biophysical Chemistry
Tuesday, February 14th, 2017
17:15 – CM4

Nanostatistics – Statistics for Nanoscopy

Conventional light microscopes have been used for centuries for the study of small length scales down to approximately 250 nm. Images from such a microscope are typically blurred and noisy, and the measurement error in such images can often be well approximated by Gaussian or Poisson noise. In the past, this approximation has been the focus of a multitude of deconvolution techniques in imaging. However, conventional microscopes have an intrinsic physical limit of resolution. Although this limit remained unchallenged for a century, it was broken for the first time in the 1990s with the advent of modern superresolution fluorescence microscopy techniques. Since then, superresolution fluorescence microscopy has become an indispensable tool for studying the structure and dynamics of living organisms, recently acknowledged with the Nobel prize in chemistry 2014. Current experimental advances go to the physical limits of imaging, where discrete quantum effects are predominant. Consequently, the data is inherently of a non-Gaussian statistical nature, and we argue that recent technological progress also challenges the long-standing Poisson assumption. Thus, analysis and exploitation of the discrete physical mechanisms of fluorescent molecules and light, as well as their distributions in time and space, have become necessary to achieve the highest resolution possible and to extract biologically relevant information.

In this talk we survey some modern fluorescence microscopy techniques from a statistical modeling and analysis perspective. In the first part we discuss spatially adaptive multiscale deconvolution estimation and testing methods for scanning-type microscopy. We illustrate that such methods benefit from recent advances in large-scale computing, mainly from convex optimization. In the second part of the talk we address questions of quantitative biology which require more detailed models that delve into sub-Poisson statistics. To this end we suggest a prototypical model for fluorophore dynamics and use it to quantify the number of proteins in a spot.


 
Prof. Hoài-Minh Nguyên

 

EPFL
Monday December 12th, 2016
18:30 – CM1

Negative index materials and their applications: recent mathematical progress

The study of negative index materials has attracted a lot of attention in the scientific community, not only because of their many potential interesting applications but also because of the challenges in understanding their appealing properties due to the sign-changing coefficients in the equations describing them. In this talk, I give a survey of recent mathematical progress in understanding the applications and properties of these materials. In particular, I discuss superlensing and cloaking using complementary media, cloaking an object via anomalous localized resonance, the possibility that a lens can become a cloak and conversely, and various conditions on the stability of these materials.


Prof. János Kollár

 

Princeton University
Wednesday, November 9th, 2016
16:15 – MEB 331

Celestial surfaces

We discuss a project, started by Kummer and Darboux, to describe all surfaces that contain at least 2 circles through every point.


Prof. Anton Alekseev

 

Université de Genève
Wednesday, October 19th, 2016
17:15 – CE2

Inequalities: from Hermitian matrices to planar networks

The same set of inequalities comes up in two problems of very different nature. The first one is the Horn problem in Linear Algebra asking for possible eigenvalues of a sum of two Hermitian matrices with given spectra. This problem has a rich history dating back to the work by H. Weyl in 1912. A complete solution was obtained in 1998 by Klyachko and by Knutson-Tao. The second problem is related to combinatorics of paths in weighted planar networks (a special type of planar graphs).

In the talk, we shall introduce the two problems and explain the relation between them which goes via symplectic geometry, the theory of total positivity and cluster algebras.
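
To give the flavour of the inequalities in question, the oldest and simplest one is due to Weyl: if A and B are n×n Hermitian matrices with eigenvalues listed in decreasing order, then
\[
\lambda_{i+j-1}(A+B) \;\le\; \lambda_i(A) + \lambda_j(B) \qquad \text{whenever } i+j-1 \le n .
\]
Together with the trace identity \operatorname{tr}(A+B) = \operatorname{tr}(A) + \operatorname{tr}(B), a finite list of linear inequalities of this kind characterizes the possible spectra; making that list explicit is the content of the Horn problem mentioned above.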


Prof. Arnaud Doucet

 

University of Oxford
Monday, October 17th, 2016
17:15 – MA11

The Correlated Pseudo-Marginal Method for Inference in Latent Variable Models

The pseudo-marginal algorithm is a popular variant of the Metropolis-Hastings scheme which allows us to sample asymptotically from a target probability density when we are only able to estimate an unnormalized version of this target unbiasedly. It has found numerous applications in Bayesian statistics as there are many scenarios where the likelihood function is intractable but can be estimated unbiasedly using Monte Carlo samples. For a fixed computing time, it has been shown in several recent contributions that an efficient implementation of the pseudo-marginal method requires the variance of the log-likelihood ratio estimator appearing in the acceptance probability of the algorithm to be of order 1, which in turn requires scaling the number of Monte Carlo samples linearly with the number of data points. We propose a modification of the pseudo-marginal algorithm, termed the correlated pseudo-marginal algorithm, which is based on a novel log-likelihood ratio estimator computed using the difference of two positively correlated log-likelihood estimators. This approach allows us to scale the number of Monte Carlo samples sub-linearly with the number of data points. A non-standard weak convergence analysis of the method will be presented. In our numerical examples, the efficiency of computations is increased relative to the pseudo-marginal by up to several orders of magnitude for large datasets.

This is joint work with George Deligiannidis and Michael K. Pitt: http://arxiv.org/abs/1511.04992
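
To make the mechanism concrete, here is a minimal sketch of a correlated pseudo-marginal sampler on a toy latent-variable model. The model, the importance-sampling likelihood estimator and all tuning constants are my own placeholders; only the overall idea (re-using the auxiliary Gaussian variables through a crank-Nicolson move so that successive likelihood estimates are positively correlated) reflects the method described above.

```python
# Correlated pseudo-marginal Metropolis-Hastings on a toy model (sketch).
import numpy as np

rng = np.random.default_rng(0)

# toy latent-variable model: y_i = theta + z_i + noise, z_i ~ N(0,1), noise ~ N(0,1)
y = rng.normal(1.0, np.sqrt(2.0), size=200)

def log_lik_hat(theta, u):
    """Unbiased importance-sampling estimate of the likelihood, driven by the
    fixed standard-normal draws u (shape: n_data x N); returns its logarithm."""
    z = u  # proposal = prior N(0,1) for the latent z
    w = np.exp(-0.5 * (y[:, None] - theta - z) ** 2) / np.sqrt(2 * np.pi)
    return np.sum(np.log(np.mean(w, axis=1)))

def correlated_pm(n_iter=5000, N=32, rho=0.99, step=0.2):
    theta = 0.0
    u = rng.standard_normal((y.size, N))
    ll = log_lik_hat(theta, u)
    samples = []
    for _ in range(n_iter):
        theta_prop = theta + step * rng.standard_normal()
        # crank-Nicolson move keeps u' ~ N(0,1) marginally, with corr(u, u') = rho
        u_prop = rho * u + np.sqrt(1 - rho**2) * rng.standard_normal(u.shape)
        ll_prop = log_lik_hat(theta_prop, u_prop)
        # flat prior on theta; accept with the usual ratio of likelihood estimates
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, u, ll = theta_prop, u_prop, ll_prop
        samples.append(theta)
    return np.array(samples)

print(correlated_pm()[2000:].mean())  # posterior mean of theta, roughly 1
```

With rho = 0 the scheme reduces to the standard pseudo-marginal algorithm, in which a fresh likelihood estimate is drawn at every iteration.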


Prof. Joel Spencer

 

New York University
Monday, October 10th, 2016
17:15 – CM1

Counting Connected Graphs

Let C(n,k) be the number of labelled connected graphs with n vertices and n-1+k edges. For k=0 (trees) we have Cayley’s Formula. We examine the asymptotics of C(n,k). There are several ranges depending on the asymptotic relationship between n and k. The approaches are a virtual cornucopia of modern probabilistic techniques. These include supercritical dominant components in random graphs, local limit laws, Brownian excursions, Parking functions and more.
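
For concreteness, the k = 0 case is Cayley's classical count of labelled trees:
\[
C(n,0) \;=\; n^{\,n-2},
\]
since a connected graph on n labelled vertices with n-1 edges is exactly a tree.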


Prof. Jonathan Rougier

 

University of Bristol
Thursday, June 30th, 2016
16:15 – CM4

A statistician’s viewpoint on weather, climate, and climate simulations

There is plenty of agreement about what we mean by ‘weather’, some agreement about ‘climate’, and quite a lot of confusion about ‘climate simulations’ – not about what they are, but about what they mean. Statisticians are very familiar with the underlying issues, which centre on uncertainty and our attempts to define and quantify it. I propose that climate simulators represent expert opinions about future weather, and should be treated accordingly. This in turn requires that we expose the inherently probabilistic nature of a climate simulator, so that we can interpret it as offering a set of bets on future weather outcomes. I will assess current practice in climate science from this perspective.


Prof. Maryna Viazovska

 

Humboldt University of Berlin
Thursday May 26th, 2016
14:15 – MA11

The sphere packing problem in dimensions 8 and 24

In this talk we will show that the sphere packing problem in dimensions 8 and 24 can be solved by a linear programming method. In 2003 N. Elkies and H. Cohn proved that the existence of a real function satisfying certain constraints leads to an upper bound for the sphere packing constant. Using this method they obtained almost sharp estimates in dimensions 8 and 24. We will show that functions providing exact bounds can be constructed explicitly as certain integral transforms of modular forms. Therefore, we solve the sphere packing problem in dimensions 8 and 24.
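
For orientation, here is the Cohn-Elkies criterion alluded to above, in a loose paraphrase that omits the precise admissibility hypotheses: if f : \mathbb{R}^d \to \mathbb{R} satisfies
\[
f(x) \le 0 \ \text{for } |x| \ge r, \qquad \hat f(t) \ge 0 \ \text{for all } t, \qquad f(0) = \hat f(0) > 0,
\]
then the density of every sphere packing in \mathbb{R}^d is at most the volume of a ball of radius r/2, that is, at most \frac{\pi^{d/2}}{\Gamma(d/2+1)}\left(\frac{r}{2}\right)^{d}. The functions built from modular forms make this bound sharp in dimensions 8 and 24, where it is attained by the E8 and Leech lattice packings.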


Prof. Christof Schütte

 

Zuse Institute Berlin
Wednesday, May 25th, 2016
17:00 – CM1

Computational Molecular Design: Mathematical Theory, High Performance Computing, In Vivo Experiments

Molecular dynamics and related computational methods enable the description of biological systems with all-atom detail. However, these approaches are limited regarding simulation times and system sizes. A systematic way to bridge the micro-macro scale range between molecular dynamics and experiments is to apply coarse-graining (CG) techniques. We will discuss Markov State Modelling, a CG technique that has attracted a lot of attention in physical chemistry, biophysics, and computational biology in recent years.

First, the key ideas of the mathematical theory and its algorithmic realization will be explained; next we will discuss the question of how to apply it to understanding ligand-receptor binding; and last we will ask whether this may help in designing ligands with prescribed function.
All of this will be illustrated by telling the story of the design process of a pain relief drug without concealing the potential pitfalls and obstacles.
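
As a generic illustration of the Markov State Modelling workflow (my own toy example, not the machinery of the talk): discretize a long trajectory into states, count transitions at a fixed lag time, and row-normalize the counts; the slow eigenvalues of the resulting transition matrix encode the metastable, coarse-grained kinetics.

```python
# Minimal Markov State Model sketch on a toy double-well trajectory.
import numpy as np

rng = np.random.default_rng(0)

# toy 1D overdamped dynamics in the double-well potential V(x) = (x^2 - 1)^2
x, dt, beta = -1.0, 1e-3, 3.0
traj = np.empty(200_000)
for i in range(traj.size):
    force = -4 * x * (x**2 - 1)
    x += force * dt + np.sqrt(2 * dt / beta) * rng.standard_normal()
    traj[i] = x

# discretize the state space into bins and count transitions at lag tau
n_states, tau = 30, 100
bins = np.linspace(-2, 2, n_states + 1)
dtraj = np.clip(np.digitize(traj, bins) - 1, 0, n_states - 1)
C = np.zeros((n_states, n_states))
for a, b in zip(dtraj[:-tau], dtraj[tau:]):
    C[a, b] += 1
T = C / np.maximum(C.sum(axis=1, keepdims=True), 1)  # row-stochastic estimate

# slow eigenvalues of T give implied relaxation timescales  t_i = -tau*dt / ln(lambda_i)
lam = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
print("slowest implied timescale:", -tau * dt / np.log(lam[1]))
```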


Prof. Karl Kunisch

 

Karl-Franzens Universität Graz
Thursday, May 19th, 2016
14:15 – MED 0 1418

On sparsity constrained PDE control

It is only recently that the choice of the control cost functional in optimal control problems related to partial differential equations has been receiving special attention. It has been recognized that the use of non-smooth functionals is of particular interest. L1- and measure-valued controls lead to sparsity properties of the optimal controls. This can be used to design “cheap” controls, or to solve optimal actuator or inverse source problems. Non-smooth costs are also efficient for addressing multi-bang control and switching control problems.

To cope with the resulting numerical challenges we propose the use of semi-smooth Newton methods. Applications to optimal control for the wave equation and to quantum control problems will be presented.


Sir John Ball

 

University of Oxford
Tuesday March 8th, 2016
17:15 – CM4

Interfaces and metastability in solid and liquid crystals

When a new phase is nucleated in a martensitic solid phase transformation, it has to fit geometrically onto the parent phase, forming interfaces between the phases accompanied by possibly complex microstructure. The talk will describe some mathematical issues involved in understanding such questions of compatibility and their influence on metastability, as illustrated by recent experimental discoveries.

For liquid crystals, planar (as opposed to point and line) defects are not usually considered, but there are some situations in which they seem to be relevant, such as for smectic A thin films, where compatibility issues not unlike those for martensitic materials arise.


Prof. Alexander Braverman

 

University of Toronto
Monday, March 7th, 2016
16:15 – CM4

The Tamagawa Number Formula for Affine Kac-Moody Groups (joint work with D. Kazhdan)

Let G be an algebraic semi-simple group (e.g. G = SL(n)). Let also F be a global field (e.g. F = Q) and let A denote its adele ring. The “usual” Tamagawa number formula (proved by Langlands in 1966) computes the (suitably normalized) volume of the quotient G(A)/G(F) in terms of values of the zeta-function of F at certain numbers, called the exponents of G (these numbers are equal to 2, 3,…, n when G = SL(n)). When F is the field of rational functions on an algebraic curve X over a finite field, this computation is closely related to the so called Atiyah-Bott computation of the cohomology of the moduli space of G-bundles on a smooth projective curve.

After explaining the above results I am going to present a (somewhat indirect) generalization of the Tamagawa formula to the case when G is an affine Kac-Moody group and F is a function field. Surprisingly, the proof heavily uses the so called Macdonald constant term identity. We are going to discuss possible (conjectural) geometric interpretations of this formula (related to moduli spaces of bundles on surfaces).


Prof. Florian Pop

 

University of Pennsylvania
Thursday March 3rd, 2016
17:15 – MA11

First order effectiveness in arithmetic geometry

In my talk I plan to present / explain one of the most fundamental open questions about first order effectiveness in arithmetic geometry, the so called “elementary equivalence versus isomorphism problem.” The problem is, among other things, about giving uniform concrete formulas which characterize the number fields, the function fields of arithmetic curves, etc., up to isomorphism. The question in general is wide open, but there is relatively recent promising progress, on which I plan to report.


Prof. Martin Wainwright

 

University of California at Berkeley
Thursday November 5th, 2015
17:15 – CM5

Statistical estimation in high dimensions: Rigorous results for nonconvex optimization

The classical complexity barrier in continuous optimization is between convex (solvable in polynomial time) and nonconvex (intractable in the worst-case setting). However, many problems of interest are not constructed adversarially, but instead arise from probabilistic models of scientific phenomena. Can we provide rigorous guarantees for such random ensembles of nonconvex optimization problems? In this talk, we survey various positive answers to this question, including optimal results for sparse regression with nonconvex penalties, and direct approaches to low-rank matrix recovery. All of these results involve natural weakenings of convexity that hold for various classes of nonconvex functions, thus shifting the barrier between tractable and intractable.


Rutgers University, New Jersey
Thursday, April 30th, 2015
17:15 – BI A0 448

The exceptional Character Extravaganza

One real character of Dirichlet makes a great noise in analytic number theory. Nobody knows how to silence it without recourse to the Riemann hypothesis. During the lecture I will show two sides of the story. If this character does not exist, we may derive fundamental results about the class numbers of imaginary quadratic fields. If it does exist, we have tools for wonderful results about prime numbers which go beyond the threshold of the Riemann hypothesis.


Prof. Hiraku Nakajima

 

Kyoto University
Thursday, March 26th, 2015
17:15 – BI A0 448

Topological quantum field theories and Coulomb branches

Topological quantum field theories (TQFT) are physical theories that give (differential) topological invariants of low-dimensional manifolds. Very roughly, they can be thought of as Chern-Weil theory formally applied to infinite-dimensional vector bundles over infinite-dimensional spaces. I am recently interested in what physicists call Coulomb branches of TQFT. They are physically important, and are also supposed to be relevant to the understanding of TQFT.

Prof. Jean-Marc Schlenker

 

University of Luxembourg
Thursday, February 26th, 2015
17:15 – MA11

Three applications of anti-de Sitter geometry

We will describe three recent and apparently unconnected results: on the combinatorics of polyhedra inscribed in a quadric, on a canonical extension to the disk of homeomorphisms of the circle, and on the topology of moduli spaces of surface group representations. What those results have in common is that their proofs are all based on 3-dimensional anti-de Sitter geometry, the Lorentzian cousin of hyperbolic geometry. We will briefly describe the historical origin and a few striking properties of this geometry.

Prof. Peter Bühlmann

 

ETH Zürich
Thursday, November 13th, 2014
17:15 – CM4

The Mathematics of High-Dimensional Statistics

In many areas of science, data arise that are used for inference in complex models. The statistical inference problem is high-dimensional if the number of parameters in the model exceeds the number of observations in the data. While standard statistical procedures fail in such cases, sparse methods have proven to be successful for a broad spectrum of applications, including the celebrated compressed sensing methodology (Candes, Romberg and Tao, 2006; Donoho, 2006). Sparsity is beneficial for complexity regularization leading to near-optimal statistical performance and for efficient computation, and it also plays a crucial role in quantifying uncertainties with statistical confidence statements. We explain the main principles and the corresponding mathematical developments (random matrix theory, concentration inequalities), and we will illustrate the methods in an application from genetics.
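
A canonical example of such a sparse method (mentioned here purely as an illustration, not necessarily the estimator treated in the talk) is the lasso for a linear model y = X\beta + \varepsilon with n observations and p \gg n parameters:
\[
\hat\beta \;=\; \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \; \frac{1}{2n}\,\| y - X\beta \|_2^2 \;+\; \lambda \, \| \beta \|_1 ,
\]
where the \ell_1 penalty forces many coordinates of \hat\beta to be exactly zero; this sparsity is what drives the regularization, computation and uncertainty quantification mentioned above.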


Prof. Andrew Hodges

 

University of Oxford
Thursday, November 6th, 2014
17:15 – CM4

Amplitudes, twistors and graphs

Since 2008 the theory of scattering amplitudes for the most fundamental quantum fields has been transformed. The old theory of Feynman diagrams, whilst correct, is completely intractable for non-Abelian gauge fields. A new theory based on twistor geometry, exploiting the hidden conformal symmetries of the physics, allows enormous simplifications. As one aspect of this theory, the ‘twistor diagrams’, proposed by Penrose over 40 years ago, turn out to have a remarkable new graph-theoretic interpretation. Since 2012, Arkani-Hamed et al. have shown this to lead to a quite new geometrical construction: the ‘Amplituhedron’.


Prof. Michael Overton

 

Courant Institute (New York University)
Thursday, October 2nd, 2014
17:15 – CM4

Nonsmooth Optimization and Crouzeix’s Conjecture

There are many algorithms for minimization when the objective function is differentiable, convex, or has some other known structure, but few options when none of the above hold, particularly when the objective function is nonsmooth at minimizers, as is often the case in applications. BFGS is a well known optimization method, developed for smooth problems, but which is remarkably effective for nonsmooth problems too, despite a lack of convergence theory. We apply BFGS to investigate a challenging problem in the theory of non-normal matrices called Crouzeix’s conjecture, which we will explain in some detail. We compute the Crouzeix objective function using CHEBFUN, a very useful tool that we will also discuss briefly.
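
For reference, Crouzeix's conjecture asserts that for every square matrix A and every polynomial p,
\[
\| p(A) \|_2 \;\le\; 2 \, \max_{z \in W(A)} | p(z) |, \qquad W(A) = \{\, x^* A x \;:\; \| x \|_2 = 1 \,\},
\]
where W(A) is the numerical range (field of values) of A and the norm on the left is the operator 2-norm.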


Prof. Richard James

 

University of Oxford
Thursday, May 1st, 2014
17:15 – CM4

Materials from Mathematics

We present some recent examples of new materials whose synthesis was guided by essentially mathematical ideas from group theory, the calculus of variations, PDE and applied mathematics. They are materials that undergo phase transformations from one crystal structure to another, with a change of shape but without diffusion. They are hard materials, but nevertheless show liquid-like changes of microstructure under a fraction of a degree change of temperature. The underlying mathematical theory was designed to identify alloys that show exceptional reversibility of the phase transformation. The new alloys do show unprecedented levels of reversibility, but also raise fundamental new questions for mathematical theory. Some of these alloys can be used to convert heat to electricity (without the need of a separate electrical generator), and provide interesting possible ways to recover the vast amounts of energy stored on earth at small temperature differences. The lecture will be mathematically/experimentally nontechnical and suitable for a broad audience. (http://www.aem.umn.edu/~james/research/)

Prof. Michel Broué

 

Université Paris Diderot
Thursday, March 13th, 2014
17:15 – CM4

News from Spetses, or GL_n(x) for x an indeterminate?

Let GL_n(q) be the group of invertible n×n matrices with coefficients in a finite field with q elements. The order of GL_n(q) is the value at x = q of the polynomial x^{\binom{n}{2}} \prod_{i=1}^{n} (x^i - 1). Not only the orders of the “natural” subgroups, but also the Sylow theorems and the dimensions of the (complex) irreducible representations (and even the modular representations, i.e. representations in nonzero characteristic) of GL_n(q) can, in an analogous way, be described by polynomials evaluated at x = q. It is as if there were an object GL_n(x) which specialized to GL_n(q) for x = q. Identical phenomena can be observed for all the other groups of Lie type over finite fields, which are built from Weyl groups. For the past twenty years, an effort has been under way to construct analogous polynomial data, not only for Weyl groups, but also for the other finite Coxeter groups, and even for groups generated by pseudo-reflections: this is the programme known as “Spetses”, about which we hope to say a few words.


Prof. Emmanuel Kowalski

 

ETH Zurich
Thursday, December 5th, 2013
17:15 – CM4

Gaps between primes, after Y. Zhang and J. Maynard

The talk will recall some of the classical questions concerning prime numbers and present the approaches from analytic number theory that have been discovered to study these questions. We will then focus in particular on the extraordinary recent results which make it possible to prove the existence of infinitely many pairs of prime numbers within a bounded distance of one another.

Prof. Bernd Sturmfels

 

University of California, Berkeley
Wednesday, October 23rd, 2013
17:15 – CM5

Maximum Likelihood for Matrices with Rank Constraints

Maximum likelihood estimation is a fundamental computational task in statistics. We discuss this problem for manifolds of low-rank matrices. These represent mixtures of independent distributions of two discrete random variables. This non-convex optimization problem leads to some beautiful geometry, topology, and combinatorics. We explain how numerical algebraic geometry is used to find the global maximum of the likelihood function, and we present a remarkable duality theorem due to Draisma and Rodriguez.

Prof. Gerhard Rosenberger

 

Universität Hamburg
Thursday, April 25th, 2013
17:15 – CM4

The Surface Group Conjecture and Embeddings of Surface Groups into Doubles of Free Groups

The surface group conjecture, as originally proposed in the Kourovka Notebook by Melnikov, was the following problem. Surface Group Conjecture: suppose that G is a residually finite, non-free, non-cyclic one-relator group; then G is a surface group. In this form the conjecture is false: the Baumslag-Solitar groups BS(1, n) = < a, b | a^{-1} b a = b^n >, for n in Z \ {0}, are residually finite and satisfy Melnikov's hypotheses. We then have the following modified conjecture. Surface Group Conjecture A: suppose that G is a residually finite, non-free, non-cyclic one-relator group such that every subgroup of finite index is again a one-relator group; then G is a surface group or a Baumslag-Solitar group BS(1, n) for some integer n. We discuss recent results on Surface Group Conjecture A. It turns out that another property of surface groups is important for handling the conjecture: a group G has property IF if every subgroup of infinite index is a free group. Related to the surface group conjecture is a conjecture of Gromov which states that a one-ended word-hyperbolic group G must contain a subgroup that is isomorphic to a word-hyperbolic surface group. This is a very difficult question, and we restrict ourselves to the special case that G is a double G = F_1 *_{u=v} F_2 of two free groups F_1, F_2. For this case we discuss some recent results on Gromov's conjecture.


Prof. Claire Voisin

 

Ecole Polytechnique
Thursday, March 21st, 2013
17:15 – CM4

Integral Hodge Classes and Birational Invariants

The Hodge conjecture characterizes Betti cohomology classes on a complex projective manifold, which are combinations with rational coefficients of classes of algebraic subvarieties. Thanks to Atiyah-Hirzebruch’s work, one knows that the corresponding statement with integral coefficients is wrong. I will describe further counterexamples with integral coefficients, some leading to interesting birational invariants. In particular, the defect of the Hodge conjecture with integral coefficients for codimension 2 cycles is related to degree 3 unramified cohomology (joint work with Colliot-Thélène)

Prof. Desmond J. Higham

 

University of Strathclyde
Thursday, December 6th, 2012
17:15 – MA11

Twitter’s Big Hitters

Online human interactions take place within a dynamic hierarchy, where social influence is determined by qualities such as status, eloquence, trustworthiness, authority and persuasiveness. In this work, we consider topic-based Twitter interaction networks, and address the task of identifying influential players. Our motivation is the strong desire of many commercial entities to increase their social media presence by engaging positively with pivotal bloggers and Tweeters. We define the concept of an active node network subsequence, which provides a time-dependent summary of relevant Twitter activity. We then describe the use of new centrality measures, which apply basic matrix computations to sequences of adjacency matrices. We benchmark the computational results against independent feedback from social media experts working in commercial settings. This is joint work with Fiona Ainley, Peter Grindrod, Peter Laflin, Alex Mantzaris and Amanda Otley.
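
One concrete example of a centrality built from a sequence of adjacency matrices is the Katz-style dynamic communicability matrix studied by Grindrod, Higham and coauthors in related work; whether it is exactly the measure benchmarked in this talk is my assumption, so the sketch below is illustrative only.

```python
# Dynamic (Katz-style) communicability over adjacency matrices A_1, ..., A_T,
# one per time window:  Q = (I - a A_1)^{-1} ... (I - a A_T)^{-1}.
# The random "Twitter-like" data below is a placeholder.
import numpy as np

rng = np.random.default_rng(0)
n, T, a = 50, 10, 0.05   # a must be below 1 / (max spectral radius of the A_k)
A_seq = [(rng.random((n, n)) < 0.05).astype(float) for _ in range(T)]

Q = np.eye(n)
for A in A_seq:
    Q = Q @ np.linalg.inv(np.eye(n) - a * A)  # accumulate time-respecting walks

broadcast = Q.sum(axis=1)   # ability to spread messages forward in time
receive = Q.sum(axis=0)     # ability to receive messages over time
print("top broadcasters:", np.argsort(-broadcast)[:5])
```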


Prof. Mihalis C. Dafermos

 

University of Cambridge
Thursday, November 29th, 2012
17:15 – MA11

The black hole stability problem in general relativity

Black holes are one of the most celebrated predictions of general relativity. Our only real intuition for these objects stems from the remarkable properties of a family of explicit solutions of the Einstein equations, the so-called Schwarzschild and Kerr spacetimes. The question of dynamical stability of these spacetimes remains completely open. Considerable progress has been made in recent years, however, in understanding at least the linear aspects of the stability problem, and this is currently a very active area at the interface of hyperbolic pde, differential geometry and physics. This talk will give an introductory account of black holes in general relativity and review the status of our current mathematical understanding of the stability problem.


Prof. Richard E. Schwartz (SMS Public Lecture)

 

Brown University and University of Oxford
Friday, October 19th, 2012
18:15 – ELA 2

Playing Billiards on the Outside of the Table, and Other Games


Prof. David Donoho (Joint Mathematics, I&C and STI Colloquium)

 

Stanford University
Friday, October 5th, 2012
12:15 – Rolex Learning Centre Forum

Compressed Sensing: Examples, Prehistory, and Predictions

From 2004 to today, the research topic “Compressed Sensing” (CS) became popular in applied mathematics, signal processing, and information theory, and was applied to fields as distant as computational biology and astronomical image processing. Some early papers have gained thousands of citations. Part of the attraction is paradox: CS claims to correctly solve systems of equations with fewer equations than unknowns. One success story for CS comes in pediatric magnetic resonance imaging, where blind trials published in a flagship medical journal by Vasanawala, Lustig et al. gave a 6X MRI speedup while maintaining diagnostic quality images. Concretely, children needed to sit still in an MRI machine for about 1 minute rather than 8 minutes. The prehistory of CS goes back on a metaphoric level to coin-balance weighing puzzles known for millennia and more specifically to convex geometry known for a hundred years, and continues throughout the last century in several very different fields of research. Part of the spectacular recent interest is that several fields, from information theory to high-dimensional geometry, are convinced that they saw the key ideas first, and that they know the best way to think about it. This talk will review success stories, precursors, and four modern ways of understanding the problem, from four different disciplines.
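
To make the “fewer equations than unknowns” claim concrete, here is a minimal sketch (my illustration, not material from the talk) of recovering a sparse vector from underdetermined linear measurements by l1 minimization (basis pursuit), written as a linear program:

```python
# Basis pursuit sketch: recover sparse x from y = A x with m << n equations,
# by minimizing ||x||_1 subject to A x = y.
# LP reformulation: x = u - v with u, v >= 0, minimize sum(u) + sum(v).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                      # unknowns, equations, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

c = np.ones(2 * n)                        # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                 # A u - A v = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With Gaussian measurements and m comfortably above the order of k log(n/k), the l1 solution typically coincides with the sparse x_true, which is the paradox-resolving phenomenon described above.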