Please see the schedule below for the time and place of each talk.
Some seminars will take place on the Lausanne campus and others at Campus Biotech.
Program
Date and place  Title  Speaker 

20.10.2016 15h B1.02 Videoconference Room Campus Biotech 
Comparing sparse brain activity networks  Ben Cassidy Columbia University 
28.10.2016 15h B1.06 Lakeside Campus Biotech 
CANCELLED  Heather Harrington Oxford University 
10.11.2016 11h MA 31 
Statistical shape analysis using the persistent homology transform  Kate Turner EPFL 
24.11.2016 10h15 MA 31 
Topological analysis in information spaces  Hubert Wagner IST Vienna 
21.02.2017 14h15 CM 09 
Universality of the homotopy interleaving distance: towards an “approximate homotopy theory” foundation for topological data analysis  Mike Lesnick Princeton 
28.02.2017 14h15 GC A1 416 
Generalizations of the Rips filtration for quasimetric spaces with corresponding stability results  Kate Turner EPFL 
03.03.2017 11h B1.06 Lakeside Campus Biotech 
An exposition of memory-augmented artificial neural networks  William Mycroft Sheffield 
07.03.2017 14h15 MA 10 
Approximate interleaving between persistence sets of ends  Stefania Ebli IST 
08.03.2017 15h B1.06 Lakeside Campus Biotech 
Factorization of cubical areas  Nicolas Ninin CEA/Saclay 
10.03.2017 15h B1.06 Lakeside Campus Biotech 
Fat graphs, surfaces, and their applications  Daniela Egas Berlin 
20.03.2017 10h15 MA 31 
Counting functions on the interval  Justin Curry Duke 
21.03.2017 14h15 GC A1 416 
Computational challenges we face with multiparameter persistence  Wojciech Chacholski KTH 
28.03.2017 14h15 GC A1 416 
Quantifying similarity of pore geometry in nanoporous materials  Senja Barthel EPFL 
25.04.2017 14h15 C0 121 
A theoretical framework for the analysis of Mapper  Mathieu Carrière INRIA/Saclay 
30.05.2017 14h15 MA 10 
Reconnection, vortex knots and the fourth dimension  Louis Kauffman University of Illinois at Chicago 
19.06.2017 14h15 CM 012 
Combinatorics, geometry and topology of neural codes  Dev Sinha Oregon 
21.06.2017 11h B1.02 Videoconference Room Campus Biotech 
Canonical forms and persistence modules: toward combinatorial foundations in TDA  Greg Henselman University of Pennsylvania 
07.07.2017 11h B1.02 Videoconference Room Campus Biotech 
Structure and evolution of topological brain scaffolds  Giovanni Petri ISI Foundation 
Abstracts
Cassidy: Functional magnetic resonance imaging of resting state brain activity has become a mainstay of modern neuroscience research. However, there are problems with existing methods for identifying, characterizing and comparing networks obtained from fMRI data, leading to many conflicting results in neuroimaging research.
Harrington: Persistent homology (PH) is a technique in topological data analysis that allows one to examine features in data across multiple scales in a robust and mathematically principled manner, and it is being applied to an increasingly diverse set of applications. We investigate applications of PH to dynamics and networks, focusing on two settings: dynamics “on” a network and dynamics “of” a network. We analyze a contagion spreading on a network using persistent homology. Next, we investigate a network that changes in time and show that persistent homology may be useful both for distinguishing temporal distributions and for providing a high-level summary of temporal structure. We discuss how to extend each application to the multiple-parameter setting. Together, these two investigations illustrate that persistent homology can be very illuminating in the study of networks and their applications.
Turner: In statistical shape analysis we want methods to quantify how different two shapes are. In this talk I will outline a method using the topological tool of persistent homology. I will define the Persistent Homology Transform, which considers a family of persistence diagrams constructed from height functions in different directions. This transform is injective, stable, and can effectively describe geometrical features without the use of landmarks. The talk will end with a variety of example applications.
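The height-function ingredient of the Persistent Homology Transform can be illustrated with a short sketch. This is not the full transform (which aggregates persistence diagrams over all directions); it only computes, for a toy polygon, the filtration values of a sublevel-set filtration by height in one chosen direction, with a vertex entering at its height and an edge entering when its later endpoint does. All data below are made up for illustration.

```python
import math

def height_filtration(vertices, edges, direction):
    """Filtration values for a sublevel-set filtration by height in the
    given direction: each vertex enters at its height, each edge enters
    at the height of its higher endpoint (sketch only, not the full PHT)."""
    dx, dy = direction
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm  # unit direction vector
    v_heights = [x * dx + y * dy for (x, y) in vertices]
    e_heights = [max(v_heights[i], v_heights[j]) for (i, j) in edges]
    return v_heights, e_heights

# Toy shape: a square with corners (+-1, +-1), filtered in direction (0, 1).
verts = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
vh, eh = height_filtration(verts, edges, (0.0, 1.0))
# vh = [-1.0, -1.0, 1.0, 1.0]; eh = [-1.0, 1.0, 1.0, 1.0]
```

Repeating this for many directions and recording the resulting persistence diagrams is, schematically, what the transform does.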
Wagner: Understanding high-dimensional data remains a challenging problem. Computational topology, in an effort dubbed Topological Data Analysis (TDA), promises to simplify, characterize and compare such data. However, TDA focuses on Euclidean spaces, while many types of high-dimensional data naturally live in non-Euclidean ones. Spaces derived from text, speech, image, and similar data are best characterized by non-metric dissimilarities, many of which are inspired by information-theoretic concepts. Such spaces will be called information spaces.
I will present the theoretical foundations of topological analysis in information spaces. First, the intuition behind basic computational topology methods is given. Then, various dissimilarity measures are defined, along with their information-theoretic and geometric interpretations. Finally, I will show that the framework of TDA can be extended to information spaces and discuss the implications.
No previous knowledge of (computational) topology or information theory is required. This is joint work with Herbert Edelsbrunner and Ziga Virk. We are looking for interesting, practical applications of the above!
Lesnick: We introduce and study homotopy interleavings between filtered topological spaces. These are homotopy-invariant analogues of interleavings, objects commonly used in topological data analysis to articulate stability and inference theorems. Intuitively, whereas a strict interleaving between filtered spaces X and Y certifies that X and Y are approximately isomorphic, a homotopy interleaving between X and Y certifies that X and Y are approximately weakly equivalent.
The main results of this paper are that homotopy interleavings induce an extended pseudometric d_HI on filtered spaces, and that this is the universal pseudometric satisfying natural stability and homotopy invariance axioms. To motivate these axioms, we also observe that d_HI (or more generally, any pseudometric satisfying these two axioms and an additional “homology bounding” axiom) can be used to formulate lifts of fundamental TDA theorems from the algebraic (homological) level to the level of filtered spaces.
This is joint work with Andrew Blumberg.
Turner: Rips filtrations over a finite metric space and their corresponding persistent homology are prominent methods in Topological Data Analysis to summarize the “shape” of data. For a finite metric space X and distance r, the traditional Rips complex with parameter r is the flag complex whose vertices are the points of X and whose edges are {[x,y] : d(x,y) ≤ r}. From considering how the homology of these complexes evolves, we can create persistence modules (and their associated barcodes and persistence diagrams). Crucial to their use is the stability result: if X and Y are finite metric spaces, then the bottleneck distance between the persistence modules constructed from their Rips filtrations is bounded by 2d_{GH}(X,Y) (where d_{GH} is the Gromov–Hausdorff distance). Using the asymmetry of the distance function, we define four different constructions analogous to the persistent homology of the Rips filtration and show that they are also stable with respect to the Gromov–Hausdorff distance. These constructions involve ordered-tuple homology, symmetric functions of the distance function, strongly connected components, and poset topology.
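The flag condition in the abstract's definition (a simplex is present whenever all its edges are) is simple enough to sketch directly. The following toy code builds the edges and triangles of the Rips complex at a fixed parameter r from a distance matrix; the point set and threshold are made-up example data, not from the talk.

```python
from itertools import combinations

def rips_simplices(dist, r):
    """Edges and triangles of the Rips flag complex at parameter r.
    dist is a symmetric distance matrix as a list of lists."""
    n = len(dist)
    connected = lambda i, j: dist[i][j] <= r
    edges = [(i, j) for i, j in combinations(range(n), 2) if connected(i, j)]
    # Flag condition: a triangle enters as soon as all three of its edges have.
    triangles = [(i, j, k) for i, j, k in combinations(range(n), 3)
                 if connected(i, j) and connected(i, k) and connected(j, k)]
    return edges, triangles

# Four points on a line at positions 0, 1, 2, 5, with threshold r = 2.
pts = [0, 1, 2, 5]
dist = [[abs(a - b) for b in pts] for a in pts]
e, t = rips_simplices(dist, 2)
# e = [(0, 1), (0, 2), (1, 2)]; t = [(0, 1, 2)]
```

Varying r and tracking how these simplices appear is what produces the filtration whose persistent homology the talk studies.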
Mycroft: Biologically inspired machine learning algorithms known as artificial neural networks have been around since the 1950s, but have rapidly gained popularity over the last decade due to increases in computational power and the sheer volume of data available. The simplest forms of these, feedforward neural networks, have two major shortcomings: they require input of fixed length and handle each input independently. As such, these algorithms have no “memory” and are unsuitable for sequence-learning problems. Simple variants, recurrent neural networks, equip the network with a basic internal memory and have shown success in handling such problems. However, this memory is inherently short-lived and these networks struggle to learn longer-term dependencies. A very recent biologically inspired variant, the neural Turing machine, augments the network with an external memory and shows potential to address such problems.
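The contrast between feedforward statelessness and recurrent memory can be illustrated with a minimal vanilla RNN cell. The weights, dimensions and inputs below are made up for illustration; this is a sketch of the general recurrence, not anything from the talk.

```python
import math

def rnn_step(x, h, W_x, W_h, b):
    """One step of a vanilla recurrent cell: the new hidden state mixes the
    current input x with the previous hidden state h (the 'memory')."""
    return [math.tanh(sum(wx * xi for wx, xi in zip(W_x[k], x))
                      + sum(wh * hj for wh, hj in zip(W_h[k], h))
                      + b[k])
            for k in range(len(b))]

# Toy 1-dim input, 2-dim hidden state, made-up weights.
W_x = [[0.5], [-0.3]]
W_h = [[0.1, 0.0], [0.0, 0.1]]
b = [0.0, 0.0]

h = [0.0, 0.0]
for x in ([1.0], [0.0], [0.0]):
    # After the first nonzero input, later steps still carry a (decaying)
    # trace of it through h -- exactly the short-lived memory the abstract
    # says recurrent networks have and feedforward networks lack.
    h = rnn_step(x, h, W_x, W_h, b)
```

Note how quickly the trace of the first input shrinks at each step; this decay is the "inherently short-lived" memory that neural Turing machines try to overcome with an external store.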
Ebli: The notion of ends of a topological space was introduced in 1930 by Hans Freudenthal. Intuitively, the ends represent the directions in which a space extends to infinity and are formally defined in terms of covers of the space by nested sequences of compact sets. More precisely, the idea is to assign to every cover of the space by a nested sequence of compact sets an inverse system, called the persistence set of ends, and define the set of ends as its inverse limit.
The goal of this presentation is to show that there exists an approximate interleaving between the persistence sets of ends of quasi-isometric metric spaces. A straightforward corollary is that the set of ends is invariant under quasi-isometries. The content of this presentation is the result of my two-month internship at IST Austria.
Ninin: Concurrent programs are notoriously difficult to analyze due to the exponential explosion of execution traces. Being able to decompose such programs into independent parts prior to analysis is thus highly helpful.
We use directed algebraic topology to model concurrent programs, where an execution trace is a path in a topological space (R^n when there are n threads). We call such spaces cubical areas. In this setting, the decomposition of a program corresponds to the factorization of its geometric model. Certain categories can be defined from these spaces, such as the category of components, which can also be factorized.
In the first part, we will present a simple factorization algorithm with very efficient heuristics. In the second part, we will discuss factorizing the categories and how to compare the factorizations of the space with those of the categories.
Egas: Fat graphs are a combinatorial model invented by Harer and Penner to study the moduli space of Riemann surfaces. In this talk, I will first describe the structure of these objects from the perspective of combinatorial topology and their relations to surfaces and their moduli. Thereafter, I will give an overview of how these moduli space techniques have been applied in biology to the study of proteins and RNA structures, and how they have potential applications to mobile sensor networks.
Curry: Topology offers a set of descriptors—trees, persistence diagrams, and sheaves—for the analysis of data where shape, broadly speaking, is important. I will present a new technique called “chiral merge trees” especially suited to the study of time series. Counting the number of chiral merge trees that realize a given persistence diagram refines Arnold’s Calculus of Snakes and has a suggestive entropic interpretation. Since the space of trees is CAT(0), the existence of unique Fréchet means overcomes certain statistical challenges in persistence. Finally, I will discuss how constructible cosheaves provide a unifying data structure for the study of multivariate data, where questions of numerical approximation and convergence are leading to the study of analysis on these categorically defined structures.
Chacholski: I will start by presenting what I believe persistence is. Traditionally, persistence is associated with barcoding. This narrow view, however, is a serious obstruction to extending the notion of persistence to a multiparameter setting. My aim is to propose a new way of looking at persistence and illustrate how it can be used to define stable invariants of multiparameter data systems. I will then explain the computational differences between the one-parameter and multiparameter situations.
Barthel: The material properties of nanoporous materials like zeolites and metal-organic frameworks strongly depend on their pore systems. We present a persistent-homology descriptor to describe and compare the shape of pores, which allows us to screen databases for materials whose pore shapes are similar to those of materials with good properties with respect to a given application. We find that the zeolites known to be best for methane storage belong to several distinct classes of pore shapes that each require different optimization strategies. This is in contrast to the common belief that the best materials for methane storage all share a similar heat of adsorption.
Carrière: Mapper is probably the most widely used TDA (Topological Data Analysis) tool in the applied sciences and industry. Its main application is in exploratory analysis, where it provides novel data representations that allow for a higherlevel understanding of the geometric structures underlying the data. The output of Mapper takes the form of a graph, whose vertices represent homogeneous subpopulations of the data, and whose edges represent certain types of proximity relations. Nevertheless, the inherent instability of the output and the difficult parameter tuning make the method rather difficult to use in practice. This talk will focus on the study of the structural properties of the graphs produced by Mapper, together with their partial stability properties, with a view towards the design of new tools to help users set up the parameters and interpret the outputs.
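Mapper involves choices of filter function, cover, and clustering algorithm; the following is a toy one-dimensional sketch of the construction the abstract describes (vertices = clusters of cover-preimages, edges = clusters sharing points), not the speaker's implementation. The identity filter, interval cover, gap-based clustering, and all data below are illustrative assumptions.

```python
def mapper_1d(points, intervals, gap=1.0):
    """Toy Mapper on 1-D data with the identity filter: cover the range by
    overlapping intervals, cluster each preimage by splitting at jumps
    larger than `gap`, and connect clusters that share a data point."""
    nodes = []  # each node is a frozenset of point indices
    for lo, hi in intervals:
        idx = sorted(i for i, p in enumerate(points) if lo <= p <= hi)
        cluster = []
        for i in idx:
            if cluster and points[i] - points[cluster[-1]] > gap:
                nodes.append(frozenset(cluster))
                cluster = []
            cluster.append(i)
        if cluster:
            nodes.append(frozenset(cluster))
    # An edge joins two nodes whenever their point sets overlap.
    edges = [(a, b) for a in range(len(nodes))
             for b in range(a + 1, len(nodes)) if nodes[a] & nodes[b]]
    return nodes, edges

# Six evenly spaced points covered by two overlapping intervals:
pts = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
nodes, edges = mapper_1d(pts, [(-0.5, 3.5), (2.5, 5.5)])
# Two clusters, joined by one edge because they share the point at 3.0.
```

The instability the talk addresses is visible even here: nudging the interval endpoints or the gap threshold can change the number of nodes and edges of the output graph.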
Kauffman: Vortex knots tend to unravel into collections of unlinked circles by writhe-preserving reconnections. We can model this unravelling by examining the world line of the knot, viewing each reconnection as a saddle-point transition. The world line is then seen as an oriented cobordism of the knot to a disjoint collection of circles. Cap each circle with a disk (for the mathematics) and the world line becomes an oriented surface in four-space whose genus can be no more than one-half the number of reconnections. Now turn to knot theory between dimensions three and four and find that this genus can be no less than one-half the Murasugi signature of the knot. Thus the number of reconnections needed to unravel a vortex knot K is greater than or equal to the signature of the knot K. This talk will review the background that makes the above description intelligible; we will illustrate with video of vortex knots and discuss other bounds related to the Rasmussen invariant. This talk is joint work with William Irvine.
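The two genus bounds in the abstract combine into the stated inequality. Schematically, writing r for the number of reconnections, g for the genus of the capped cobordism surface, and σ(K) for the Murasugi signature:

```latex
g \le \frac{r}{2}
\qquad\text{and}\qquad
g \ge \frac{\sigma(K)}{2}
\quad\Longrightarrow\quad
\frac{\sigma(K)}{2} \le \frac{r}{2}
\quad\Longrightarrow\quad
r \ge \sigma(K).
```

The first bound comes from the saddle-point model of reconnections; the second is the four-dimensional signature bound the talk reviews.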
Sinha: Given a collection of neurons, consider all of the subsets which are observed to fire at the same time. One can view this as a simplicial complex with missing facets (for example, if neurons 1 and 2 fire together and neuron 1 is never observed firing on its own). We review recent work on such codes by Curto, Giusti, Itskov and their collaborators, focusing on aspects of when such codes are consistent with a “place field” model, and the interplay between such codes and the neural network architecture which can realize them.
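The abstract's parenthetical example (neurons 1 and 2 fire together, but neuron 1 is never observed alone) describes a code that fails to be closed under taking subsets, i.e. fails to be a simplicial complex. A small sketch of that downward-closure check, with made-up codewords:

```python
from itertools import combinations

def is_simplicial(code):
    """A combinatorial code (a collection of sets of neuron indices) is a
    simplicial complex iff every subset of every codeword is also a codeword."""
    code = {frozenset(c) for c in code}
    return all(frozenset(s) in code
               for c in code
               for k in range(len(c))
               for s in combinations(c, k))

# The abstract's example: {1, 2} observed, but {1} alone never observed.
bad = [{1, 2}, {2}, set()]
good = [{1, 2}, {1}, {2}, set()]
# is_simplicial(bad) is False; is_simplicial(good) is True.
```

Codes with such "missing facets" are exactly the ones whose consistency with a place-field model the cited work investigates.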
Henselman: Topological data analysis (TDA) is an emergent field of mathematical data science specializing in complex, noisy, and high-dimensional data. While the elements of modern TDA have existed since the mid-1980s, applications over the past decade have seen a dramatic increase in systems analysis, engineering, medicine, and the sciences. Two of the primary challenges in this field regard modeling and computation: what do topological features mean, and how should one compute them? Remarkably, these questions remain largely open for some of the simplest structures considered in TDA — homological persistence modules and their barcode representatives — after a decade of intensive study. This talk will present a new approach to (zigzag) homological persistence, informed by recent work on the combinatorial foundations of matrix canonical forms. In relation to standard algebraic techniques, this treatment is strictly more abstract and quite often more practical, bearing directly on the functional interpretation of cycle representatives and their behavior under common functorial operations. As an application, we discuss a new algorithm to compute barcodes and generators, recently implemented in the Eirene library for computational persistence, and concomitant advances in speed and memory efficiency, up to several orders of magnitude over the standard algorithm.
Petri: Topology is one of the oldest and most relevant branches of mathematics, and it has provided an expressive and affordable language that is progressively pervading many areas of biology, computer science and physics. I will illustrate the type of novel insights that algebraic topological tools are providing for the study of the brain at the genetic, structural and functional levels. Using brain gene expression data, I will first construct a topological genetic skeleton, together with an appropriate simplicial configuration model, pointing to the differences in structure and function of different genetic pathways within the brain. Then, by comparing the homological features of structural and functional brain networks across a large age span, I will highlight the presence of dynamically coordinated compensation mechanisms, suggesting that functional topology is conserved over the depleting structural substrate, and test this conjecture on data coming from a set of different altered brain states (LSD, psilocybin, sleep).