The seminar will take place both in person and online until further notice. If you have any questions, please contact Adélie Garin, Celia Hacker, or Kathryn Hess.

### Program

### Abstracts

**A. Picot:** While the decomposition of real persistence modules is well understood, little can be found in the literature about morphisms of persistence modules.

First, we define the matrix of a morphism, which depends on the chosen bases of the persistence modules.

There is an equivalence between morphisms of persistence modules and persistence modules over R×{1,2}. This allows us to define the decomposition of a morphism via the Crawley-Boevey decomposition theorem. In this talk, we give examples of indecomposable morphisms, which depend strongly on the combinatorics of the barcodes of the persistence modules and on the matrix. We finally give the precise statement of a general decomposition theorem for monomorphisms.
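To fix notation (these are the standard definitions, not results specific to the talk): a morphism f : M → N of persistence modules is a family of linear maps commuting with the structure maps, and the two rows of the resulting ladder are precisely the slices R×{1} and R×{2} in the equivalence above.

```latex
% A morphism f : M -> N of persistence modules is a family of linear
% maps f_t : M_t -> N_t (one for each t in R) such that every square
%
%     M_s --M(s<=t)--> M_t
%      |                |
%     f_s              f_t
%      v                v
%     N_s --N(s<=t)--> N_t
%
% commutes:
\[
  f_t \circ M(s \le t) \;=\; N(s \le t) \circ f_s
  \qquad \text{for all } s \le t.
\]
```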

**E. Roldan:** We study a natural model of random 2-dimensional cubical complexes which are subcomplexes of an n-dimensional cube, and where every possible square (2-face) is included independently with probability p. Our main result exhibits a sharp threshold p=1/2 for homology vanishing as the dimension n goes to infinity. This is a 2-dimensional analogue of the Burtin and Erdős-Spencer theorems characterizing the connectivity threshold for random graphs on the 1-skeleton of the n-dimensional cube. Our main result can also be seen as a cubical counterpart to the Linial-Meshulam theorem for random 2-dimensional simplicial complexes. However, the models exhibit strikingly different behaviors. We show that if p > 1 − sqrt(1/2) ≈ 0.2929, then with high probability the fundamental group is a free group with one generator for every maximal 1-dimensional face. As a corollary, homology vanishing and simple connectivity have the same threshold. This is joint work with Matthew Kahle and Elliot Paquette.
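The model is straightforward to sample: fix the full 1-skeleton of the n-cube and keep each square (2-face) independently with probability p. A minimal sketch (the function names are my own, for illustration only):

```python
import itertools
import random

def sample_squares(n, p, rng=random):
    """Sample the 2-faces of a random 2-dimensional cubical complex:
    each square of the n-cube is kept independently with probability p.
    A square is determined by a pair of free coordinates i < j and a
    0/1 assignment to the remaining n-2 coordinates."""
    squares = []
    for i, j in itertools.combinations(range(n), 2):
        for rest in itertools.product((0, 1), repeat=n - 2):
            if rng.random() < p:
                squares.append((i, j, rest))
    return squares

def total_squares(n):
    """Total number of 2-faces of the n-cube: C(n,2) * 2^(n-2)."""
    return (n * (n - 1) // 2) * 2 ** (n - 2)
```

For n = 4 there are 6 · 4 = 24 squares in total, so the expected size of a sample is 24p.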

**A. Garin:** Methods of topological data analysis have been successfully applied in a wide range of fields to provide useful summaries of the structure of complex data sets in terms of topological descriptors, such as persistence diagrams, or barcodes. While there are many powerful techniques for computing topological descriptors, the inverse problem, i.e., recovering the input data from topological descriptors, has proved to be challenging. In this talk, I will consider specifically the inverse problem from barcodes to merge trees. I will describe a connection between the space of barcodes and symmetric groups, and show how to use it to study distributions of neurons modelled as trees, creating a bridge between the field of permutation statistics and TDA. I will then extend this symmetric group connection into a new way to coordinatize the space of barcodes, opening the door to a statistical and probabilistic study of the space of barcodes using a geometric group theory point of view.

This is joint work with B. Brück, J. Curry, J. DeSha, K. Hess, L. Kanari and B. Mallery.

**B. Rieck:** Diffusion is a general umbrella term for information propagation schemes. In this talk, I will briefly summarise our recent work on leveraging such schemes on structured data sets, i.e., graphs, and unstructured ones, i.e., point clouds. I will demonstrate how topology-driven diffusion schemes can improve classification performance and help in creating new hierarchical summaries of complex data sets.
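As a caricature of what a diffusion scheme on a graph looks like, here is a generic lazy random walk in numpy (illustrative only; this is not the topology-driven scheme from the talk, and the names are my own):

```python
import numpy as np

def diffuse(adj, signal, steps=10, alpha=0.5):
    """Lazy random-walk diffusion on a graph: at each step, mix every
    node's value with the average value of its neighbours.
    adj: symmetric 0/1 adjacency matrix; alpha: mixing rate."""
    deg = adj.sum(axis=1)
    P = adj / np.maximum(deg, 1)[:, None]      # row-stochastic transition matrix
    x = np.asarray(signal, dtype=float)
    for _ in range(steps):
        x = (1 - alpha) * x + alpha * (P @ x)  # lazy walk: keep part, diffuse part
    return x
```

On a 3-node path graph with a unit spike on the first node, one step with alpha = 0.5 spreads a quarter of the mass to the middle node.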

**M. Pegoraro:** In this seminar we will deal with tree-shaped representations of sequences of homology groups in dimension 0 arising from filtrations of simplicial complexes. Such objects arise naturally in different scientific fields and fit very well into the framework of Topological Data Analysis. The information they carry makes them an ideal alternative to persistence diagrams (PDs) in some situations in which PDs cannot be meaningfully used. Moreover, this information can be enriched in many fruitful ways, obtaining new tools to analyze data.

We also introduce a way to produce metric spaces of such objects, providing frameworks whose computational properties are good enough for a wide range of applications.

**D. Lee:** A common approach for describing classes of functions and probability measures on a topological space (the input space) is to construct a suitable map into a vector space, where linear methods can be applied to address both problems. When the input space is a space of paths, the path signature is such a map, which has drawn substantial attention from the stochastic analysis and machine learning communities. In this talk, we develop a generalized signature map when the input space is a space of maps from higher dimensional cubical domains and show that it extends many of the desirable algebraic and analytic properties of the path signature to higher dimensions. The key ingredient to our approach is topological; in particular, our starting point is a generalisation of K-T Chen’s path space cochain construction to the setting of cubical mapping spaces. This is joint work with Chad Giusti, Vidit Nanda, and Harald Oberhauser.
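For piecewise-linear paths, the first two levels of the path signature can be computed directly from Chen's identity; a minimal numpy sketch (the naming is my own):

```python
import numpy as np

def signature_level2(path):
    """Level-1 and level-2 signature of a piecewise-linear path in R^d.
    path: (k+1, d) array of vertices.  For a single linear segment with
    increment D, S1 = D and S2 = (D outer D) / 2; segments are combined
    with Chen's identity S2(x*y) = S2(x) + S1(x) (outer) S1(y) + S2(y)."""
    path = np.asarray(path, dtype=float)
    d = path.shape[1]
    S1 = np.zeros(d)
    S2 = np.zeros((d, d))
    for D in np.diff(path, axis=0):
        S2 = S2 + np.outer(S1, D) + np.outer(D, D) / 2  # Chen's identity
        S1 = S1 + D                                     # total increment
    return S1, S2
```

A quick sanity check: S1 is the total displacement, and the shuffle identity forces S2 + S2^T to equal the outer product of S1 with itself for any path.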

**A. Hickok:** In this talk, I will discuss a new approach for using persistent homology to infer the homology of an unknown Riemannian manifold (M, g) from a point cloud sampled from an arbitrary smooth probability density function. Standard distance-based filtered complexes, such as the Čech complex, often have trouble distinguishing noise from features that are simply small. Moreover, the standard Čech complex may only be homotopy-equivalent to M for a very small range of filtration values. I address this problem by defining a family of “density-scaled filtered complexes” that includes a density-scaled Čech complex and a density-scaled Vietoris-Rips complex. The density-scaled Čech complex is homotopy-equivalent to M for filtration values in an interval whose starting point converges to 0 in probability as the number of points N → ∞ and whose ending point approaches infinity as N → ∞. The density-scaled filtered complexes also have the property that they are invariant under conformal transformations, such as scaling. I will also talk about my implementation of a filtered complex that approximates the density-scaled Vietoris-Rips complex. The implementation is stable (under conditions that are almost surely satisfied) and designed to handle outliers in the point cloud that do not lie on M. As applications, I use the implementation to identify clusters in a point cloud whose clusters have different densities, and I apply it to a time-delay embedding of the Lorenz dynamical system.
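To caricature the density-scaling idea with a toy of my own (this is not the construction from the talk): estimate a local scale at each point and conformally rescale pairwise distances before building a Rips-type complex, so that dense regions are magnified rather than drowned out by sparse ones.

```python
import numpy as np

def density_scaled_distances(points, k=5):
    """Toy conformal rescaling of a point cloud's metric.  The local
    scale at each point is its distance to the k-th nearest neighbour;
    each pairwise distance is divided by the geometric mean of the two
    local scales.  Illustrative only -- not the density-scaled complex
    defined in the talk."""
    X = np.asarray(points, dtype=float)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    knn = np.sort(D, axis=1)[:, k]          # distance to k-th nearest neighbour
    scale = np.sqrt(np.outer(knn, knn))     # geometric mean of local scales
    return D / scale                        # dense regions get magnified
```

On an evenly spaced point cloud the local scales are constant, so the rescaled distances agree with the original ones up to a global factor.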

**J. Scott:** We give a recipe for defining a Wasserstein-type metric on the objects of any abelian category that satisfies a Krull-Schmidt type condition. This is joint work with Peter Bubenik (U Florida) and Donald Stanley (U Regina).

**H. Adams:** The Gromov-Hausdorff distance between two metric spaces is an important tool in geometry, but it is difficult to compute. For example, the Gromov-Hausdorff distance between unit spheres of different dimensions is unknown in nearly all cases. I will introduce recent work by Lim, Mémoli, and Smith that bounds the Gromov-Hausdorff distance between spheres from below using Borsuk-Ulam theorems. We improve these lower bounds by connecting this story to Vietoris-Rips complexes, providing new generalizations of the Borsuk-Ulam theorem. This is joint work in a polymath-style project with many people, most of whom are currently or formerly at Colorado State, Ohio State, Carnegie Mellon, or Freie Universität Berlin.

**C. Hirsch:** This talk introduces weak and strong simplicial percolation as models for continuum percolation based on random simplicial complexes in Euclidean space. Weak simplicial percolation is defined through infinite sequences of k-simplices sharing a (k-1)-dimensional face. In contrast, strong simplicial percolation demands the existence of an infinite k-surface, thereby generalizing the lattice notion of plaquette percolation. We discuss the sharp phase transition for weak simplicial percolation and derive several relationships between weak simplicial percolation, strong simplicial percolation, and classical vacant continuum percolation. We will also draw connections to a variety of topological models for percolation that have been proposed recently in the literature.

This talk is based on joint work with Daniel Valesin.

**V. Lebovici:** Euler calculus techniques — integration of constructible functions with respect to the Euler characteristic — have led to important advances in topological data analysis. For instance, the (constructible) Radon transform has provided a positive answer to the following question: are two subsets of R^n with the same persistent homology in all degrees and for all height filtrations necessarily equal?

In this talk, I will introduce integral transforms combining Lebesgue integration and Euler calculus for constructible functions. On the theoretical side, these objects enjoy invariance and regularity properties, while in practice they appear as efficiently computable vectorisations of weighted simplicial complexes in the form of multivariate continuous functions. Focusing on one example, the Euler-Fourier transform, I will show various examples illustrating that it is strictly more discriminating than its classical analogue. For persistence addicts, I will present how these transforms yield a generalization of Govc and Hepworth’s persistent magnitude to multi-parameter persistence modules.

**I. Yoon:** A central challenge in topological data analysis is the interpretation of barcodes. The classical approach is to build an explicit map to a well-understood model or space and to leverage functoriality. However, we often lack such maps in real data. I will describe one possible way of addressing this issue that uses cross-system dissimilarity between the observations and the reference system. I will then share some preliminary results of using this method to study neural encoding and propagation on real and simulated data.

**F. Unger:** Recent works in Topological Data Analysis have analyzed biological neural networks by modelling them as directed graphs and studying their flag complexes. The significance of their findings was always assessed against comparable Erdős-Rényi graphs (ER graphs), despite these lacking a flag complex large enough to support more complex topological structures in the first place. This raises the question: are these topological findings just a byproduct of the large number of simplices? We propose a different null model than ER graphs of comparable size and density: we additionally require a comparable number of simplices in the flag complex. In this talk we present a first step towards that: while also retaining the underlying undirected graph, we develop a Markov-chain-Monte-Carlo-based sampling algorithm able to sample uniformly from our proposed null model. As a first result, we show that the connectome of C. elegans not only gives rise to a higher number of simplices than comparable ER graphs (already known), but also has more topological features, not only compared to an ER graph but also compared to our (connectivity-restrained) null model. This suggests significance and purpose of topological methods beyond simplex-count analysis.
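To illustrate the general shape of such a sampler with a schematic of my own (this is not the authors' algorithm): flipping edge orientations automatically preserves the underlying undirected graph, and a Metropolis-style step can additionally enforce a constraint on a simplex statistic, here (as a toy hard constraint) that the directed 2-simplex count stays exactly the same.

```python
import random

def count_2_simplices(edges, n):
    """Count directed 2-simplices of the directed flag complex: ordered
    triples (a, b, c) of distinct vertices with edges a->b, b->c and a->c
    (i.e. transitively oriented triangles)."""
    E = set(edges)
    return sum(1
               for a in range(n) for b in range(n) for c in range(n)
               if len({a, b, c}) == 3
               and (a, b) in E and (b, c) in E and (a, c) in E)

def mcmc_step(edges, n, rng=random):
    """One toy chain step: flip the orientation of a uniformly chosen
    edge (keeping the undirected graph fixed) and accept the flip only
    if the 2-simplex count is unchanged."""
    E = list(edges)
    i = rng.randrange(len(E))
    u, v = E[i]
    before = count_2_simplices(E, n)
    E[i] = (v, u)                      # proposed orientation flip
    if count_2_simplices(E, n) == before:
        return E                       # accept
    E[i] = (u, v)                      # reject: restore orientation
    return E
```

On a cyclically oriented triangle every single flip creates a transitive triangle and so is rejected, which already hints at why uniform sampling from such a constrained model is nontrivial.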