B1, 6th floor
• Simplicial complexes for analysis of neural data
University of Pennsylvania
• Using persistent homology to reveal hidden information in neural data
• Knot theory and data distribution
• The topology of neural networks and their activation patterns
• Trees of nuclei and bounds on the number of triangulations of S3
Jozef Stefan Institute
• Approximating persistent homology in Euclidean space through collapses
• Topological data analysis for materials science
Australian National University
• Topology in the furnace: TDA as a diagnostics tool for process control systems
• Algebraic stability of zigzag persistence modules
Magnus Bakke Botnan
Giusti: Graphs have proven to be an exceptional data structure through which to address a broad range of problems in neuroscience. However, they are intrinsically limited to the study of dyadic relationships, as represented by an edge (or its absence) between two population elements. In the brain, the fundamental functional units of interest often involve large groups of basic units, which suggests that graph models are insufficient for their study. Simplicial complexes offer a natural way to address this concern, with the added benefit of providing a bridge to powerful topological tools. A central difficulty in using simplicial methods, however, is constructing, from observations, complexes whose topological or combinatorial structure tells us something useful about the underlying neural system. Here, we discuss a pair of complexes, the order complex and the coincidence complex, that have proven effective for understanding neural data across various modalities, and describe how to measure interesting structure in them.
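As a toy illustration of the order-complex idea (this sketch is ours, not the speaker's; the matrix values and function names are illustrative), one adds edges in decreasing order of pairwise similarity and tracks the cliques, i.e. simplices, present at each step. Only the ordering of the entries matters, not their magnitudes:

```python
from itertools import combinations

def order_complex_filtration(sim, k_max=3):
    """Add edges of a weighted graph in decreasing order of similarity and,
    after each insertion, count the cliques (simplices) up to size k_max.
    Toy-sized inputs only: clique enumeration here is brute force."""
    n = len(sim)
    edges = sorted(
        ((sim[i][j], (i, j)) for i, j in combinations(range(n), 2)),
        key=lambda t: -t[0],
    )
    adj = {i: set() for i in range(n)}
    snapshots = []
    for _, (i, j) in edges:
        adj[i].add(j); adj[j].add(i)
        cliques = [c for r in range(1, k_max + 1)
                   for c in combinations(range(n), r)
                   if all(v in adj[u] for u, v in combinations(c, 2))]
        snapshots.append(((i, j), len(cliques)))
    return snapshots

# Toy "correlation" matrix for 4 neurons (values purely illustrative):
sim = [[0, .9, .2, .4],
       [.9, 0, .8, .1],
       [.2, .8, 0, .7],
       [.4, .1, .7, 0]]
steps = order_complex_filtration(sim)
```

Because the construction depends only on the order of the matrix entries, it is insensitive to monotone transformations of the similarity measure.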
Spreemann: Mammalian navigation is aided by place cells, which are neurons that fire preferentially when the animal is in certain regions of space. It is known that the firing activities of these and related neurons are not governed solely by spatial position, but also by head direction, theta wave phase, sensory stimuli, etc., and probably also by further unknown influences. Such covariates are thought of as being reflected in the animal’s “state space”, and knowledge of its topological properties can reveal hidden information about a priori unknown covariates.
We propose a method wherein an approximation of such a state space is built from spike train recordings of neurons. Persistent homology is then used to reveal properties of the space. Through an inference process, we remove the contributions of known covariates to the spike trains, and thus to the reconstructed state space. After all known covariates have been accounted for, persistent homology reveals properties of any potential remaining unknown ones.
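A common first step in constructing such a state space (our illustrative sketch, not the speaker's pipeline) is to bin each neuron's spike train and treat the resulting population vectors, one per time bin, as a point cloud, whose persistent homology can then be computed with a TDA package such as Ripser:

```python
def bin_spikes(spike_times, t_max, bin_width):
    """Bin each neuron's spike times into counts. The population vectors
    (one per time bin) form a point cloud approximating the state space."""
    n_bins = int(t_max / bin_width)
    counts = []
    for times in spike_times:
        c = [0] * n_bins
        for t in times:
            if 0 <= t < t_max:
                c[int(t / bin_width)] += 1
        counts.append(c)
    # Transpose: one population vector per time bin.
    return [tuple(c[b] for c in counts) for b in range(n_bins)]

# Two toy neurons recorded for 4 time units, binned at width 1:
cloud = bin_spikes([[0.1, 0.5, 2.2], [1.3, 1.7, 3.9]], t_max=4.0, bin_width=1.0)
```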
Eppendahl: Back in the nineties, Bill Roscoe gave a scalable algorithm for passing updates around a distributed database and proved that the algorithm maintains database consistency. The proof goes via an algebraic structure that obeys the rack law used in knot theory. We observe that when the paths of updates are drawn in space-time, the connection with knot theory stands out clearly. This leads to a simple topological proof of the original algorithm's correctness and to extensions of the algorithm. The applications to concurrency actually suggest a weaker theory of demi-racks, which may be related to directed homotopies (although I'm not sure about this). Both the knot theory and the database model are very elementary, so computer scientists should be able to follow the maths and mathematicians, the computer science. The interest lies in the simplicity of the topological picture. (Work carried out at the Institute of Cybernetics, Tallinn.)
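For readers unfamiliar with racks: the standard example is the conjugation rack, a ◁ b = b⁻¹ab, which satisfies the two rack axioms (right-translations are bijections, and the operation is right self-distributive). The sketch below verifies both axioms by brute force over S3; it illustrates the algebra only, not Roscoe's algorithm:

```python
from itertools import permutations

def compose(p, q):
    """Permutation composition: (p ∘ q)[i] = p[q[i]]."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def rack_op(a, b):
    """Conjugation rack: a ◁ b = b⁻¹ a b."""
    return compose(inverse(b), compose(a, b))

G = list(permutations(range(3)))  # the symmetric group S3

# Rack law (right self-distributivity): (a ◁ b) ◁ c == (a ◁ c) ◁ (b ◁ c)
self_distributive = all(
    rack_op(rack_op(a, b), c) == rack_op(rack_op(a, c), rack_op(b, c))
    for a in G for b in G for c in G
)

# Each right-translation a ↦ a ◁ b is a bijection of G.
bijective = all(len({rack_op(a, b) for a in G}) == len(G) for b in G)
```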
Younan: In 1962, W. Tutte showed that the number of 2d triangulations (simplicial decompositions) of the sphere S2 with t triangles grows exponentially with t. The equivalent question in 3d remains open. We introduce a notion of nucleus (a 3d triangulation with boundary such that all nodes are external and each internal face has at most 1 external edge). A nucleus is typically a triangulation with knots along its internal edges. We show that every triangulation can be built from trees of nuclei. This leads to a new reformulation of the above question: we show that if the number of rooted nuclei with t tetrahedra grows exponentially with t, then so does the number of all triangulations of S3.
Skraba: Persistent homology is a central tool in topological data analysis. It describes various structures such as components, holes, voids, etc. via a barcode (or a persistence diagram), with longer bars representing "real" structure and shorter bars representing "noise." A natural question is: how long are the bars we can expect to see from data with no structure, i.e. pure noise? In this talk, I will introduce some recent results regarding the persistent homology of random processes, specifically a homogeneous Poisson process. In particular, I will describe how we obtain upper and lower bounds on the length of the longest bar we expect to see if our input is "noise."
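Sampling the null model in question is straightforward (our sketch; computing its persistent homology would be done with a TDA package and is not shown). A homogeneous Poisson process of intensity λ on a region draws a Poisson(λ · area) number of points and places them independently and uniformly:

```python
import math, random

def sample_poisson_box(rate, side, rng):
    """Sample a homogeneous Poisson process of intensity `rate` on the
    square [0, side]^2: the point count is Poisson(rate * side**2),
    positions are independent and uniform."""
    mean = rate * side * side
    # Knuth's method for a Poisson variate (fine for modest means;
    # in practice one would use numpy.random.poisson instead).
    n, p, limit = 0, 1.0, math.exp(-mean)
    while p > limit:
        n += 1
        p *= rng.random()
    n -= 1
    return [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]

rng = random.Random(0)
points = sample_poisson_box(rate=50.0, side=1.0, rng=rng)
```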
Spreemann: The inclusive nature of the widely used Čech filtration can, for computational reasons, preclude its use in certain situations. Imagine for example points sampled nicely from a circle, together with a "lump" of points collected very densely somewhere. While the lump contributes nothing of interest to homology, its presence will cause a complete subcomplex on many vertices to form at a very early stage in the filtration and remain in the persistence computation at all later scales. We propose a method for coarsening the covering sets of the Čech complex, which yields a sequence of nerves connected by simplicial maps. While the coarsened covers are no longer good, we show that their associated persistence module approximates that of ordinary Čech persistence. Joint work with Magnus Botnan.
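The nerve construction underlying this can be sketched in a few lines (a toy version of ours; the cover below is illustrative). A k-simplex is included whenever k+1 cover sets have a common point; three pairwise-intersecting sets with empty triple intersection yield a hollow triangle, i.e. a circle:

```python
from itertools import combinations

def nerve(cover, max_dim=2):
    """Nerve of a cover: one (k-1)-simplex for every k cover sets with a
    common point, up to dimension max_dim. `cover` is a list of sets of
    sample indices."""
    simplices = []
    for k in range(1, max_dim + 2):
        for idxs in combinations(range(len(cover)), k):
            if set.intersection(*(cover[i] for i in idxs)):
                simplices.append(idxs)
    return simplices

# Three sets covering a toy circle-like space; coarsening merges cover
# sets and induces simplicial maps between the resulting nerves.
cover = [{0, 1, 2}, {2, 3}, {3, 4, 0}]
simps = nerve(cover)
```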
Robins: Topological data analysis provides mathematically rigorous computational tools for quantifying connectivity in geometric data sets. The primary mathematical theory, called persistent homology, measures topological quantities such as components, loops, and higher-dimensional cycles as a function of a geometric parameter. A central lesson from TDA is that topological structure in data can only be robustly quantified by studying how it varies over a sequence of length-scales.
Recent applications of persistent homology in materials science include:
• the characterisation of local configurations of spheres in bead packings, and atomic arrangements in fluids, where it enables a clearer picture of phase transitions;
• analysis of Rayleigh-Bénard convection patterns to detect and quantify departures from the Boussinesq approximation;
• generating topologically consistent grain partitions and pore networks from X-ray CT images of porous and granular materials to enable better modelling of fluid transport.
Further applications in engineering include coverage in sensor networks, robot motion planning, and image processing. This talk will focus on the interpretation of persistence diagrams in the context of porous and granular materials in order to demonstrate exactly what sort of information can be obtained from TDA and how it can lead to new physical insights.
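The phrase "components as a function of a geometric parameter" can be made concrete with a small sketch (ours, purely illustrative): 0-dimensional persistence of a point cloud is computed by sorting pairwise distances and merging components with a union-find structure; each merge at scale r kills a bar born at 0:

```python
from itertools import combinations
import math

def zeroth_persistence(points):
    """0-dimensional persistence of a point cloud: every merge of two
    components at scale r kills a bar [0, r). Returns the death times."""
    parent = list(range(len(points)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(len(points)), 2)
    )
    deaths = []
    for r, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(r)
    return deaths

# Two well-separated pairs: two short bars (within-pair merges) and one
# long bar death when the pairs join.
deaths = zeroth_persistence([(0, 0), (0, 1), (10, 0), (10, 1)])
```

The gap between the short and the long death times is exactly the kind of length-scale separation that distinguishes signal from noise in a persistence diagram.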
Vejdemo-Johansson: Steel smelting is a high-volume, high-throughput industry, where the smallest performance gains translate into large dividends. Model construction to predict conditions inside the furnace is a centrally important part of process control. Machine learning and statistical methods have been shown to improve on purely metallurgical models, but in either case the failure modes of the model are poorly understood, and tools for analyzing them are not well developed.
We work in collaboration with Outokumpu Stainless on their electric-arc scrap furnace to analyze and improve their temperature prediction models. Temperature prediction is a particularly important model to improve: reference measurements can be made by inserting probes, but these are costly, and measuring too early means more probes will be needed, while measuring too late risks overheating the steel and spoiling the entire batch.
Based on work and ideas from Anthony Bak and Ayasdi, and in collaboration with Ayasdi, we are studying the use of the Mapper algorithm to construct intrinsic models of the fibres (preimages) of failed predictions. These models help classify the different modes of model failure, and direct attention toward improvements or toward learning compensation transforms that increase the precision of temperature prediction.
In this talk, I will describe the approach we take for modeling and classifying failure modes, and give some examples from our ongoing study of the steel smelting data. The talk will assume no previous knowledge of topological data analysis, and will explain Mapper completely and accessibly.
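For orientation, the essential Mapper recipe can be sketched in a few lines (a deliberately toy 1-D version of ours, with hypothetical names and parameters; it is not Ayasdi's implementation). Cover the range of a filter function with overlapping intervals, cluster each preimage, and connect clusters that share points:

```python
def mapper_1d(points, filter_vals, n_intervals=4, overlap=0.25, eps=1.5):
    """Toy Mapper for 1-D data: overlapping-interval cover of the filter
    range, single-linkage clustering of each preimage at threshold `eps`,
    and an edge between clusters sharing a point. Returns (nodes, edges)."""
    lo, hi = min(filter_vals), max(filter_vals)
    length = (hi - lo) / n_intervals
    nodes = []
    for k in range(n_intervals):
        a = lo + k * length - overlap * length
        b = lo + (k + 1) * length + overlap * length
        members = [i for i, f in enumerate(filter_vals) if a <= f <= b]
        # Single-linkage clustering by repeated merging of near clusters.
        clusters = [{i} for i in members]
        merged = True
        while merged:
            merged = False
            for x in range(len(clusters)):
                for y in range(x + 1, len(clusters)):
                    if any(abs(points[i] - points[j]) <= eps
                           for i in clusters[x] for j in clusters[y]):
                        clusters[x] |= clusters.pop(y)
                        merged = True
                        break
                if merged:
                    break
        nodes.extend(clusters)
    edges = [(x, y) for x in range(len(nodes)) for y in range(x + 1, len(nodes))
             if nodes[x] & nodes[y]]
    return nodes, edges

# Eight evenly spaced points with the identity as filter: two overlapping
# intervals give two clusters joined by one edge.
points = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
nodes, edges = mapper_1d(points, points, n_intervals=2, overlap=0.25, eps=1.5)
```

In the failure-mode application, the filter would be the prediction error and the clusters would group furnace runs that fail in similar ways.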
Botnan: The stability theorem for persistent homology is a central result in topological data analysis. While the original formulation of the result concerns the persistence barcodes of R-valued functions, the result was later cast in a more general algebraic form, in the language of persistence modules and interleavings. In this talk, we discuss an analogue of this algebraic stability theorem for zigzag persistence modules. To do so, we functorially extend each zigzag persistence module to a two-dimensional persistence module, and establish an algebraic stability theorem for these extensions. If time permits, we discuss how this idea can be extended to define interleavings of persistence modules over an arbitrary poset.
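For readers unfamiliar with the language of the abstract, the standard definition of an interleaving (the one appearing in the algebraic stability theorem) is:

```latex
An $\varepsilon$-interleaving between persistence modules
$M, N \colon (\mathbb{R}, \le) \to \mathrm{Vect}$ consists of families of maps
\[
  \varphi_t \colon M_t \to N_{t+\varepsilon},
  \qquad
  \psi_t \colon N_t \to M_{t+\varepsilon},
\]
commuting with the internal maps of $M$ and $N$, such that for all $t$
\[
  \psi_{t+\varepsilon} \circ \varphi_t = M(t \le t+2\varepsilon),
  \qquad
  \varphi_{t+\varepsilon} \circ \psi_t = N(t \le t+2\varepsilon).
\]
The algebraic stability theorem then bounds the bottleneck distance between
barcodes by the interleaving distance
$d_I(M, N) = \inf\{\varepsilon \ge 0 : M \text{ and } N \text{ are } \varepsilon\text{-interleaved}\}$.
```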