Homotopy Theory, K-theory, and Topological Data Analysis
a conference in honor of Rick Jardine
Description
This workshop will focus on algebraic topology, specifically on algebraic 𝐾-theory, motivic homotopy theory, allied areas of category theory, and topological data analysis. The unifying theme is one of bringing order out of chaos. Given some complex mathematical object, such as vector bundles on an algebraic variety with bad geometry or some large and noisy dataset, how do we recover meaningful and informative numerical or algebraic invariants? Algebraic 𝐾-theory began as a way to import ideas from classical algebraic topology into algebraic geometry, but it was quickly realized that it had much wider applicability; sixty years later this field is still changing and growing. For example, it was instrumental in the very recent and highly celebrated resolution of the Telescope Conjecture. The search for good foundations quickly led to questions and answers in category theory, which in turn had very concrete applications in data analysis. The aim of the workshop is to survey this range of ideas, explore where we are, hear about what is new, and take a look at the future.
Here is a short survey of the four interconnected themes:
1.) Algebraic 𝐾-theory and related invariants.
This is an area with a rich history, beginning around 1960 with the attempts of Grothendieck and others to import techniques from the topological 𝐾-theory of vector bundles into algebraic geometry. With the work of Quillen, Segal, and Waldhausen in the 1970s, we realized that almost any category with a good way to take sums had a 𝐾-theory, and thus the tools and ideas had applicability not just in algebraic geometry, but in stable homotopy theory, and even in differential topology. This was good news, but it was also bad news: 𝐾-theory was at once a vital source of information and extremely difficult to compute. The search for computational techniques has gone in several directions, leading to connections with arithmetic geometry, especially étale cohomology, and to the development of trace methods, which connect 𝐾-theory with much more tractable invariants such as Hochschild and cyclic homology. This has led most recently to the computation of the 𝐾-theory of rings once thought unapproachable (such as rings with nilpotents), to new applications in the geometry of scissors congruences, as in the work of Zakharevich and her coauthors, and to applications in chromatic homotopy theory.
2.) Motivic and A¹-homotopy theory.
By the middle of the 1980s, Suslin had told us the 𝐾-theory of algebraically closed fields, at least if we inverted the characteristic of the field. The question then was how to leverage that information into more global calculations. One way to phrase this question is as follows. Algebraically closed fields are the geometric points in the étale topology of a scheme; is there some local-to-global process for computing the 𝐾-theory of the entire scheme? The answer has two parts. The first, due to Jardine, was a homotopy theory of presheaves of spectra which could be used to write down the relevant computational machine. Then, with Morel and Voevodsky's A¹-homotopy theory, we gained a homotopy theory of schemes themselves, similar to the classical homotopy theory of spaces. This unlocked computations and led to Voevodsky's solution of a number of celebrated conjectures, and to a Fields Medal. But it did more than that: it introduced an entirely new area of study, motivic homotopy theory, which, remarkably, served to interpolate between classical homotopy theory and much more algebraic, and hence much more computable, areas of homological algebra. As we began to explore this area, we came to realize that if some of the standard results from classical homotopy theory could be lifted to this larger context, then we could solve some long-standing problems in algebraic geometry, even in complex geometry. There have been significant advances in this program by Asok, Hopkins, and Wickelgren, among others, but there is certainly more to do.
3.) The homotopy theory of homotopy theory.
Since 𝐾-theory is an invariant with such wide applicability, there is the question of how to keep control of both the input and the output. Early on, Thomason realized that Segal's version of 𝐾-theory should give an equivalence between the category of small categories and the category of connective spectra, assuming one could come up with the right theory of equivalences. The output side had a very standard solution using Quillen's ideas on closed model categories, but the homotopy theory of categories turned out to be more open-ended. Quillen himself had known there was a notion of a “homotopy theory of homotopy theories”, but questions arising from 𝐾-theory made the need for a more complete theory evident. This led us in several directions. One, as in the work of Rezk, Bergner, and Riehl, led to a mature theory of the homotopy theory of categories. Another direction had new implications for 𝐾-theory. For example, Tabuada, Barwick, and their collaborators have shown that 𝐾-theory has a very explicit universal property. More concretely, Elmendorf-Mandell and others refined the various 𝐾-theory constructions, effectively answering the question of what kind of multiplicative structures we can expect on a 𝐾-theory spectrum.
4.) Topological Data Analysis.
Topological Data Analysis (TDA) is a rapidly growing field that uses ideas and methods from algebraic topology and algebraic geometry to extract meaningful structural information from complex and high-dimensional data. The first aim is to identify global topological shapes, connectivity, and multiscale patterns. Unlike traditional statistical and machine learning methods that focus on local or low-dimensional features, the methods of TDA are robust, in the sense that their results are stable under small perturbations and noise in the data. For example, persistent homology, a central tool in TDA, detects topological features, such as connected components, loops, and voids, that persist across multiple scales, making it particularly effective for understanding noisy, high-dimensional datasets where conventional methods struggle. Importantly for this workshop, recent TDA methods have drawn on deeper areas of homotopy theory and algebraic geometry. These newer techniques leverage equivariant homotopy theory and derived algebraic geometry to study finer invariants that capture symmetries and higher-order structures in the data. Algebraic geometry plays a growing role as well, as methods from sheaf theory and the study of algebraic varieties have been applied to model and analyze structured datasets, including those arising in sensor networks, representation learning, and data fusion. For instance, recent work on categorified persistence and stratified spaces extends the reach of TDA beyond classical persistence diagrams, allowing researchers to study richer geometric and algebraic structures hidden within data.
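To make the idea of persistence concrete, here is a minimal, self-contained sketch (purely illustrative, not part of the workshop program or any particular TDA library) of degree-zero persistent homology for a finite point cloud: a union-find structure is run over the edges of the Vietoris-Rips filtration, so that each connected component is born at scale 0 and dies at the scale where it merges into an older component. The function name h0_persistence and the sample data are hypothetical choices for this sketch.

import numpy as np

def h0_persistence(points):
    """Degree-0 persistence (birth, death) pairs for a point cloud under the
    Vietoris-Rips filtration. Every component is born at scale 0; it dies at
    the length of the shortest edge that merges it into an older component."""
    n = len(points)
    # Pairwise Euclidean distances give the filtration value of each edge.
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    edges = sorted((dist[i, j], i, j) for i in range(n) for j in range(i + 1, n))

    parent = list(range(n))

    def find(x):
        # Union-find with path compression.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    diagram = []
    for r, i, j in edges:                # edges in order of increasing scale
        ri, rj = find(i), find(j)
        if ri != rj:                     # two components merge; one dies at scale r
            parent[ri] = rj
            diagram.append((0.0, r))
    diagram.append((0.0, float("inf")))  # one component survives to every scale
    return diagram

# Two well-separated clusters: many short bars (noise inside each cluster)
# and one long bar that only dies when the two clusters finally merge.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.1, size=(20, 2)),
                   rng.normal(3.0, 0.1, size=(20, 2))])
print(sorted(h0_persistence(cloud), key=lambda bar: bar[1] - bar[0])[-3:])

Even this degree-zero calculation illustrates the central point of persistence: genuine structure shows up as long bars while noise shows up as short ones, and small perturbations of the points move the bar endpoints only slightly. In practice one would use a dedicated TDA library and also compute higher-degree features such as loops and voids.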