The Topology and Complexity of Deep Computations (Part II)
The goal of this talk is to show how the asymptotic limits of computational models, ranging from deep neural networks to numerical approximation algorithms, can be rigorously classified using techniques from Cp-theory.
Specifically, this two-part presentation aims to demonstrate that the topological closure of certain families of computations forms a Rosenthal compactum, the structure of which dictates the learnability and stability of the underlying algorithms.
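For orientation, here is a sketch of the standard definition underlying this claim (the Polish domain $X$ and the notation $\mathcal{F}$, $B_1(X)$ are conventions of this sketch rather than of the talk): a compact space $K$ is a Rosenthal compactum if it embeds, with the topology of pointwise convergence, into the space $B_1(X)$ of Baire class 1 real-valued functions on a Polish space $X$, that is,
$$ K \hookrightarrow B_1(X) \subseteq \mathbb{R}^X . $$
The families of computations above may be regarded, roughly, as uniformly bounded families $\mathcal{F}$ of real-valued functions, and the object of study is the pointwise closure $\overline{\mathcal{F}} \subseteq \mathbb{R}^X$.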
Part II focuses on applying the Bourgain-Fremlin-Talagrand Theorem to obtain an equivalent formulation of "PAC-learnability" in the topological context. We will exhibit computational examples witnessing the different levels of learnability identified by topological methods. We will also explore a measure-theoretic characterization of NIP and its consequences for deep computations in this framework.
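To indicate the flavor of the topological reformulation, here is a rough statement under simplifying assumptions (a uniformly bounded family $\mathcal{F}$ of continuous functions on a Polish space $X$; these hypotheses belong to the sketch, not to the talk itself):
$$ \mathcal{F} \ \text{has NIP} \iff \overline{\mathcal{F}}^{\,\mathbb{R}^X} \subseteq B_1(X) \iff \overline{\mathcal{F}} \ \text{is a Rosenthal compactum}. $$
For $\{0,1\}$-valued classes, NIP amounts to finite VC dimension and hence, under suitable measurability hypotheses, to PAC-learnability; the Bourgain-Fremlin-Talagrand Theorem contributes the sequential tools (angelicity of $B_1(X)$ in the pointwise topology) that make the closure $\overline{\mathcal{F}}$ tractable.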
This is joint work with Eduardo Dueñez, José Iovino, Tonatiuh Matos-Wiederhold, and Franklin D. Tall.

