Blueleaf
Mathematics

*Two Decades in the Making*

# The Langlands Proof Arrives


## Summary

Nine mathematicians, nearly 1,000 pages, five papers, and over two decades of sustained effort. In 2025, Dennis Gaitsgory's team published a proof of the *geometric Langlands conjecture*, one of the deepest structural claims in modern mathematics. The achievement capped a program that Gaitsgory had pursued since his graduate work in the 1990s. Few people can follow what was proved; that it took a generation speaks to its difficulty. The year also brought a controversial claim on the Navier-Stokes equations, advances in computer-verified proofs, and persistent mysteries in number theory. Mathematics revealed itself as a human endeavor: proofs take decades, controversies persist, and the boundary between known and unknown shifts slowly, grudgingly.

## The Geometric Langlands Program

The *Langlands program*, proposed by Robert Langlands in 1967, posits deep connections between number theory, *representation theory*, and geometry. The geometric Langlands conjecture transposes these ideas from number fields to *function fields* of algebraic curves, trading arithmetic for geometry while preserving the essential structure.

To understand what was proved, start with an algebraic curve: a one-dimensional shape like a circle or something more intricate with multiple holes. A *principal bundle* attaches a geometric object (specified by a group $G$) to each point of the curve, a family of symmetry spaces varying smoothly over the surface.

The conjecture concerns two ways of studying such bundles. One uses *D-modules*, which encode systems of differential equations, capturing dynamics. The other uses *local systems*, representations of the curve's fundamental group that record how loops wind around holes. These seem like different mathematical worlds: one about calculus, the other about topology. The geometric Langlands conjecture asserts they are the same. Every D-module has a corresponding local system, and vice versa. The dictionary is exact.
Dennis Gaitsgory, who led the proof effort over more than two decades, received the 2025 Breakthrough Prize in Mathematics.[^1] The proof establishes a correspondence between two categories of objects on an algebraic curve, with D-modules on the moduli stack of principal bundles on one side and *quasi-coherent sheaves* on the moduli stack of local systems on the other. The correspondence can be expressed as an equivalence of *derived categories*:

$$D\text{-mod}(\text{Bun}_G(X)) \simeq \text{QCoh}(\text{LocSys}_{\check{G}}(X))$$

Here $G$ is a *reductive group*, $\check{G}$ its *Langlands dual*, and $X$ an algebraic curve. Intuitively: two different mathematical languages for describing geometric symmetries turn out to be perfectly translatable. The proof constructs that dictionary. Building it required new infrastructure, including a theory of *singular support* that captures how D-modules degenerate.

The result does not solve problems in number theory directly; that awaits the harder arithmetic case. But it validates the geometric approach and provides tools that may extend to Langlands's original number-theoretic vision.

## The Formalization Revolution

The Langlands proof spans nearly a thousand pages across five papers. Such work traditionally requires years of expert scrutiny. A parallel development in 2025 offered a different path: what if machines could check proofs by reducing them to axiomatic foundations?

*Computer proof assistants* like *Lean*, *Coq*, and *Isabelle* do exactly that, verifying proofs all the way down to the axioms, and in 2025 they advanced from academic curiosities to practical tools. The *Mathlib*[^2] library for Lean 4 now contains formalizations of substantial portions of undergraduate and graduate mathematics. Key developments included the formalization of *Brownian motion*, the *Ionescu-Tulcea theorem* for Markov chains, and the *Mason-Stothers theorem* (a function field analogue of the abc conjecture).
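Machine-checked mathematics is concrete: a statement and its proof are written in a formal language, and the kernel verifies every step. A toy Lean 4 example, purely illustrative and not one of the results mentioned above:

```lean
-- Commutativity of addition on the natural numbers, proved by induction.
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => simp
  | succ n ih => rw [Nat.add_succ, ih, Nat.succ_add]
```

Lean accepts the proof only if every rewrite reduces the goal to a form the kernel can certify; a single wrong step is rejected outright.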
AI systems for theorem proving achieved new benchmarks. The *APOLLO* framework combines Lean with large language models to automatically repair proofs, achieving 84.9% accuracy on the miniF2F benchmark[^3] while keeping sampling budgets low. *Semantic search engines* like *LeanExplore* and *Lean Finder* help mathematicians navigate the expanding library of formalized mathematics.

The combination of formal verification and AI assistance points toward a future where mathematical proofs are routinely machine-checked. Whether this changes practice fundamentally or merely adds a verification layer will depend on mathematicians, not machines.

## The ABC Conjecture Controversy Continues

This future arrives unevenly. Some proofs can be formalized today; others resist. The most contentious controversy in modern mathematics concerns a proof that machines might one day resolve, if humans cannot agree first.

The *abc conjecture*, formulated in 1985, relates the prime factors of three numbers satisfying $a + b = c$ to their magnitudes. The *radical* $\text{rad}(n)$ is the product of the distinct prime factors of $n$. For example, $\text{rad}(72) = \text{rad}(2^3 \cdot 3^2) = 2 \cdot 3 = 6$. The conjecture states that for any $\epsilon > 0$, there exist only finitely many triples $(a, b, c)$ of coprime positive integers with $a + b = c$ such that:

$$c > \text{rad}(abc)^{1+\epsilon}$$

The bound captures the intuition that $a + b = c$ cannot happen too often when $a$, $b$, and $c$ are built from small prime factors. The conjecture implies many deep results, including *Fermat's Last Theorem* for sufficiently large exponents.

Shinichi Mochizuki claimed a proof in 2012 using his *Inter-universal Teichmüller Theory* (IUT), a framework so novel and idiosyncratic that few mathematicians have been able to evaluate it. Mochizuki, working in near-isolation in Kyoto, has refused to travel or engage with critics in the customary ways.
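The radical and the abc inequality are easy to experiment with numerically. A minimal Python sketch of ours (function names are our own) searches small coprime triples for the rare cases where $c > \text{rad}(abc)$:

```python
from math import gcd, log

def rad(n: int) -> int:
    """Product of the distinct prime factors of n (rad(1) = 1)."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * n if n > 1 else r

# Search coprime triples a + b = c with c > rad(abc).  Since a, b, c are
# pairwise coprime, rad(abc) = rad(a) * rad(b) * rad(c).  The "quality"
# log(c)/log(rad(abc)) exceeds 1 exactly for these rare "abc hits".
hits = []
for c in range(3, 500):
    for a in range(1, c // 2 + 1):
        b = c - a
        if gcd(a, b) == 1:
            r = rad(a) * rad(b) * rad(c)
            if c > r:
                hits.append((a, b, c, round(log(c) / log(r), 3)))

print(hits[0])  # the smallest hit: 1 + 8 = 9, with rad(72) = 6 < 9
```

The conjecture says that for any fixed $\epsilon > 0$, only finitely many triples have quality above $1 + \epsilon$; the search above merely shows how scarce even quality-above-1 triples are.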
In 2018, Peter Scholze and Jakob Stix (Scholze being one of the most celebrated mathematicians of his generation) identified what they consider a fundamental gap. Mochizuki disputed their critique. The impasse continues.

Kirti Joshi at the University of Arizona has developed an independent approach through *Arithmetic Teichmüller Spaces*.[^4] His papers, revised through February 2025, claim to establish the abc conjecture using more accessible mathematics that nonetheless incorporates key ideas from Mochizuki's program. Joshi argues his approach offers independent verification. Experts have not yet endorsed either claimed proof. Contention persists, and officially the abc conjecture remains open.

## The Navier-Stokes Claim

The abc controversy remains unresolved. In 2025, a claim on another Millennium Prize problem emerged, one concerning fluid dynamics rather than number theory.

One of the seven *Millennium Prize Problems*[^5] asks whether smooth solutions to the three-dimensional Navier-Stokes equations exist globally in time for arbitrary smooth initial data. The equations describe viscous fluid flow:

$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\nabla p + \nu \Delta \mathbf{u}$$

$$\nabla \cdot \mathbf{u} = 0$$

Here $\mathbf{u}$ is velocity, $p$ is pressure, and $\nu$ is viscosity. The first equation balances acceleration with pressure gradients and viscous diffusion. The second enforces incompressibility. Whether solutions can develop singularities from smooth initial conditions is unknown.

James Glimm and Jarret Petrillo posted a preprint claiming to resolve the problem in the affirmative. Their approach constructs smooth solutions as the mean values of general *weak solutions*, characterized as entropy-production-minimizing solutions. The construction occurs in a finite periodic domain. The claim has not been verified by the mathematical community.
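The incompressibility constraint $\nabla \cdot \mathbf{u} = 0$ has a classical two-dimensional illustration: any velocity field derived from a *stream function* $\psi$ via $u = \partial\psi/\partial y$, $v = -\partial\psi/\partial x$ is automatically divergence-free. A small pure-Python check of ours (the particular $\psi$ and grid are arbitrary choices):

```python
from math import sin, cos, pi

N, L = 64, 2 * pi          # periodic grid: N x N points on [0, L)^2
h = L / N

def psi(x, y):
    # Arbitrary smooth periodic stream function, chosen for illustration.
    return sin(x) * cos(y)

def u(x, y):               # discrete  d(psi)/dy
    return (psi(x, y + h) - psi(x, y - h)) / (2 * h)

def v(x, y):               # discrete -d(psi)/dx
    return -(psi(x + h, y) - psi(x - h, y)) / (2 * h)

# Central-difference divergence du/dx + dv/dy; the mixed second differences
# cancel exactly, so the result is zero up to floating-point rounding.
max_div = 0.0
for i in range(N):
    for j in range(N):
        x, y = i * h, j * h
        dudx = (u(x + h, y) - u(x - h, y)) / (2 * h)
        dvdy = (v(x, y + h) - v(x, y - h)) / (2 * h)
        max_div = max(max_div, abs(dudx + dvdy))

print(f"max |div u| on the grid: {max_div:.2e}")
```

Constructing divergence-free fields is the easy part; the open problem is whether the full nonlinear evolution can push such smooth fields into a singularity.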
Previous claimed proofs of Millennium Prize problems have been retracted or found lacking. Verification requires community scrutiny through seminars, workshops, and formal peer review, a process that typically takes years. Until then, the problem remains officially unsolved.

## Sunflowers in Combinatorics

While millennium problems attract headlines, progress continued on questions first posed decades ago. Combinatorics, the mathematics of counting and arrangement, advanced on a conjecture from 1960.

A *sunflower* is a collection of sets where every pair has the same intersection, called the *core*. The *petals* are what remain after removing the core. Erdős and Rado proved in 1960 that any collection of more than $k!(r-1)^k$ sets of size $k$ contains a sunflower with $r$ petals. The *sunflower conjecture* asks whether the bound can be improved to $c^k$ for some constant $c$ depending on $r$.

Anup Rao published a comprehensive survey of sunflower theory and its applications. The paper presents a short elementary proof of the best known bounds for the *robust sunflower lemma*,[^6] which has found applications in circuit complexity, data structure lower bounds, and property testing. Recent progress, building on work by Alweiss, Lovett, Wu, and Zhang in 2019, has reduced the bound to approximately $(C \log k)^k$ for a constant $C$. This represents exponential improvement over the original Erdős-Rado bound but still falls short of the conjectured $c^k$ bound. The sunflower conjecture remains one of the most accessible yet stubborn problems in combinatorics.

## Prime Gaps

The sunflower conjecture remains open despite recent progress. Problems in combinatorics often resist for decades, their apparent simplicity masking deep structure. Number theory hosts the oldest such problems. The distribution of prime numbers has fascinated mathematicians for millennia.
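The raw data behind these questions is easy to generate. A sieve-based Python sketch of ours tabulates gaps between consecutive primes below 100,000, including the pairs that differ by 2:

```python
def primes_below(n: int) -> list[int]:
    """Sieve of Eratosthenes: all primes strictly below n."""
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # Strike out multiples of p starting at p*p.
            sieve[p * p::p] = [False] * len(range(p * p, n, p))
    return [i for i, is_prime in enumerate(sieve) if is_prime]

ps = primes_below(100_000)
gaps = [q - p for p, q in zip(ps, ps[1:])]
twins = sum(1 for g in gaps if g == 2)

print(f"{len(ps)} primes, {twins} twin pairs, largest gap {max(gaps)}")
```

Counting twins below any finite bound is trivial; the conjecture is about whether the count keeps growing forever, which no amount of computation can settle.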
The *twin prime conjecture*, that there are infinitely many pairs of primes differing by 2, has resisted proof for over a century. Progress on bounded gaps between primes accelerated dramatically after Yitang Zhang's 2013 breakthrough showing that infinitely many prime pairs differ by at most 70 million. The *Polymath project* reduced this to 246, and James Maynard (2022 Fields Medalist) independently achieved similar bounds.

Non-rigorous probabilistic analyses suggest further improvements may be possible. One analysis using chaotic perturbations and random matrix theory suggests unconditional gaps of at most 180 and conditional gaps of at most 8 under partial forms of the *Elliott-Halberstam conjecture*.[^7] These remain heuristic suggestions rather than rigorous bounds, but they indicate the directions researchers are pursuing. The ultimate goal, proving that gaps of 2 exist infinitely often, remains out of reach. Current methods cannot distinguish pairs differing by 2 from pairs differing by 4.

## Random Matrices and Neural Networks

But mathematics finds unexpected bridges between domains. Number theory connects to neural networks through *random matrix theory*, a framework developed for physics that now illuminates both.

*RMT*, developed to understand nuclear physics in the 1950s, has found unexpected applications in machine learning. When neural network weights are initialized randomly, the *eigenvalue distributions* of weight matrices follow predictions from RMT. The connection goes deeper. During training, the eigenvalue spectrum of the *Hessian matrix*, the matrix of second derivatives of the loss function, influences optimization dynamics. Eigenvalue distributions that deviate from RMT predictions can indicate poor training or an approach toward singular points in the loss landscape.

Researchers have used RMT to analyze the generalization properties of *overparameterized* neural networks, networks with more parameters than training examples.
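The baseline RMT prediction is easy to observe numerically: for an $n \times n$ symmetric matrix with independent mean-zero entries of variance $\sigma^2$, the largest eigenvalue concentrates near the semicircle edge $2\sigma\sqrt{n}$. A pure-Python sketch of ours, using power iteration on a shifted matrix (all names and parameters are our own choices):

```python
import random

random.seed(1)
n, sigma = 150, 1.0
edge = 2 * sigma * n ** 0.5        # Wigner semicircle edge prediction

# Symmetric random matrix: independent Gaussian entries, symmetrized.
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i, n):
        A[i][j] = A[j][i] = random.gauss(0.0, sigma)

# Power iteration on A + edge*I.  The shift pushes the spectrum to be
# positive, so the iteration converges to the top edge instead of
# oscillating between the two edges +-2*sigma*sqrt(n).
v = [random.random() for _ in range(n)]
for _ in range(200):
    w = [sum(A[i][j] * v[j] for j in range(n)) + edge * v[i]
         for i in range(n)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

# Rayleigh quotient of the (unit) vector v, minus the shift.
rayleigh = sum(v[i] * (sum(A[i][j] * v[j] for j in range(n)) + edge * v[i])
               for i in range(n))
lam_max = rayleigh - edge
print(f"largest eigenvalue ~ {lam_max:.2f}, RMT prediction {edge:.2f}")
```

The observed value sits close to the prediction even at this modest size; the fluctuations around the edge follow the Tracy-Widom distribution, a deeper RMT result.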
Classical statistics predicts such networks should overfit catastrophically, yet they generalize well. RMT provides tools for understanding this phenomenon through the lens of high-dimensional probability. Work here remains mathematically challenging because neural network weight matrices are not truly random after training. Developing rigorous theory for trained networks, rather than just initialized ones, is an active research frontier.

## Symplectic Geometry and Mirror Symmetry

Random matrix theory connects to physics through statistical mechanics, and to machine learning through the geometry of high-dimensional optimization. Physics provides another bridge to pure mathematics. Symplectic geometry, the natural structure of classical mechanics, underlies one of the deepest conjectures connecting algebra and geometry.

*Mirror symmetry*, arising from string theory, posits that certain pairs of geometrically distinct spaces (*Calabi-Yau manifolds*) yield equivalent physical theories. *Homological mirror symmetry*, proposed by Maxim Kontsevich in 1994, reformulates this as an equivalence between derived categories of *coherent sheaves* on one manifold and the *Fukaya category* (built from *Lagrangian submanifolds*) on its mirror. The conjecture has been proved in many cases but remains open in general. Progress in 2025 continued the interplay between algebraic and symplectic geometry, with new cases of homological mirror symmetry established and connections to other areas (including the geometric Langlands program) clarified.

## Topological Data Analysis

Ideas from physics become theorems in mathematics, and theorems find applications no one predicted. *Topological data analysis* demonstrates this pattern: abstract algebraic topology, developed to study shapes, now helps scientists find structure in high-dimensional data.
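The simplest instance of the machinery, ahead of the details below, is degree-zero persistence: track the distance scale at which clusters of points merge into one another. A minimal single-linkage sketch of ours in pure Python (the point cloud and all names are our own illustration):

```python
from math import dist   # Euclidean distance (Python 3.8+)

# Toy point cloud: two tight clusters plus one distant outlier.
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
          (5.0, 5.0), (5.1, 5.0), (5.0, 5.1),
          (10.0, 0.0)]

parent = list(range(len(points)))   # union-find over the points

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path compression
        i = parent[i]
    return i

# Process edges in order of increasing length.  Each union kills one
# connected component; the edge length at that moment is the component's
# "death" scale (every component is born at scale 0).
edges = sorted((dist(p, q), i, j)
               for i, p in enumerate(points)
               for j, q in enumerate(points) if i < j)

deaths = []
for d, i, j in edges:
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj
        deaths.append(round(d, 3))

print(deaths)   # short bars: within-cluster merges; long bars: real structure
```

The four short bars (length 0.1) are within-cluster noise; the two long bars (about 7.0) record the genuine three-component structure, exactly the persistent-versus-transient distinction described next.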
The central tool is *persistent homology*, which tracks how topological features (connected components, loops, voids) appear and disappear as a scale parameter varies. Given a point cloud sampled from some underlying space, one constructs a family of *simplicial complexes* at different scales. At small scales, each point is isolated. As the scale increases, points connect to form edges, triangles, and higher-dimensional simplices. Features that persist across a wide range of scales likely reflect genuine structure in the data. Features that appear and disappear quickly are likely noise.

The output is a *persistence diagram* or *barcode* that can be analyzed using machine learning methods. Applications include shape classification, sensor network analysis, and biological data interpretation. Researchers have developed rigorous foundations for TDA, including *stability theorems*[^8] showing that small perturbations of data produce small perturbations of persistence diagrams.

## Sphere Packing

Topological data analysis shows how abstract ideas become practical tools. A more classical question about high-dimensional space has ancient roots but only recent answers: how densely can identical spheres be packed?

In three dimensions, the answer is the *Kepler conjecture*, proved by Thomas Hales in 1998 (and formally verified in 2014): the familiar pyramid stacking of oranges achieves optimal density $\pi/(3\sqrt{2}) \approx 0.74$. In high dimensions, the problem becomes far more difficult. Maryna Viazovska won the 2022 Fields Medal for proving that the *$E_8$ lattice* is optimal in 8 dimensions and (with collaborators) that the *Leech lattice* is optimal in 24 dimensions. Her proof used *modular forms*, functions with extraordinary symmetry properties that appear throughout number theory.

Beyond these special dimensions, even determining whether *lattice packings* are optimal among all packings remains open.
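In the solved dimensions the optimal densities have closed forms, and a short sketch evaluates them (the formulas are the standard ones: $\pi/\sqrt{12}$ in 2D, $\pi/\sqrt{18}$ in 3D, $\pi^4/384$ for $E_8$, $\pi^{12}/12!$ for the Leech lattice):

```python
from math import pi, sqrt, factorial

# Optimal sphere-packing densities in the dimensions where the problem
# is solved (d = 1, 2, 3, 8, 24).
optimal = {
    1: 1.0,                       # intervals tile the line completely
    2: pi / sqrt(12),             # hexagonal packing
    3: pi / sqrt(18),             # FCC stacking; Kepler conjecture (Hales)
    8: pi**4 / 384,               # E8 lattice (Viazovska)
    24: pi**12 / factorial(12),   # Leech lattice (Viazovska et al.)
}

for d, rho in optimal.items():
    print(f"d = {d:2d}: optimal density {rho:.6f}")
```

The striking pattern is how fast the achievable density collapses: from about 0.74 in three dimensions to roughly 0.25 in eight and under 0.2% in twenty-four, one reason high-dimensional packing intuition fails so badly.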
Upper bounds on packing density are known from geometric arguments, but proving that specific constructions achieve these bounds requires new techniques. The problem connects to number theory (through lattices and modular forms), coding theory (sphere packings correspond to error-correcting codes), and physics (through crystal structures and information transmission). Such problems reveal what 2025 demonstrated across mathematics: deep results require sustained effort, and the boundary between known and unknown shifts slowly.

## The Unfinished Canvas

Geometric Langlands techniques may extend to the arithmetic case, the central goal of the broader program. That extension may take decades; the geometric proof is a scaffold, not a destination. The abc conjecture may remain controversial until fundamental new ideas emerge, or until the community accepts what already exists. Sphere packing admits proofs in dimensions 1, 2, 3, 8, and 24, and nowhere else. These isolated solutions hint at deep structure that remains unexplained.

AI systems assist with proof search and verification, but generating novel mathematical insights remains largely beyond reach. The spark of conjecture is still human. Mathematics advances not in flashes of insight alone, but in patient construction over lifetimes. The questions that remain are as old as the field itself. The methods for answering them are not. Somewhere, a graduate student is beginning work on a problem that will define a career, and perhaps a generation.

---

**Citations**:

[1] "Dennis Gaitsgory wins 2025 Breakthrough Prize in mathematics." Max Planck Society, September 2024.
[2] Joshi, K. "Construction of Arithmetic Teichmüller Spaces IV: Proof of the abc-conjecture." arXiv:2403.10430, revised February 2025.
[3] Glimm, J., and Petrillo, J. "Smooth Solutions of the Navier-Stokes Equation." arXiv:2505.13816, May 2025.
[4] Rao, A. "The Story of Sunflowers." arXiv:2509.14790, September 2025.
[5] Ghadimi, M. "Heuristic Bounded Prime Gaps via a Chaotic Multidimensional Sieve and Random Matrix Theory." arXiv:2507.17986, July 2025.
[6] Evans-Lee, K., and Lamb, K. "Identification of Anomalous Geospatial Trajectories via Persistent Homology." arXiv:2410.03889, October 2024.
[7] Degenne, R., et al. "Formalization of Brownian motion in Lean." arXiv:2511.20118, November 2025.
[8] Ospanov, A., et al. "APOLLO: Automated LLM and Lean Collaboration for Advanced Formal Reasoning." arXiv:2505.05758, May 2025.

**Footnotes**:

[^1]: The Breakthrough Prize is the largest monetary award in mathematics at $3 million, exceeding the Nobel Prize (which is not awarded in mathematics) and the Fields Medal.
[^2]: Mathlib is a community-maintained library that formalizes mathematics in Lean. It has grown to millions of lines of code covering analysis, algebra, topology, and number theory.
[^3]: The miniF2F benchmark contains 488 mathematical problem statements formalized in Lean and Isabelle, drawn from mathematical olympiads and undergraduate courses.
[^4]: Joshi's approach differs from Mochizuki's in using more standard mathematical language and constructions, potentially making it more accessible to verification. Whether the approaches are equivalent remains disputed.
[^5]: Millennium Prize Problems carry a $1 million prize from the Clay Mathematics Institute. Six of the original seven remain unsolved; the Poincaré conjecture was resolved by Grigori Perelman in 2003.
[^6]: The robust sunflower lemma relaxes the requirement that the core be exactly the same for all pairs, allowing it to differ in "pseudorandom" ways. This weaker condition suffices for most applications.
[^7]: The Elliott-Halberstam conjecture concerns the distribution of primes in arithmetic progressions. Its truth would imply significant improvements in prime gap bounds.
[^8]: Stability is crucial for applications because real data is noisy. Without stability, small measurement errors could produce wildly different topological invariants.
