POPL 2026
Sun 11 - Sat 17 January 2026 Rennes, France
Wed 14 Jan 2026, 16:35 - 17:00, at Réfectoire. Session: Machine Learning. Chair(s): Satnam Singh

Compiling to Linear Neurons

We don’t program neural networks directly. Instead, we rely on an indirect style where learning algorithms, like gradient descent, determine a neural network’s function by learning from data. This indirect style is often a virtue; it empowers us to solve problems that were previously impossible. But it lacks discrete structure. We can’t compile most algorithms into a neural network—even if these algorithms could help the network learn. This limitation occurs because discrete algorithms are not obviously differentiable, making them incompatible with the gradient-based learning algorithms that determine a neural network’s function. To address this, we introduce Cajal: a typed, higher-order, linear programming language intended to be a minimal vehicle for exploring a direct style of programming neural networks. We prove that Cajal programs compile to linear neurons, allowing discrete algorithms to be expressed in a differentiable form compatible with gradient-based learning. With our implementation of Cajal, we conduct several experiments in which we link these linear neurons against other neural networks to determine part of their function prior to learning. Linking with these neurons allows networks to learn faster, with greater data efficiency, and in a way that’s easier to debug. A key lesson is that linear programming languages provide a path towards directly programming neural networks, enabling a rich interplay between learning and the discrete structures of ordinary programming.
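The idea in the abstract can be sketched in miniature. The following is not Cajal and not the authors' compiler; it is a hypothetical numpy sketch in which one discrete routine (reversing a vector) is expressed as a fixed linear map, so that it is differentiable and can be "linked" against a trainable layer, determining part of the composed network's function before any learning happens.

```python
import numpy as np

def compile_reverse(n):
    """'Compile' the discrete reverse algorithm into a linear neuron:
    a fixed matrix P with P @ x == x[::-1]. (Toy stand-in, not Cajal.)"""
    return np.flipud(np.eye(n))

n = 3
P = compile_reverse(n)       # frozen: fixed prior to learning
W = np.zeros((1, n))         # trainable layer linked after the compiled neuron

x = np.array([1.0, 2.0, 3.0])
target = 6.0

# Forward pass: trainable layer composed with the compiled linear neuron.
y = (W @ (P @ x)).item()

# Backward pass for the squared loss (y - target)**2: because P is linear,
# the chain rule gives an exact gradient for W, so gradient descent
# flows through the compiled code unobstructed.
grad_W = 2.0 * (y - target) * (P @ x)

# One gradient-descent step on W only; the compiled neuron P stays fixed.
W = W - 0.1 * grad_W
```

The point of the sketch is the division of labor: the discrete structure (here, reversal) is determined by compilation, while learning adjusts only the linked weights around it.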

Wed 14 Jan

Displayed time zone: Brussels, Copenhagen, Madrid, Paris

16:10 - 17:25
Machine Learning (POPL) at Réfectoire
Chair(s): Satnam Singh (Harmonic)
16:10
25m
Talk
ChopChop: A Programmable Framework for Semantically Constraining the Output of Language Models
POPL
Shaan Nagy (University of California at San Diego), Timothy Zhou (University of California, San Diego), Nadia Polikarpova (University of California at San Diego), Loris D'Antoni (University of California at San Diego)
16:35
25m
Talk
Compiling to Linear Neurons
POPL
Joey Velez-Ginorio (University of Pennsylvania), Nada Amin (Harvard University), Konrad Kording (University of Pennsylvania), Steve Zdancewic (University of Pennsylvania)
17:00
25m
Talk
Fuzzing Guided by Bayesian Program Analysis
POPL
Yifan Zhang (Peking University), Xin Zhang (Peking University)