POPL 2026
Sun 11 - Sat 17 January 2026, Rennes, France

The Languages for Inference (LAFI) workshop aims to bring programming-language and machine-learning researchers together to advance all aspects of languages for inference: languages with built-in support for expressing probabilistic or differentiable models, together with methods for inference and optimization over them, all expressed as programs so that they are easier to reason about, use, and reuse.

Topics include but are not limited to:

  • Design of programming languages for probabilistic inference and/or differentiable programming
  • Inference algorithms for probabilistic programming languages, including ones that incorporate automatic differentiation
  • Automatic differentiation algorithms for differentiable programming languages
  • Probabilistic generative modelling and inference
  • Variational and differentiable modelling and inference
  • Semantics (axiomatic, operational, denotational, games, etc.) and types for probabilistic and/or differentiable programming
  • Efficient and correct implementation
  • Applications of probabilistic and/or differentiable programming
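
As an illustration of what these topics are about, the short sketch below (plain Python, written for this page and not drawn from any LAFI submission) shows the basic shape of a probabilistic program: a generative model expressed as ordinary code, plus an inference routine, here naive importance sampling, that answers a question about it. The coin-flip model, uniform prior, and sampler are illustrative assumptions only.

    # A minimal, self-contained sketch (plain Python, no probabilistic-programming
    # library) of the kind of program LAFI is about: a generative model written as
    # ordinary code, together with an inference routine -- here naive importance
    # sampling -- that runs against it. The model and data are illustrative
    # choices, not taken from any LAFI submission.

    import random


    def model(p):
        """Generative model: flip a coin with unknown bias p three times."""
        return [random.random() < p for _ in range(3)]


    def likelihood(p, observed):
        """Probability of the observed flips under bias p."""
        result = 1.0
        for flip in observed:
            result *= p if flip else 1.0 - p
        return result


    def posterior_mean(observed, num_samples=100_000):
        """Estimate E[p | observed] by importance sampling from the uniform prior."""
        total_weight = 0.0
        weighted_p = 0.0
        for _ in range(num_samples):
            p = random.random()          # draw p from the Uniform(0, 1) prior
            w = likelihood(p, observed)  # weight the draw by the data likelihood
            total_weight += w
            weighted_p += w * p
        return weighted_p / total_weight


    if __name__ == "__main__":
        # Forward simulation: running the model as an ordinary program.
        print("Simulated flips with bias 0.6:", model(0.6))
        data = [True, True, False]       # two observed heads, one tail
        print("Posterior mean of the coin bias:", posterior_mean(data))
        # With a uniform prior and 2 heads out of 3 flips, the exact posterior is
        # Beta(3, 2), whose mean is 3/5 = 0.6; the estimate should be close to that.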

This year, LAFI is sponsored by BASIS (https://www.basis.ai/).


Program

This program is tentative and subject to change.


Sun 11 Jan

Displayed time zone: Brussels, Copenhagen, Madrid, Paris

09:00 - 10:30  First Session (LAFI) at Salle 13

09:00 (5m) Day opening: Welcome
  Hugo Paquet (Inria, École Normale Supérieure), Alexander K. Lew (Yale University)
09:06 (20m) Industry talk: Basis — A Programming Languages Take on Principled Foundations for AI
09:27 (10m) Talk: Towards an Equational Calculus of Interventions
  Shubh Agrawal (Northeastern University), Jialu Bao (Northeastern University), Steven Holtzen (Northeastern University)
09:39 (10m) Talk: Typed Abstractions for Causal Probabilistic Programming
  Theo Wang (University of Cambridge, University of Oxford), Dario Stein (University of Oxford), Eli Bingham (Broad Institute), Jack Feser (Basis), Ohad Kammar (University of Edinburgh), Michael Lee (University of Cambridge, UK), Jeremy Yallop (University of Cambridge)
09:51 (10m) Talk: Towards Representation Agnostic Probabilistic Programming
10:03 (10m) Talk: A Design for Massively Parallel Gibbs Sampling on the GPU via Static and Dynamic Analysis of Probabilistic Programs
  Matin Ghavami (Massachusetts Institute of Technology), Martin C. Rinard (Massachusetts Institute of Technology), Vikash Mansinghka (Massachusetts Institute of Technology)
10:15 (10m) Talk: A Design Proposal for GraPPL: Probabilistic Programming with Low-Level, High-Performance GPU Programmable Inference
  Karen Chung (Massachusetts Institute of Technology), Elias Rojas Collins (MIT), McCoy Reynolds Becker (MIT), Mathieu Huot (MIT), Vikash Mansinghka (Massachusetts Institute of Technology)

10:30 - 11:00  Coffee break (POPL Catering)

11:00 - 12:30  Second Session (LAFI) at Salle 13

11:00 (45m) Keynote
11:45 (10m) Talk: Monte Carlo Analysis of Probabilistic Programs
  A. Zhao (Princeton), David Walker (Princeton University)
11:56 (10m) Talk: Verifying Sampling Algorithms via Distributional Invariants
  Daniel Zilken, Tobias Winkler (RWTH Aachen University), Kevin Batz (RWTH Aachen University), Joost-Pieter Katoen (RWTH Aachen University)
12:07 (10m) Talk: Sequential Monte Carlo Program Synthesis with Refinement Proposals
  Maddy Bowers (Massachusetts Institute of Technology), Mauricio Barba da Costa (MIT), Xiaoyan Wang (Massachusetts Institute of Technology), Joshua B. Tenenbaum (Massachusetts Institute of Technology), Vikash Mansinghka (Massachusetts Institute of Technology), Armando Solar-Lezama (Massachusetts Institute of Technology), Alexander K. Lew (Yale University)
12:18 (10m) Talk: A Word Sampler for Well-Typed Functions

12:30 - 14:00  Lunch (POPL Catering)

14:00 - 15:30  Third Session (LAFI) at Salle 13

14:00 (10m) Talk: Towards Compiling Higher-Order Programs to Bayesian Networks
  Claudia Faggian (CNRS, Université Paris Cité), Gabriele Vanoni (IRIF, Université Paris Cité)
14:12 (10m) Talk: On Contextual Distances in Randomized Programming: Amplification and Lower Bounds
14:24 (10m) Talk: Nominal Semantics for First-class Automatic Differentiation
  Jack Czenszak (Yale University), Alexander K. Lew (Yale University)
14:36 (10m) Talk: Semantic Foundations for Laziness in Discrete Probabilistic Programming
  Simon Castellan (University of Rennes; Inria; CNRS; IRISA), Tom Hirschowitz (Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LAMA, 73000 Chambéry), Hugo Paquet (Inria, École Normale Supérieure)
14:48 (10m) Talk: Incremental Density Computation for Efficient Programmable Inference
  Fabian Zaiser (MIT), Vikash Mansinghka (Massachusetts Institute of Technology), Alexander K. Lew (Yale University)
15:00 (10m) Talk: Generating Functions Meet Occupation Measures: Invariant Synthesis for Probabilistic Loops
  Kevin Batz, Adrian Gallus (RWTH Aachen University), Darion Haase (RWTH Aachen University), Benjamin Lucien Kaminski (Saarland University; University College London), Joost-Pieter Katoen (RWTH Aachen University), Lutz Klinkenberg (RWTH Aachen University), Tobias Winkler (RWTH Aachen University)
15:12 (10m) Talk: Probabilistic Programming Meets Automata Theory: Exact Inference using Weighted Automata
  Dominik Geißler (TU Berlin, Germany), Tobias Winkler (RWTH Aachen University)

15:30 - 16:00  Coffee break (POPL Catering)

16:00 - 18:00  Fourth Session (LAFI) at Salle 13

16:00 (10m) Talk: Multi-Agent Systems for Traceable Bayesian Workflow
  Xianda Sun (University of Cambridge), Andrew D. Gordon (Cogna and University of Edinburgh), Hong Ge (University of Cambridge)
16:12 (10m) Talk: Grammar-Constrained LLM Generation for Reliable and Efficient Probabilistic Program Synthesis
  Madhav Kanda (University of Illinois Urbana-Champaign), Shubham Ugare (Meta), Sasa Misailovic (University of Illinois at Urbana-Champaign)
16:24 (10m) Talk: Language-Model Probabilistic Programming for Improving Autoformalization via Cycle Consistency and Incremental Type-Checking
  Mauricio Barba da Costa (MIT), Fabian Zaiser (MIT), Katherine Collins (MIT), Romir Patel (MIT), Timothy O'Donnell, Alexander K. Lew (Yale University), Joshua B. Tenenbaum (Massachusetts Institute of Technology), Vikash K. Mansinghka (Massachusetts Institute of Technology), Cameron Freer (Massachusetts Institute of Technology)
16:35 (80m) Poster Session

Accepted Papers

  • A Design for Massively Parallel Gibbs Sampling on the GPU via Static and Dynamic Analysis of Probabilistic Programs
  • A Design Proposal for GraPPL: Probabilistic Programming with Low-Level, High-Performance GPU Programmable Inference
  • A Word Sampler for Well-Typed Functions
  • Generating Functions Meet Occupation Measures: Invariant Synthesis for Probabilistic Loops
  • Grammar-Constrained LLM Generation for Reliable and Efficient Probabilistic Program Synthesis
  • Incremental Density Computation for Efficient Programmable Inference
  • Language-Model Probabilistic Programming for Improving Autoformalization via Cycle Consistency and Incremental Type-Checking
  • Monte Carlo Analysis of Probabilistic Programs
  • Multi-Agent Systems for Traceable Bayesian Workflow
  • Nominal Semantics for First-class Automatic Differentiation
  • On Contextual Distances in Randomized Programming: Amplification and Lower Bounds
  • Probabilistic Programming Meets Automata Theory: Exact Inference using Weighted Automata
  • Semantic Foundations for Laziness in Discrete Probabilistic Programming
  • Sequential Monte Carlo Program Synthesis with Refinement Proposals
  • Towards an Equational Calculus of Interventions
  • Towards Compiling Higher-Order Programs to Bayesian Networks
  • Towards Representation Agnostic Probabilistic Programming
  • Typed Abstractions for Causal Probabilistic Programming
  • Verifying Sampling Algorithms via Distributional Invariants

Call for Extended Abstracts

Submission deadline: October 30, 2025, AoE

Submission website: https://lafi26.hotcrp.com

We invite the submission of extended abstracts (2 pages + references + optional appendices) to the Languages for Inference (LAFI) workshop, colocated with POPL 2026.

LAFI aims to bring programming-language and machine-learning researchers together to advance all aspects of languages for inference. Topics include but are not limited to:

  • Design of programming languages for probabilistic inference and/or differentiable programming
  • Inference algorithms for probabilistic programming languages, including ones that incorporate automatic differentiation
  • Automatic differentiation algorithms for differentiable programming languages
  • Probabilistic generative modelling and inference
  • Variational and differentiable modelling and inference
  • Semantics (axiomatic, operational, denotational, games, etc.) and types for probabilistic and/or differentiable programming
  • Efficient and correct implementation
  • Applications of probabilistic and/or differentiable programming

Dissemination of research. The workshop is informal, and our goal is to foster collaboration and establish a shared foundation for research on languages for inference. The proceedings will not be a formal or archival publication, and we expect to spend only a portion of the workshop day on traditional research talks.

Format. Uploads must be in PDF. Although no specific format is required, we suggest using an ACM template (either single- or double-column) in review mode, which adds line number annotations that reviewers can refer to when giving feedback.
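
For example, with the standard acmart class, the following class options give one possible setup; this is an illustrative sketch only, and any equivalent template is equally welcome:

    % Illustrative sketch only; LAFI does not mandate a specific format.
    % `acmsmall` gives a single-column layout (swap in `sigplan` for double-column),
    % `review` adds the line numbers reviewers can refer to, and `anonymous` hides
    % author names for peer review.
    \documentclass[acmsmall,review,anonymous]{acmart}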

Page limit: 2 pages of main content, excluding references and appendices. (Please note that reviewers are not required or expected to read appendices.)

Anonymity: submissions should be anonymized for peer review.

In line with the SIGPLAN Republication Policy, inclusion of extended abstracts in the program should not preclude later formal publication.

We strive to create an inclusive environment that does not require presenters or participants to travel.