POPL 2026
Sun 11 - Sat 17 January 2026 Rennes, France

This program is tentative and subject to change.

Wed 14 Jan 2026 17:00 - 17:25 at Réfectoire - Machine Learning

We propose a novel approach that leverages Bayesian program analysis to guide large-scale target-guided greybox fuzzing (LTGF). LTGF prioritizes program locations (targets) that are likely to contain bugs and applies directed mutation towards high-priority targets. However, existing LTGF approaches suffer from coarse and heuristic target prioritization strategies, and lack a systematic design to fully exploit feedback from the fuzzing process. We systematically define this prioritization process as the reachable fuzzing targets problem. Bayesian program analysis attaches probabilities to analysis rules and transforms the analysis results into a Bayesian model. By redefining the semantics of Bayesian program analysis, we enable the prediction of whether each target is reachable by the fuzzer, and dynamically adjust the predictions based on fuzzer feedback. On the one hand, Bayesian program analysis builds Bayesian models based on program semantics, enabling systematic and fine-grained prioritization. On the other hand, Bayesian program analysis systematically learns from fuzzing feedback, making its guidance adaptive. Moreover, this combination extends the application of Bayesian program analysis from alarm ranking to fully automated bug discovery. We implement our approach and evaluate it against several state-of-the-art fuzzers. On a suite of real-world programs, our approach discovers 3.25× to 13× more unique bugs than the baselines. In addition, our approach identifies 39 previously unknown bugs in well-tested programs, 30 of which have been assigned CVEs.
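The abstract describes the mechanism only at a high level. The following is a minimal, hypothetical Python sketch of the general idea, not the paper's implementation: static priors derived from probabilistic analysis rules, a simple Beta-Bernoulli update from fuzzer feedback standing in for full Bayesian-network inference, and posterior-based target prioritization. All names (Target, prior_from_rules, prioritize), locations, and numbers are illustrative assumptions.

# Hypothetical sketch of prioritizing fuzzing targets with a Bayesian model
# built from probabilistic analysis rules and updated by fuzzer feedback.
from dataclasses import dataclass

@dataclass
class Target:
    location: str        # program location, e.g. "parse.c:120" (illustrative)
    prior: float         # prior P(reachable) from the probabilistic analysis
    alpha: float = 1.0   # Beta-Bernoulli pseudo-counts for fuzzer feedback
    beta: float = 1.0

    def posterior(self) -> float:
        # Blend the static prior with the empirical reach rate observed by the
        # fuzzer; a toy stand-in for the paper's Bayesian inference.
        empirical = self.alpha / (self.alpha + self.beta)
        return 0.5 * self.prior + 0.5 * empirical

    def observe(self, reached: bool) -> None:
        # Fuzzer feedback from the last round: was progress made towards
        # (or was the target itself) reached?
        if reached:
            self.alpha += 1.0
        else:
            self.beta += 1.0

def prior_from_rules(rule_probs: list[float]) -> float:
    # Toy probabilistic derivation: the target is reachable only if every
    # analysis rule on its derivation chain holds, assuming independence.
    p = 1.0
    for q in rule_probs:
        p *= q
    return p

def prioritize(targets: list[Target]) -> list[Target]:
    # Directed mutation budget goes to the targets with the highest posterior
    # probability of being reachable.
    return sorted(targets, key=lambda t: t.posterior(), reverse=True)

if __name__ == "__main__":
    targets = [
        Target("parse.c:120", prior_from_rules([0.9, 0.8])),
        Target("codec.c:57",  prior_from_rules([0.6, 0.5, 0.7])),
        Target("io.c:301",    prior_from_rules([0.95])),
    ]
    # One round of (simulated) fuzzer feedback, then re-rank.
    targets[0].observe(reached=True)
    targets[1].observe(reached=False)
    targets[2].observe(reached=True)
    for t in prioritize(targets):
        print(f"{t.location:12s}  posterior={t.posterior():.3f}")

In this toy version the feedback loop is a single conjugate update per target; the actual system, per the abstract, redefines the semantics of Bayesian program analysis so the whole analysis result acts as the Bayesian model being conditioned on fuzzer observations.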


Wed 14 Jan

Displayed time zone: Brussels, Copenhagen, Madrid, Paris

16:10 - 17:25
Machine Learning (POPL) at Réfectoire
16:10
25m
Talk
ChopChop: A Programmable Framework for Semantically Constraining the Output of Language Models
POPL
Shaan Nagy (University of California at San Diego), Timothy Zhou, Nadia Polikarpova (University of California at San Diego), Loris D'Antoni (University of California at San Diego)
16:35
25m
Talk
Compiling to Linear Neurons
POPL
Joey Velez-Ginorio (University of Pennsylvania), Nada Amin (Harvard University), Konrad Kording (University of Pennsylvania), Steve Zdancewic (University of Pennsylvania)
17:00
25m
Talk
Fuzzing Guided by Bayesian Program Analysis
POPL
Yifan Zhang (Peking University), Xin Zhang (Peking University)