Fuzzer. The competing fuzzer must link against a libFuzzer-style function called
int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size);
which accepts an array Data of size Size. The objective of the fuzzer is to generate as many unique crashes as possible. The fuzzer can leverage any type of feedback from the executed program as guidance toward this objective.
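For illustration, a minimal fuzz target implementing this entry point might look as follows. This is only a sketch: parse_record is a hypothetical stand-in for whatever API a real benchmark program exposes, not part of the competition interface.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical function under test: parses a tiny length-prefixed record.
   A real benchmark program would expose its own parsing routine instead. */
static int parse_record(const uint8_t *buf, size_t len) {
  if (len < 2) return -1;                 /* need length byte + checksum */
  size_t payload_len = buf[0];
  if (len < payload_len + 2) return -1;   /* reject truncated records */
  uint8_t checksum = 0;
  for (size_t i = 0; i < payload_len; i++) checksum ^= buf[1 + i];
  return checksum == buf[1 + payload_len] ? 0 : -1;
}

/* The entry point the competing fuzzers drive: every generated input is
   handed to this function as a byte array. */
int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) {
  parse_record(Data, Size);
  return 0;  /* non-zero return values are reserved by libFuzzer */
}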
For our competition, we use Google’s FuzzBench fuzzer evaluation platform [ESEC/FSE’21]. FuzzBench provides an easy API for integrating fuzzers and enables painless yet rigorous evaluations against a set of benchmark programs selected from real-world open-source projects and against a set of state-of-the-art fuzzers. FuzzBench experiments can be run both locally and publicly. Once an experiment concludes, FuzzBench generates an evaluation report with graphs and statistical tests that measure and visualize the effect size and statistical significance of the comparison across the chosen fuzzers and benchmark programs.
Fuzzer Integration. From the perspective of the fuzzer, FuzzBench provides generic access to the benchmark programs: each input the fuzzer generates is passed to LLVMFuzzerTestOneInput and thereby turned into an execution of the program under test.
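For intuition, the fuzzer side of this contract can be pictured as a driver that links against the instrumented target and feeds it one input per execution. The following is a minimal sketch in the spirit of a stand-alone replay driver; an actual competing fuzzer would replace the file-replay loop with its own input generation and feedback collection.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Entry point provided by the benchmark program; the driver only sees this
   declaration and links against the compiled target. */
extern int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size);

/* Replay every file given on the command line through the target,
   one execution per input. */
int main(int argc, char **argv) {
  for (int i = 1; i < argc; i++) {
    FILE *f = fopen(argv[i], "rb");
    if (!f) continue;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    if (size < 0) { fclose(f); continue; }
    uint8_t *data = malloc(size > 0 ? (size_t)size : 1);
    size_t nread = fread(data, 1, (size_t)size, f);
    fclose(f);
    LLVMFuzzerTestOneInput(data, nread);
    free(data);
  }
  return 0;
}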
Registration (by 1 December 2023).
The primary contact of the participants’ team submits a short tool report (up to 2 pages in IEEE conference format; \documentclass[10pt,conference]{IEEEtran} without including the compsoc or compsocconf options) describing the technology and ideas behind the fuzzer you want to submit to the competition. The tool report should be sent to sbft24fuzzcomp@googlegroups.com as part of the registration. Please format your email subject as ‘[Report submission] {Fuzzer Name}’.
Preliminaries. First, we encourage every participant to get familiar with the FuzzBench infrastructure. Create a private fork of the FuzzBench repository and install the prerequisites. You can then run one fuzzer (e.g., AFL) on one program (e.g., libpng-1.2.56) using make run-afl-libpng-1.2.56. Get familiar with fuzzer integration to integrate your own fuzzer into FuzzBench, and with the local setup to run multiple fuzzers, including yours, on multiple programs and get coverage and bug-finding reports.
Preparation. The participants are encouraged to use the programs and infrastructure available in FuzzBench for their local evaluation. The participants can submit new fuzzers or extend existing fuzzers with their own novel ideas. The participants are allowed to modify the compilation of the programs to inject execution feedback for the fuzzer, as sketched below. During development, regularly make sure that make presubmit succeeds and that your fuzzer builds on the benchmark programs: the checks can be quite pedantic, and you do not want to fail due to trivial errors.
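One common way to inject such execution feedback is to build the benchmark programs with clang’s -fsanitize-coverage=trace-pc-guard instrumentation and link in a small runtime that implements the corresponding callbacks. The following minimal sketch records per-edge hit counts; the edge_hits array and MAP_SIZE are illustrative choices of this sketch, not a FuzzBench-mandated interface.

#include <stdint.h>

#define MAP_SIZE 65536
static uint8_t edge_hits[MAP_SIZE];   /* illustrative in-process hit map */

/* Called once per instrumented module at startup; assigns each edge guard a
   unique id. Part of clang's -fsanitize-coverage=trace-pc-guard interface. */
void __sanitizer_cov_trace_pc_guard_init(uint32_t *start, uint32_t *stop) {
  static uint32_t next_id = 1;          /* 0 marks an uninitialised guard */
  if (start == stop || *start) return;  /* module already initialised */
  for (uint32_t *g = start; g < stop; g++) *g = next_id++;
}

/* Called on every instrumented edge during execution: record the hit so the
   fuzzer can use coverage as guidance for future inputs. */
void __sanitizer_cov_trace_pc_guard(uint32_t *guard) {
  if (*guard) edge_hits[*guard % MAP_SIZE]++;
}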
Pull Request Deadline (on 28 December 2023).
The participants publicly integrate their fuzzers into FuzzBench by submitting a Pull Request and getting it approved for integration. We will only consider fuzzers that pass all CI tests in the PR and that run for at least 30 minutes.
The primary contact of the participants’ team submits a short tool report (up to 2 pages in IEEE conference format; \documentclass[10pt,conference]{IEEEtran}
without including the compsoc or compsocconf options) describing the technology and ideas behind the submitted fuzzer and the experience of integrating it into FuzzBench.
Results. Finally, the winners will be announced live during the workshop in April.
Top-performing participants will be eligible to be considered for a new OSS-Fuzz FuzzBench integration reward (up to $11,337, depending on impact).
The fuzzing competition is organized jointly by the Software Engineering Group of the University of Sydney, led by Rahul Gopinath, Huaming Chen, and Xi Wu, and by Philipp Görz and Joschua Schilling of the CISPA Helmholtz Center for Information Security, Germany.