SoK: Prudent Evaluation Practices for Fuzzing
CoRR (2024)
Abstract
Fuzzing has proven to be a highly effective approach to uncover software bugs
over the past decade. After AFL popularized the groundbreaking concept of
lightweight coverage feedback, the field of fuzzing has seen a vast amount of
scientific work proposing new techniques, improving methodological aspects of
existing strategies, or porting existing methods to new domains. All such work
must demonstrate its merit by showing its applicability to a problem, measuring
its performance, and often demonstrating its superiority over existing works in a
thorough empirical evaluation. Yet, fuzzing is highly sensitive to its target,
environment, and circumstances, e.g., randomness in the testing process. After
all, relying on randomness is one of the core principles of fuzzing, governing
many aspects of a fuzzer's behavior. Combined with an environment that is often
difficult to control, this randomness makes the reproducibility of experiments a
crucial concern that requires a prudent evaluation setup. To address these
threats to validity,
several works, most notably Evaluating Fuzz Testing by Klees et al., have
outlined how a carefully designed evaluation setup should be implemented, but
it remains unknown to what extent their recommendations have been adopted in
practice. In this work, we systematically analyze the evaluation of 150 fuzzing
papers published at the top venues between 2018 and 2023. We study how existing
guidelines are implemented and observe potential shortcomings and pitfalls. We
find a surprising disregard for the existing guidelines on statistical tests, as
well as systematic errors in fuzzing evaluations. For example, when
investigating reported bugs, ...
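
The guidelines referenced above, most prominently Klees et al.'s "Evaluating
Fuzz Testing", recommend running many independent trials and comparing fuzzers
with a non-parametric statistical test (the Mann-Whitney U test) together with
an effect size such as Vargha-Delaney A12. As a minimal sketch of what such a
comparison could look like in practice (all coverage numbers and variable names
below are illustrative assumptions, not data from the paper):

    # Minimal sketch of the statistical comparison recommended by Klees et al.
    # All coverage numbers below are made-up placeholders for illustration.
    from scipy.stats import mannwhitneyu

    # Final branch coverage from, e.g., 10 independent trials per fuzzer.
    coverage_fuzzer_a = [1510, 1498, 1523, 1480, 1535, 1502, 1517, 1491, 1528, 1505]
    coverage_fuzzer_b = [1450, 1462, 1441, 1475, 1458, 1449, 1467, 1453, 1470, 1444]

    # Mann-Whitney U test: do the two samples differ significantly?
    statistic, p_value = mannwhitneyu(coverage_fuzzer_a, coverage_fuzzer_b,
                                      alternative="two-sided")

    # Vargha-Delaney A12 effect size: probability that a random trial of
    # fuzzer A outperforms a random trial of fuzzer B (0.5 means no effect).
    pairs = [(a, b) for a in coverage_fuzzer_a for b in coverage_fuzzer_b]
    a12 = sum(1.0 if a > b else 0.5 if a == b else 0.0 for a, b in pairs) / len(pairs)

    print(f"Mann-Whitney U = {statistic}, p = {p_value:.4f}, A12 = {a12:.2f}")

Reporting both a p-value and an effect size over repeated trials, rather than a
single coverage average, is the kind of evaluation hygiene whose adoption the
paper measures.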