Proceedings of the 2019 ACM Asia Conference on Computer and Communications Security 2019
DOI: 10.1145/3321705.3329845

A Feature-Oriented Corpus for Understanding, Evaluating and Improving Fuzz Testing

Cited by 10 publications (5 citation statements)
References 16 publications
“…First, some fuzzers may be difficult or complicated to use directly. For instance, Zhu et al. [15] stated that they could not appropriately run Driller [17], T-Fuzz [7] and VUzzer [8]. Second, we find that there are numerous flaws (e.g., incorrect judgment of crashes, abnormal behaviors during the fuzzing process) in the implementation of many fuzzers, which may negatively impact their performance.…”
Section: Motivation of Unifuzz
confidence: 74%
“…Conducting comprehensive and pragmatic evaluations of fuzzers entails overcoming multiple important challenges. First, although many fuzzers have been open sourced, their usability in practice is often limited, as reported by recent research [7,15], which results in reproducibility issues, impeding comparison. Thus, it is necessary to test and enhance fuzzers' usability.…”
Section: Introduction
confidence: 99%
“…We integrated GSPR into two popular coverage-guided fuzzers, AFL and AFLFast, which we chose because they are representative of coverage-guided fuzzers. They are frequently adopted by other works [4], [9][10][11][12]. We evaluated 7 real open-source applications.…”
Section: GSPR and Repetition Rate
confidence: 99%
“…The evaluation of fuzzing is usually conducted separately from the detection stage. However, we consider the evaluation a part of the fuzzing process because a proper evaluation can help improve the performance of fuzzing [215]. A proper evaluation includes an effective experimental corpus [215], a fair evaluation environment/platform [30,104,126], reasonable fuzzing time [17,20], and comprehensive comparison metrics [96,104]. Although these research efforts have advanced proper evaluation, it remains an open question how to evaluate techniques (i.e., the fuzzing algorithms) instead of implementations (i.e., the code that implements the algorithms) [18].…”
Section: Evaluation Theory
confidence: 99%
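The "comprehensive comparison metrics" the quote above alludes to are, in common fuzzing-evaluation practice, realized as aggregates over repeated trials (e.g., median coverage) together with a rank-based statistical test rather than a single-run comparison. A minimal sketch, with hypothetical coverage numbers and a hand-rolled Mann-Whitney U statistic (the fuzzer names and trial counts are illustrative, not taken from the cited papers):

```python
# Sketch: comparing two fuzzers across repeated independent trials,
# reporting medians plus a rank-based statistic instead of a single run.
# All coverage values below are hypothetical.

from statistics import median

def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic for sample xs vs. ys:
    the number of (x, y) pairs with x > y, counting ties as 0.5."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical branch-coverage counts from 5 independent 24-hour trials.
fuzzer_a_cov = [1032, 1050, 998, 1071, 1045]
fuzzer_b_cov = [1101, 1088, 1123, 1079, 1110]

print("Fuzzer A median:", median(fuzzer_a_cov))
print("Fuzzer B median:", median(fuzzer_b_cov))
print("U (B vs. A):", mann_whitney_u(fuzzer_b_cov, fuzzer_a_cov))
```

With n trials per fuzzer, U ranges from 0 to n²; here U = 25 (every B trial beat every A trial), the strongest possible separation for 5-vs-5 samples. In practice one would convert U to a p-value (e.g., via `scipy.stats.mannwhitneyu`) before claiming one fuzzer outperforms another.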