Recently, there has been significant growth of interest in applying software engineering techniques to the quality assurance of deep learning (DL) systems. One popular direction is deep learning testing: given a property to test, defects of DL systems are found either by fuzzing or by guided search with the help of certain testing metrics. However, recent studies have revealed that the neuron coverage metrics commonly used by most existing DL testing approaches are not necessarily correlated with model quality (e.g., robustness, the most studied model property), nor do they provide an effective measure of confidence in the model quality after testing. In this work, we address this gap by proposing a novel testing framework called
QuoTe (i.e., Quality-oriented Testing). A key part of QuoTe
is a quantitative measurement of 1) the value of each test case in enhancing the model property of interest (often via retraining), and 2) the convergence quality of the model property improvement.
QuoTe utilizes the proposed metric to automatically select or generate valuable test cases for improving model quality. The proposed metric is also a lightweight yet strong indicator of how well the improvement has converged. Extensive experiments on both image and tabular datasets with a variety of model architectures confirm the effectiveness and efficiency of QuoTe in improving DL model quality, i.e., robustness and fairness. As a generic quality-oriented testing framework, future adaptations can be made to other domains (e.g., text) as well as to other model properties.
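To make the select-and-retrain workflow described above concrete, the following is a minimal sketch of a generic quality-oriented testing loop. It is illustrative only: the names value_of, convergence_of, retrain, and the budget/threshold parameters are hypothetical placeholders standing in for the paper's actual metric and training pipeline, not QuoTe's API.

```python
from typing import Callable, List, Sequence

# Hypothetical interfaces (assumptions, not the paper's definitions):
# value_of(model, test_case)        -> estimated value of the test case for
#                                      improving the targeted property.
# convergence_of(model, test_cases) -> lightweight indicator of how well the
#                                      property improvement has converged.
ValueMetric = Callable[[object, object], float]
ConvergenceMetric = Callable[[object, Sequence[object]], float]


def quality_oriented_testing(
    model: object,
    candidate_tests: List[object],
    value_of: ValueMetric,
    convergence_of: ConvergenceMetric,
    retrain: Callable[[object, List[object]], object],
    budget: int = 100,
    target: float = 0.95,
    max_rounds: int = 10,
) -> object:
    """Generic loop: select the most valuable test cases, retrain on them,
    and stop once the convergence indicator reaches the target."""
    for _ in range(max_rounds):
        # Rank candidate test cases by their estimated value for enhancing
        # the model property of interest and keep the top `budget` cases.
        ranked = sorted(
            candidate_tests, key=lambda t: value_of(model, t), reverse=True
        )
        selected = ranked[:budget]

        # Improve the property by retraining on the selected test cases.
        model = retrain(model, selected)

        # Use the convergence indicator as the stopping criterion.
        if convergence_of(model, candidate_tests) >= target:
            break
    return model
```

The loop simply makes explicit the two roles the abstract assigns to the metric: ranking candidate test cases by their value for retraining, and deciding when the property improvement has converged.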