Deep neural networks (DNNs) are now widely used in many sectors of our society. This widespread adoption also means that faults in these DNNs can have profound adverse impacts on our daily lives. Thus, DNNs have to be comprehensively tested for "correctness" before they are released for use. Since such testing involves the use of a DNN test set, the comprehensiveness of this test set is of utmost importance. Many researchers have proposed neuron-coverage (NC) metrics to measure the comprehensiveness of a DNN test set. However, their studies focused solely on DNN testing scenarios in which a test oracle is present. In reality, many DNN testing scenarios lack a test oracle, and the results of all previous studies may therefore be inapplicable to these scenarios. Motivated by this observation, we performed an empirical study to investigate the usefulness of several common and major NC metrics through correlation analysis and invariability analysis. Our experimental results show that, on the one hand, some NC metrics are useful measures of DNN test-set comprehensiveness (in terms of correlation analysis), but on the other hand, these metrics are not robust enough (in terms of invariability analysis).
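
To make the notion of an NC metric concrete, the following is a minimal sketch of a classic neuron-coverage computation: the fraction of neurons whose activation exceeds a threshold on at least one input of the test set. The toy MLP, its random weights, and the threshold value are illustrative assumptions only, not the models or metrics evaluated in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP with ReLU activations (weights are random placeholders).
W1, b1 = rng.normal(size=(16, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 4)), np.zeros(4)

def layer_activations(x):
    """Return post-activation values of every hidden and output neuron."""
    h1 = np.maximum(0.0, x @ W1 + b1)   # hidden layer
    h2 = np.maximum(0.0, h1 @ W2 + b2)  # output layer
    return np.concatenate([h1, h2])

def neuron_coverage(test_set, threshold=0.25):
    """Fraction of neurons activated above `threshold` by any test input."""
    covered = None
    for x in test_set:
        hit = layer_activations(x) > threshold
        covered = hit if covered is None else (covered | hit)
    return covered.mean()

# Example: coverage achieved by a random 100-input test set.
test_set = rng.normal(size=(100, 16))
print(f"Neuron coverage: {neuron_coverage(test_set):.2%}")
```

A larger or more diverse test set generally activates more neurons and thus yields a higher coverage value, which is the intuition behind using NC metrics as comprehensiveness measures.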