Deep neural networks (DNNs) have achieved remarkable performance across a wide range of applications. However, they are vulnerable to adversarial examples, which has motivated the study of adversarial defenses. By adopting simple evaluation metrics, most current defenses conduct only incomplete evaluations that fall far short of a comprehensive understanding of their limitations. As a result, most proposed defenses are quickly shown to be successfully attacked, producing an "arms race" between attack and defense. To mitigate this problem, we establish a model robustness evaluation framework containing a comprehensive, rigorous, and coherent set of evaluation metrics, which can fully evaluate model robustness and provide deep insights into building robust models. With 23 evaluation metrics in total, our framework primarily focuses on the two key factors of adversarial learning (i.e., data and model). Through neuron coverage and data imperceptibility, we use data-oriented metrics to measure the integrity of test examples; by delving into model structure and behavior, we exploit model-oriented metrics to further evaluate robustness in the adversarial setting. To fully demonstrate the effectiveness of our framework, we conduct large-scale experiments on multiple datasets, including CIFAR-10 and SVHN, using different models and defenses with our open-source platform AISafety. Overall, our paper aims to provide a comprehensive evaluation framework that offers detailed inspections of model robustness, and we hope that it can inspire further improvements in model robustness.
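The abstract above does not give the framework's exact definition of neuron coverage, but a minimal sketch following the commonly used definition (the fraction of neurons whose scaled activation exceeds a threshold on at least one test input) might look like this; the function name, threshold, and array layout are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def neuron_coverage(activations, threshold=0.5):
    """Fraction of neurons activated above `threshold` by at least one
    test input. `activations` is a list of (n_inputs, n_neurons) arrays,
    one per layer, with values assumed scaled to [0, 1]."""
    covered = 0
    total = 0
    for layer_acts in activations:
        # a neuron counts as covered if any input pushes it past the threshold
        covered += int(np.sum(layer_acts.max(axis=0) > threshold))
        total += layer_acts.shape[1]
    return covered / total

# toy example: two layers, three test inputs
acts = [np.array([[0.9, 0.1], [0.2, 0.3], [0.1, 0.2]]),
        np.array([[0.6, 0.0, 0.4], [0.1, 0.7, 0.2], [0.3, 0.2, 0.1]])]
print(neuron_coverage(acts))  # 3 of 5 neurons exceed 0.5 -> 0.6
```

A higher coverage score suggests the test set exercises more of the network's internal states, which is why coverage-style metrics are used to judge how thoroughly a set of (possibly adversarial) test examples probes a model.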
For the emerging autonomous swarm technology, from the perspective of systems science and Systems Engineering (SE), novel methodologies and elements are needed to aggregate multiple systems into a group, distinguishing the general components with specific functions. Here, we aim to demonstrate their existence in swarm development processes. The inspiration for our approach originates from the integration of swarm ontology, multiparadigm modeling, multiagent systems, cyber-physical systems, etc. We therefore chose model-driven architecture as a framework to provide a method of model representation across multiple levels of abstraction and composition. The autonomous strategic mechanism is defined and formed in parallel with Concept of Operations (ConOps) analysis and systems design, so as to effectively address the cognitive problem of emergence caused by nonlinear causation between individual and whole behaviors. Our approach highlights the use of model-based processes and their artifacts in the swarm mechanism to integrate operational and functional models, connecting the macro- and micro-aspects in formalism to synthesize a whole with its expected goals, which is then verified and validated within a live-virtual-constructive (L-V-C) simulation environment.