A recently developed neural network architecture based on ℓ∞-distance functions naturally possesses certified robustness by construction. Despite its excellent theoretical properties, the model has so far only achieved performance comparable to conventional networks. In this paper, we significantly boost the certified robustness of ℓ∞-distance nets through a careful analysis of the training process. In particular, we show that the ℓp-relaxation, a crucial technique for overcoming the non-smoothness of the model, leads to an unexpectedly large Lipschitz constant in the early stage of training. This makes optimization with the hinge loss insufficient and produces sub-optimal solutions. Given these findings, we propose a simple approach to address these issues: a novel objective function that combines a scaled cross-entropy loss with a clipped hinge loss. Our experiments show that with the proposed training strategy, the certified accuracy of the ℓ∞-distance net improves dramatically from 33.30% to 40.06% on CIFAR-10 (ε = 8/255), significantly outperforming other approaches in this area. These results clearly demonstrate the effectiveness and potential of ℓ∞-distance nets for certified robustness.
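The objective described above combines a temperature-scaled cross-entropy term with a hinge loss on the prediction margin that is clipped so it saturates beyond a threshold. The sketch below is an illustrative NumPy implementation under assumptions of my own: the function name, the parameters `s` (logit scale), `theta` (clipping threshold), and `lam` (mixing weight), and the exact combination are hypothetical and not taken from the paper.

```python
import numpy as np

def scaled_ce_plus_clipped_hinge(logits, label, s=1.0, theta=1.0, lam=0.5):
    """Illustrative sketch of a scaled cross-entropy plus clipped hinge loss.

    s     -- temperature scaling the logits before cross-entropy
    theta -- margin value at which the hinge loss is clipped
    lam   -- mixing weight between the two terms
    (all names and default values are assumptions, not from the paper)
    """
    # scaled cross-entropy, computed in a numerically stable way
    z = s * logits
    z = z - z.max()
    log_probs = z - np.log(np.exp(z).sum())
    ce = -log_probs[label]

    # margin of the true class over the strongest other class
    others = np.delete(logits, label)
    margin = logits[label] - others.max()

    # hinge on the margin, clipped so it saturates at theta
    hinge = np.clip(theta - margin, 0.0, theta)

    return lam * ce + (1.0 - lam) * hinge
```

A correct, confident prediction (large positive margin) zeroes out the hinge term and leaves only a small cross-entropy contribution, while the clipping caps the penalty on badly misclassified examples so they do not dominate training.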
It is well known that standard neural networks, even those with high classification accuracy, are vulnerable to small ℓ∞-norm-bounded adversarial perturbations. Although many attempts have been made, most previous works can only provide empirical verification of a defense against a particular attack method, or can only develop a certified guarantee of model robustness in limited scenarios. In this paper, we seek a new approach to developing a theoretically principled neural network that inherently resists ℓ∞ perturbations. In particular, we design a novel neuron that uses the ℓ∞-distance as its basic operation (which we call the ℓ∞-dist neuron), and show that any neural network constructed from ℓ∞-dist neurons (called an ℓ∞-dist net) is naturally a 1-Lipschitz function with respect to the ℓ∞-norm. This directly provides a rigorous guarantee of certified robustness based on the margin of the prediction outputs. We also prove that such networks have sufficient expressive power to approximate any 1-Lipschitz function, with a robust generalization guarantee. Our experimental results show that the proposed network is promising. Using ℓ∞-dist nets as basic building blocks, we consistently achieve state-of-the-art performance on commonly used datasets: 93.09% certified accuracy on MNIST (ε = 0.3), 79.23% on Fashion-MNIST (ε = 0.1), and 35.10% on CIFAR-10 (ε = 8/255). Unlike the standard neuron design that uses a linear transformation followed by a non-linear activation, the ℓ∞-dist neuron is purely based on computing the ℓ∞-distance between its input and its parameters.
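The ℓ∞-dist neuron replaces the usual dot-product-plus-activation with an ℓ∞-distance computation, and the 1-Lipschitz property follows directly from the reverse triangle inequality. A minimal NumPy sketch, assuming a single neuron with parameter vector `w` and scalar bias `b` (the function signature is illustrative, not the paper's exact parameterization):

```python
import numpy as np

def linf_dist_neuron(x, w, b=0.0):
    """One l_inf-dist neuron: outputs the l_inf distance between the
    input x and the parameter vector w, plus a bias.
    (Signature and bias placement are an illustrative assumption.)"""
    return np.max(np.abs(x - w)) + b

# 1-Lipschitzness w.r.t. the l_inf norm: by the reverse triangle
# inequality, |f(x) - f(y)| <= max_i |x_i - y_i| for any inputs x, y,
# so a bounded input perturbation shifts the output by at most as much.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
x = rng.normal(size=8)
y = x + rng.uniform(-0.1, 0.1, size=8)   # l_inf perturbation of size <= 0.1
assert abs(linf_dist_neuron(x, w) - linf_dist_neuron(y, w)) <= np.max(np.abs(x - y)) + 1e-12
```

Because composing 1-Lipschitz maps is again 1-Lipschitz, a network stacked entirely from such neurons inherits the property, which is what makes the output margin a direct robustness certificate.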