As deep learning systems are widely adopted in safety- and security-critical applications, such as autonomous vehicles and banking systems, malicious faults and attacks become a tremendous concern, potentially leading to catastrophic consequences. In this paper, we present the first study of physical fault injection attacks on Deep Neural Networks (DNNs), using a laser injection technique on embedded systems. In particular, our exploratory study targets four activation functions widely used in DNN development, the main building blocks that give DNNs their non-linear behavior: ReLU, softmax, sigmoid, and tanh. Our results show that by targeting these functions, it is possible to cause a misclassification by injecting faults into the hidden layer of the network. This result has practical implications for real-world applications, where faults can be introduced by simpler means (such as altering the supply voltage).
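To illustrate the fault model at a high level, the following sketch (our own simplified NumPy illustration, not the paper's laser setup; the network shape and fault value are assumptions) corrupts a single hidden-layer ReLU activation in a hypothetical toy network and checks whether the predicted class flips:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy network: 4 inputs -> 8 hidden neurons (ReLU) -> 3 classes.
    W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
    W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def forward(x, faulty_neuron=None, fault_value=10.0):
        h = np.maximum(0.0, W1 @ x + b1)      # ReLU hidden layer
        if faulty_neuron is not None:
            h[faulty_neuron] = fault_value    # simulated fault: saturate one activation
        return softmax(W2 @ h + b2)

    x = rng.normal(size=4)
    clean_class = forward(x).argmax()
    for n in range(8):
        faulted_class = forward(x, faulty_neuron=n).argmax()
        if faulted_class != clean_class:
            print(f"fault on hidden neuron {n}: class {clean_class} -> {faulted_class}")

In this simplified model, a single corrupted activation is often enough to flip the output class, which is the effect the laser-induced faults exploit.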
Profiled side-channel attacks represent a practical threat to digital devices, with the potential to disrupt the foundations of e-commerce, the Internet of Things (IoT), and smart cities. In a profiled side-channel attack, the adversary gains knowledge about the target device by getting access to a cloned device. Although these two devices differ in real-world scenarios, a large part of the research literature simplifies the setting by using a single device for both profiling and attacking, conveniently ignoring the portability issue to ease the experimental procedure. In parallel to these developments, recent literature applies machine learning techniques, demonstrating excellent performance in profiled side-channel attacks; again, portability is neglected. In this paper, we consider realistic side-channel scenarios and commonly used machine learning techniques to evaluate the influence of portability on the efficacy of an attack. Our experimental results show that portability plays an important role and should not be disregarded, as ignoring it leads to a significant overestimate of attack efficiency, easily by an order of magnitude. After establishing the importance of portability, we propose a new model, the Multiple Device Model (MDM), which formally incorporates device-to-device variation during a profiled side-channel attack. We show through experimental studies how machine learning and the MDM significantly enhance the capacity for practical side-channel attacks. More precisely, we demonstrate how the MDM can improve the performance of an attack by an order of magnitude, completely negating the influence of portability.
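The portability effect is easy to reproduce in simulation. The sketch below is a minimal illustration under strong assumptions (device variation modeled as a per-device offset and gain on one-dimensional leakage, template-style nearest-mean profiling); it is not the paper's MDM implementation, but it shows how pooling profiling traces from several devices, in the spirit of the MDM, transfers better to an unseen attack device than a single-device profile:

    import numpy as np

    rng = np.random.default_rng(1)
    LABELS, NOISE = range(4), 0.3

    def traces(offset, gain, n_per_label=500):
        # Simulated 1-D leakage per label: label-proportional signal plus a
        # device-specific offset/gain and Gaussian noise (a crude stand-in
        # for real power traces).
        xs = [gain * lab + offset + NOISE * rng.normal(size=n_per_label) for lab in LABELS]
        ys = [np.full(n_per_label, lab) for lab in LABELS]
        return np.concatenate(xs), np.concatenate(ys)

    def profile(xs, ys):
        # Template-style profiling: mean leakage per label.
        return {lab: xs[ys == lab].mean() for lab in LABELS}

    def accuracy(templates, xs, ys):
        preds = np.array([min(templates, key=lambda l: abs(x - templates[l])) for x in xs])
        return (preds == ys).mean()

    # One profiling device vs. three pooled profiling devices (MDM-style).
    single = traces(0.0, 1.0)
    pooled = [traces(0.0, 1.0), traces(0.6, 0.75), traces(0.4, 0.85)]
    xm, ym = (np.concatenate(a) for a in zip(*pooled))

    xa, ya = traces(0.5, 0.8)   # attack device differs from every profiling device

    print("single-device profile:", accuracy(profile(*single), xa, ya))
    print("MDM-style pooled profile:", accuracy(profile(xm, ym), xa, ya))

Because the pooled templates average over several devices, they sit closer to the unseen attack device's leakage, which is the intuition behind incorporating device-to-device variation into the profiling phase.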
Modern and future substations are intended to be more interconnected, leveraging communication standards like IEC 61850-9-2 and the associated abstract data models and communication services such as GOOSE, MMS, and SMV. Such interconnection would enable fast and secure data transfer and the sharing of analytics information for purposes like wide-area monitoring, faster outage recovery, blackout prevention, and distributed state estimation. This requires a strong focus on communication security, both at the system level and at the embedded device level. Although communication-level security is addressed in IEC 62351, implementation attacks on embedded systems are not considered. Since embedded systems form the core of the smart grid, in this paper we take a deeper look at the impact of implementation attacks on substation security. An overview of potential exploits is first provided. This is followed by a case study in which implementation attacks, namely malicious fault injection and hardware Trojans, are used to compromise a substation-level intelligent electronic device (IED). The studied scenario extends implementation attacks beyond their usual exploit of confidentiality to affect power grid integrity and availability.
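As a minimal sketch of how a fault attack can turn a message-security mechanism into an integrity and availability problem, the hypothetical handler below (our own illustration; the key, message format, and instruction-skip fault model are assumptions, not details from the paper) authenticates a GOOSE-style trip command with an HMAC, and a simulated instruction-skip fault bypasses the check:

    import hmac, hashlib

    KEY = b"shared-iec62351-key"   # hypothetical pre-shared key

    def verify(msg: bytes, tag: bytes) -> bool:
        expected = hmac.new(KEY, msg, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    def handle_goose(msg: bytes, tag: bytes, fault_skip_check: bool = False):
        # Simplified GOOSE-style handler: act on a trip command only if the
        # message authenticates. fault_skip_check models an instruction-skip
        # fault that bypasses the verification branch on the IED.
        if not fault_skip_check and not verify(msg, tag):
            return "message rejected"
        if msg == b"TRIP":
            return "breaker opened"   # integrity/availability impact
        return "no action"

    forged = (b"TRIP", b"\x00" * 32)                      # attacker-forged, bad tag
    print(handle_goose(*forged))                          # -> message rejected
    print(handle_goose(*forged, fault_skip_check=True))   # -> breaker opened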
Machine learning has become mainstream across industries, and numerous examples have proved its validity for security applications. In this work, we investigate how to reverse engineer a neural network using only power side-channel information. To this end, we consider a multilayer perceptron as the machine learning architecture of choice and assume a non-invasive, eavesdropping attacker capable of measuring only passive side-channel leakages such as power consumption, electromagnetic radiation, and reaction time. We conduct all experiments on real data and common neural network architectures in order to properly assess the applicability and extendability of these attacks. Practical results are shown on an ARM Cortex-M3 microcontroller. Our experiments show that the side-channel attacker can obtain the following information: the activation functions used in the architecture, the number of layers and the number of neurons per layer, the number of output classes, and the weights of the neural network. Thus, the attacker can effectively reverse engineer the network using side-channel information. Next, we show that once the attacker knows the neural network architecture, he/she can also recover the inputs to the network with only a single-shot measurement. Finally, we discuss several mitigations one could use to thwart such attacks.
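A rough analogue of the timing part of this attack can be sketched in pure Python (an illustration only; the real attack measures power and electromagnetic traces on the microcontroller, and desktop timings are far noisier): profile the runtime of candidate activation functions, then classify an unknown activation by its closest timing profile:

    import math, time, statistics

    def relu(x):    return x if x > 0.0 else 0.0
    def sigmoid(x): return 1.0 / (1.0 + math.exp(-x))
    def tanh(x):    return math.tanh(x)

    def measure(fn, xs, repeats=200):
        # Median runtime of evaluating fn over xs (a stand-in for the
        # attacker's averaged side-channel timing measurement).
        ts = []
        for _ in range(repeats):
            t0 = time.perf_counter()
            for x in xs:
                fn(x)
            ts.append(time.perf_counter() - t0)
        return statistics.median(ts)

    xs = [0.001 * i - 2.0 for i in range(4000)]   # inputs spanning [-2, 2)
    baseline = {name: measure(fn, xs) for name, fn in
                [("relu", relu), ("sigmoid", sigmoid), ("tanh", tanh)]}

    # "Unknown" activation inside a black-box layer: classify it by picking
    # the profiled function whose runtime is closest.
    unknown = measure(sigmoid, xs)
    guess = min(baseline, key=lambda n: abs(baseline[n] - unknown))
    print(baseline, "-> guessed activation:", guess)

The same profile-then-match idea extends to the other recovered properties (layer and neuron counts, weights), with the timing profile replaced by power or electromagnetic templates.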