Most machine learning methodologies operate under the assumption that their environment is benign. However, this assumption does not always hold: it is often advantageous for adversaries to maliciously modify the training data (poisoning attacks) or the test data (evasion attacks). Such attacks can be catastrophic given the growth and penetration of machine learning applications in society. There is therefore a need to secure machine learning, enabling its safe adoption in adversarial settings such as spam filtering, malware detection, and biometric recognition. This paper presents a taxonomy and survey of attacks against systems that use machine learning. It organizes the body of knowledge in adversarial machine learning so as to identify the aspects to which researchers from different fields can contribute. The taxonomy identifies attacks that share key characteristics and can therefore potentially be addressed by the same defense approaches. The proposed taxonomy thus makes it easier to understand the existing attack landscape and supports the development of defense mechanisms, which are not themselves investigated in this survey. The taxonomy is also leveraged to identify open problems that can lead to new research areas within the field of adversarial machine learning.
Electronic health record (EHR) management systems require effective technologies when health information is exchanged. Current management approaches often face risks that may expose medical record storage solutions to common security attack vectors. Healthcare-oriented blockchain solutions, however, can provide a decentralized, anonymous, and secure approach to EHR handling. This paper presents PREHEALTH, a privacy-preserving EHR management solution that uses distributed ledger technology and Identity Mixer (Idemix) anonymous credentials. The paper describes a proof-of-concept implementation built on the Hyperledger Fabric permissioned blockchain framework. The proposed solution stores patient records effectively whilst providing anonymity and unlinkability. Experimental performance evaluation results demonstrate the scheme's efficiency and feasibility for real-world-scale deployment.
A large number of smart devices in Internet of Things (IoT) environments communicate via different messaging protocols. Message Queuing Telemetry Transport (MQTT) is a widely used publish–subscribe protocol for communicating sensor and event data. The publish–subscribe strategy makes MQTT more attractive to intruders and thus increases the number of possible attacks against it. In this paper, we propose a Deep Neural Network (DNN) for intrusion detection in MQTT-based networks and compare its performance with traditional machine learning (ML) algorithms, namely Naive Bayes (NB), Random Forest (RF), k-Nearest Neighbour (kNN), Decision Tree (DT), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRUs). Performance is evaluated on two publicly available datasets: (1) MQTT-IoT-IDS2020 and (2) a dataset with three different attack types, namely Man-in-the-Middle (MitM), network intrusion, and Denial of Service (DoS). MQTT-IoT-IDS2020 provides features at three abstraction levels: Uni-Flow, Bi-Flow, and Packet-Flow. The results for the first dataset with binary classification show that the DNN-based model achieved accuracies of 99.92%, 99.75%, and 94.94% for Uni-Flow, Bi-Flow, and Packet-Flow, respectively. In the multi-label classification case, these accuracies dropped to 97.08%, 98.12%, and 90.79%, respectively. On the second dataset, the proposed DNN model attains the highest accuracy of 97.13%, outperforming the LSTM and GRU models.
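As a rough illustration of the DNN approach described in this abstract (the paper's actual architecture, hyperparameters, and dataset features are not reproduced here), the following sketch trains a small feedforward network on synthetic flow-style features for binary benign/attack classification. All names, layer sizes, and data values are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyDNN:
    """A small two-layer feedforward network for binary flow classification."""

    def __init__(self, n_features, n_hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)             # hidden layer
        return sigmoid(self.h @ self.W2 + self.b2).ravel()  # attack probability

    def fit(self, X, y, lr=0.5, epochs=300):
        for _ in range(epochs):
            p = self.forward(X)
            d2 = (p - y)[:, None] / len(y)       # cross-entropy gradient at output
            gW2, gb2 = self.h.T @ d2, d2.sum(0)
            dh = (d2 @ self.W2.T) * (1.0 - self.h ** 2)  # backprop through tanh
            gW1, gb1 = X.T @ dh, dh.sum(0)
            self.W2 -= lr * gW2; self.b2 -= lr * gb2
            self.W1 -= lr * gW1; self.b1 -= lr * gb1

    def predict(self, X):
        return (self.forward(X) >= 0.5).astype(int)

# Synthetic stand-in for flow features: benign vs. attack traffic.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(3, 1, (100, 4))])
y = np.r_[np.zeros(100), np.ones(100)]
net = TinyDNN(n_features=4)
net.fit(X, y)
accuracy = (net.predict(X) == y).mean()
```

Real MQTT intrusion detection would replace the synthetic arrays with extracted Uni-Flow, Bi-Flow, or Packet-Flow feature vectors and a held-out test split.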
When training a machine learning model, it is standard procedure for the researcher to have full knowledge of both the data and the model. However, this arrangement engenders a lack of trust between data owners and data scientists: data owners are justifiably reluctant to relinquish control of private information to third parties. Privacy-preserving techniques distribute computation so that data remains in the control of the owner while learning takes place. However, architectures distributed amongst multiple agents introduce an entirely new set of security and trust complications, including data poisoning and model theft. This paper outlines a distributed infrastructure used to facilitate peer-to-peer trust between distributed agents that collaboratively perform a privacy-preserving workflow. Our prototype sets industry gatekeepers and governance bodies as credential issuers; before participating in the distributed learning workflow, agents must first obtain valid credentials, which excludes malicious actors. We detail a proof of concept using Hyperledger Aries, Decentralised Identifiers (DIDs) and Verifiable Credentials (VCs) to establish a distributed trust architecture during a privacy-preserving machine learning experiment. Specifically, we utilise secure and authenticated DID communication channels to facilitate a federated learning workflow on mental health care data.
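The federated learning workflow mentioned in this abstract can be sketched at its computational core: each agent trains locally on data that never leaves its control, and a coordinator averages the resulting weights. This is a minimal federated-averaging sketch only; the paper's credential issuance and DID communication layer is deliberately omitted, and all function names and data are illustrative assumptions:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=20):
    """One agent's local training: gradient descent on squared error.
    The raw data (X, y) never leaves the agent; only weights are shared."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(w, agents, rounds=10):
    """Coordinator loop: broadcast weights, collect local updates, average."""
    for _ in range(rounds):
        w = np.mean([local_update(w, X, y) for X, y in agents], axis=0)
    return w

# Three agents holding private shards generated from the same linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
agents = []
for _ in range(3):
    X = rng.normal(0, 1, (50, 2))
    agents.append((X, X @ true_w + rng.normal(0, 0.1, 50)))
w = federated_average(np.zeros(2), agents)
```

In the paper's setting, the coordinator would additionally verify each agent's credentials over an authenticated DID channel before accepting its update.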
The Internet of Vehicles (IoV) is an application of the Internet of Things (IoT) that connects smart vehicles to the internet and to each other. With the emergence of IoV technology, consumers have shown great interest in smart vehicles. However, the rapid growth of IoV has also created many security and privacy challenges that can lead to fatal accidents. To reduce smart vehicle accidents and detect malicious attacks in vehicular networks, several researchers have presented machine learning (ML)-based models for intrusion detection in IoT networks. However, an efficient algorithm capable of real-time detection is needed for IoV. This article proposes a hybrid deep learning (DL) model for cyber attack detection in IoV, based on long short-term memory (LSTM) and gated recurrent unit (GRU) layers. The performance of the proposed model is analyzed using two datasets: a combined DDoS dataset that contains CIC DoS, CIC-IDS 2017, and CSE-CIC-IDS 2018, and a car-hacking dataset. The experimental results demonstrate that the proposed algorithm achieves attack detection accuracies of 99.5% and 99.9% for the DDoS and car-hacking datasets, respectively. The other performance metrics (precision, recall, and F1-score) also verify the superior performance of the proposed framework.
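To make the recurrent building block of such hybrid models concrete, here is a minimal forward-only GRU cell in NumPy, showing the update gate, reset gate, and candidate-state interpolation that define the unit. This does not reproduce the paper's actual hybrid LSTM+GRU architecture or trained weights; all sizes and inputs are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GRUCell:
    """Minimal gated recurrent unit (forward pass only)."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        init = lambda *shape: rng.normal(0.0, 0.1, shape)
        self.n_hidden = n_hidden
        self.Wz, self.Uz, self.bz = init(n_in, n_hidden), init(n_hidden, n_hidden), np.zeros(n_hidden)
        self.Wr, self.Ur, self.br = init(n_in, n_hidden), init(n_hidden, n_hidden), np.zeros(n_hidden)
        self.Wh, self.Uh, self.bh = init(n_in, n_hidden), init(n_hidden, n_hidden), np.zeros(n_hidden)

    def step(self, x, h):
        z = sigmoid(x @ self.Wz + h @ self.Uz + self.bz)             # update gate
        r = sigmoid(x @ self.Wr + h @ self.Ur + self.br)             # reset gate
        h_cand = np.tanh(x @ self.Wh + (r * h) @ self.Uh + self.bh)  # candidate state
        return (1.0 - z) * h + z * h_cand                            # interpolate old/new

    def run(self, xs):
        """Consume a sequence of feature vectors; return the final hidden state."""
        h = np.zeros(self.n_hidden)
        for x in xs:
            h = self.step(x, h)
        return h

# Summarise a short sequence of (illustrative) traffic-feature vectors.
cell = GRUCell(n_in=3, n_hidden=5)
final_state = cell.run(np.ones((4, 3)))
```

In a full detector, the final hidden state (or a stacked LSTM-then-GRU pipeline, as the paper proposes) would feed a dense classification head over the traffic classes.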