The Internet of Vehicles (IoV) can facilitate seamless connectivity between connected vehicles (CVs), autonomous vehicles (AVs), and other IoV entities. Intrusion Detection Systems (IDSs) for IoV networks can rely on machine learning (ML) to protect the in-vehicle network from cyber-attacks. Blockchain-based Federated Forests (BFFs) can be used to train ML models on data from IoV entities while protecting the confidentiality of the data and reducing the risk of data tampering. However, ML models remain vulnerable to evasion, poisoning, and exploratory attacks via adversarial examples. The BFF-IDS offers a partial defence against poisoning but has no countermeasure for evasion attacks, the most common threat faced by ML models. Moreover, the impact of adversarial example transferability on CAN IDSs has remained largely untested. This paper investigates the impact of various adversarial example attacks on the BFF-IDS. We also investigate the effectiveness and resilience of a statistical adversarial detector in detecting these attacks, and a subsequent countermeasure that augments the model with the detected samples. Our results establish that the BFF-IDS is highly vulnerable to adversarial example attacks. The statistical adversarial detector and the subsequent BFF-IDS augmentation (BFF-IDS(AUG)) provide an effective defence against adversarial examples. Consequently, integrating the statistical adversarial detector with BFF-IDS augmentation using the detected adversarial samples provides a sustainable security framework against adversarial examples and other unknown attacks.
INDEX TERMS Adversarial examples, artificial intelligence (AI), blockchain, controller area network (CAN), federated learning, intrusion detection system (IDS).