Adversarial examples have emerged as a key threat to machine-learning-based systems, especially those that employ deep neural networks. Unlike a large body of research in this area, this Keynote article accounts for the semantics, context, and specifications of the complete system with machine learning components in resource-constrained environments.

-Muhammad Shafique, Technische Universität Wien

Machine learning (ML) algorithms, fueled by massive amounts of data, are increasingly being utilized in several domains, including healthcare, finance, and transportation. Models produced by ML algorithms, especially deep neural networks (DNNs), are being deployed in domains where trustworthiness is a major concern, such as automotive systems [1], finance [2], healthcare [3], and cybersecurity [4]. Of particular concern is the use of ML (including deep learning) in cyber-physical systems (CPSs) [5], such as autonomous vehicles, where the presence of an adversary can have serious consequences. Yet when these algorithms are designed and deployed in critical CPSs, the presence of an active adversary is often ignored.