This work is a result of a collaboration between the project "KI-Flex" (project number 16ES1027), funded by the German Federal Ministry of Education and Research (BMBF) within the funding program "Microelectronics from Germany: Innovation Driver"; the project "KoSi" (project number 01MM20011C), funded by the German Federal Ministry for Digital and Transport (BMDV) within the funding program "Automatisiertes, Vernetztes Fahren" (Automated, Connected Driving); and the project "TEACHING" (project number 871385), funded by the Horizon 2020 program.

ABSTRACT As more and more applications rely on Artificial Intelligence (AI), it becomes inevitable to explore the associated safety and security risks, especially for sensitive applications where physical integrity is at stake. Among the most interesting challenges that come with AI are adversarial attacks, a well-researched problem in the visual domain, where a small change to the input data can cause a Neural Network (NN) to make an incorrect prediction. In the radar domain, AI is not yet as widespread, but the results that AI applications produce are very promising, which is why more and more applications are being built on it. This work presents three attack methods that are particularly suitable for the radar domain. The developed algorithms generate universal adversarial attack patches for all kinds of NN-based radar applications. Besides computing universal patches, the main goal of the algorithms is to identify sensitive areas in the raw radar input data, which can then be examined more closely. To the best of our knowledge, this is the first work that computes universal patches on raw radar data, which is of great importance especially for interference analysis. The developed algorithms have been verified on two data sets.
One stems from the field of autonomous driving, where the attacks lead to a steering misprediction of up to 0.3 for the steering value, which lies within [-1, 1]; the results were also successfully tested on a demonstrator. The other data set originates from a gesture recognition task, where the attacks decreased the accuracy from originally 97.0% to a minimum of 16.5%, only slightly above the 12.5% accuracy of a purely random prediction.

INDEX TERMS Adversarial attacks, artificial neural networks, autonomous vehicles, edge computing, object recognition, radar applications, real-time systems
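The core idea of a universal adversarial patch (one shared perturbation applied to the same region of every input) can be sketched as follows. This is a minimal illustrative example on synthetic data: the "radar" frames, the linear surrogate model, the patch size, and its placement are all assumptions for demonstration, not the paper's actual networks or data.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(100, 16, 16))      # toy stand-in for raw radar frames
w = rng.normal(size=(16, 16))           # toy linear classifier weights
y = (np.tensordot(X, w, axes=2) > 0).astype(float)  # labels of the clean data

R, C, P = 4, 4, 4                       # assumed patch position and size

def logits(frames):
    return np.tensordot(frames, w, axes=2)

def apply_patch(frames, patch):
    """Overwrite the same region of every frame with the universal patch."""
    out = frames.copy()
    out[:, R:R + P, C:C + P] = patch
    return out

patch = np.zeros((P, P))
for _ in range(50):
    p = 1.0 / (1.0 + np.exp(-logits(apply_patch(X, patch))))  # sigmoid
    # For logistic loss, d(loss)/d(patch) averaged over all frames reduces to
    # mean(p - y) * w_patch, because the patch region is shared by every frame.
    grad = np.mean(p - y) * w[R:R + P, C:C + P]
    patch += 0.1 * np.sign(grad)        # FGSM-style signed ascent step

def accuracy(frames):
    return float(np.mean((logits(frames) > 0).astype(float) == y))

print(accuracy(apply_patch(X, np.zeros((P, P)))))  # benign (zero) patch
print(accuracy(apply_patch(X, patch)))             # universal adversarial patch
```

Because one patch is optimized against the whole data set rather than a single input, it transfers across inputs, and the magnitude of the gradient over the patched region hints at which areas of the raw input the model is sensitive to.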