In the era of digitalization, IT landscapes keep growing in size, complexity, and interdependency. This amplifies the need to determine the current elements of an IT landscape for landscape management and planning as well as for failure analysis. For more than a decade, the field of enterprise architecture documentation has sought solutions that minimize the manual effort of building enterprise architecture models or that automate it entirely. We summarize the approaches presented in the last decade in a literature survey. Moreover, we present a novel, machine-learning-based approach to detect and identify applications in an IT landscape.
Reinforcement learning (RL) has achieved tremendous progress in solving various sequential decision-making problems, e.g., control tasks in robotics. However, RL methods often fail to generalize to safety-critical scenarios because policies overfit to their training environments. Robust adversarial reinforcement learning (RARL) was previously proposed to train an adversarial network that applies disturbances to the system, which improves robustness in test scenarios. A drawback of neural-network-based adversaries is that integrating system requirements is difficult without handcrafting sophisticated reward signals. Safety falsification methods, in contrast, search for a set of initial conditions and an input sequence under which the system violates a given property formulated in temporal logic. In this paper, we propose falsification-based RARL (FRARL), the first generic framework for integrating temporal-logic falsification into adversarial learning to improve policy robustness. With our falsification method, no extra reward function needs to be constructed for the adversary. We evaluate our approach on a braking assistance system and an adaptive cruise control system for autonomous vehicles. Experiments show that policies trained with a falsification-based adversary generalize better and violate the safety specification less often in test scenarios than policies trained without an adversary or with an adversarial network.
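To make the falsification step concrete, the sketch below shows a minimal random-search falsifier for a toy car-following scenario: it searches over initial conditions and disturbance (lead-vehicle acceleration) sequences for a counterexample that minimizes the robustness of a simple safety property "always keep at least d_min distance". All names, dynamics, parameters, and the random-search strategy are illustrative assumptions for this sketch; the abstract does not specify the paper's actual falsifier, simulator, or specification.

```python
# Illustrative sketch of the falsification step in a FRARL-style loop.
# Everything here (toy dynamics, simple_policy, random search) is a placeholder,
# not the authors' implementation.
import numpy as np


def stl_robustness(distances, d_min=2.0):
    """Robustness of the property G(distance >= d_min): the minimum
    margin over the trajectory; a negative value means a violation."""
    return min(d - d_min for d in distances)


def rollout(policy, disturbances, x0, dt=0.1):
    """Simulate a toy 1-D following scenario. State: gap to the lead
    vehicle and relative speed (lead minus ego). The adversary picks
    the lead vehicle's acceleration at every step."""
    distance, rel_speed = x0
    trajectory = []
    for lead_accel in disturbances:
        ego_accel = policy(distance, rel_speed)   # protagonist action
        rel_speed += (lead_accel - ego_accel) * dt
        distance += rel_speed * dt
        trajectory.append(distance)
    return trajectory


def falsify(policy, x0_bounds, horizon=50, budget=200, rng=None):
    """Random-search falsifier: sample initial conditions and disturbance
    sequences, keep the one with the lowest robustness (closest to, or
    already in, violation of the safety property)."""
    if rng is None:
        rng = np.random.default_rng(0)
    best = None
    for _ in range(budget):
        x0 = [rng.uniform(*x0_bounds[0]), rng.uniform(*x0_bounds[1])]
        dist_seq = rng.uniform(-3.0, 3.0, size=horizon)  # lead acceleration
        rob = stl_robustness(rollout(policy, dist_seq, x0))
        if best is None or rob < best[0]:
            best = (rob, x0, dist_seq)
    return best  # (robustness, counterexample initial state, disturbance seq)


def simple_policy(distance, rel_speed, k_d=0.5, k_v=1.0, target=10.0):
    """Stand-in for the learned protagonist: brake when the gap shrinks
    below the target or when closing in on the lead vehicle."""
    return float(np.clip(k_d * (distance - target) + k_v * rel_speed, -5.0, 3.0))


if __name__ == "__main__":
    x0_bounds = [(5.0, 20.0), (-2.0, 2.0)]  # initial gap [m], relative speed [m/s]
    rob, x0, dists = falsify(simple_policy, x0_bounds)
    print(f"worst-case robustness {rob:.3f} at x0={x0}")
    # In a FRARL-style loop one would alternate: retrain the policy on the
    # counterexamples found by the falsifier, then falsify the updated policy.
```

In this sketch the falsifier's robustness value plays the role that a handcrafted adversarial reward would play in RARL: the adversary simply minimizes it, so the safety specification drives the disturbance search directly.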