Artificially intelligent systems should make users' lives easier by supporting them in complex decisions or even making those decisions fully autonomously. At the time of writing, however, the processes and decisions within an intelligent system are usually not transparent to users: they do not know which data are used, for which purpose, and with what consequences. In short, there is a lack of the transparency that is essential for trust in intelligent systems. In AI development, the transparency and traceability of decisions are usually subordinated to performance and accuracy, or sometimes play no role at all. In this chapter, the authors describe what intelligent systems are and explain how users can be supported in specific situations by a context-based adaptive system. Against this background, the authors discuss the challenges intelligent systems face in creating transparency for users and in supporting user sovereignty. The authors then show which ethical and legal requirements intelligent systems have to meet and how existing approaches respond to them.