Artificial Intelligence, although in its infancy, is progressing at a fast pace. Its potential applications within the business structure have led economists and industry analysts to conclude that in the coming years it will become an integral part of the boardroom. This paper examines how AI can be used to augment the decision-making process of the board of directors and the possible legal implications of its deployment in the field of company law and corporate governance. After examining the three possible stages of AI use in the boardroom, based on a multidisciplinary approach, the advantages and pitfalls of using AI in the decision-making process are scrutinised. Moreover, as AI might be able to autonomously manage a company in the future, the legal feasibility of appointing an AI as a director and the enforceability of its actions are tested. Concomitantly, a change in the corporate governance paradigm is proposed for Smart Companies. Finally, following a comparative analysis of company and securities law, possible adaptations to the current directors' liability scheme when AI is used to augment the decisions of the board are investigated, and future legal solutions are proposed for the legislator.
The potential of artificial intelligence (AI) and its manifold applications have fueled the discussion around how AI can be used to facilitate sustainable objectives. However, the technical, ethical, and legal literature on how AI, including its design, training, implementation, and use, can be sustainable is rather limited. At the same time, consumers pay increasing attention to sustainability information, whereas businesses are increasingly engaging in greenwashing practices, especially in relation to digital products and services, raising concerns about the effectiveness of the existing consumer protection framework in this regard. The objective of this paper is to contribute to the discussion on sustainable AI from a legal and consumer protection standpoint, focusing on the environmental and societal pillars of sustainability. After analyzing the multidisciplinary literature available on the topic of the environmentally sustainable AI lifecycle, as well as the latest EU policies and initiatives regarding consumer protection and sustainability, we will examine whether the current consumer protection framework is sufficient to promote the sharing and substantiation of sustainability information in B2C contracts involving AI products and services. Moreover, we will assess whether AI-related EU initiatives can promote sustainable AI development. Finally, we will propose a set of recommendations capable of encouraging a sustainable and environmentally conscious AI lifecycle while enhancing information transparency among stakeholders, aligning the various EU policies and initiatives, and ultimately empowering consumers.
[Purpose] At the earliest stages of the AI lifecycle, the training, verification, and validation of machine learning and deep learning algorithms require vast datasets that usually contain personal data which, however, is not obtained directly from the data subject, while very often the controller is not in a position to identify the data subjects, or such identification may result in disproportionate effort. This situation raises the question of how the controller can comply with its obligation to provide information about the processing to the data subjects, especially when providing the information notice is impossible or requires a disproportionate effort. There is little to no guidance on the matter. The purpose of this paper is to address this gap by designing a clear risk-assessment methodology that can be followed by controllers when providing information to the data subjects is impossible or requires a disproportionate effort. [Methodology] After examining the scope of the transparency principle, Article 14, and its proportionality exemption in the training and verification stages of machine learning and deep learning algorithms through a doctrinal analysis, we assess whether existing tools and methodologies can be adapted to accommodate the GDPR requirement of carrying out a balancing test, in conjunction with, or independently of, a DPIA. [Findings] Based on an interdisciplinary analysis, comprising theoretical and descriptive material from a legal and technological point of view, we propose a risk-assessment methodology as well as a series of risk-mitigating measures to ensure the protection of the data subjects' rights and legitimate interests while fostering the uptake of the technology.
[Practical Implications] The proposed balancing exercise and additional measures are designed to assist entities training or developing AI, especially SMEs, within and outside the EEA, that wish to ensure and demonstrate the data protection compliance of their AI-based solutions.