2023
DOI: 10.37190/e-inf230101

Governance in Ethical, Trustworthy AI Systems: Extension of the ECCOLA Method for AI Ethics Governance Using GARP

Abstract: Background: The continuous development of artificial intelligence (AI) and its increasing rate of adoption by software startups call for governance measures to be implemented at the design and development stages to help mitigate AI governance concerns. Most AI ethical design and development tools rely mainly on AI ethics principles as the primary governance and regulatory instrument for developing ethical AI, and these principles in turn inform AI governance. However, AI ethics principles have been identified as insufficient for AI governance…

Cited by 5 publications (3 citation statements) | References 0 publications
“…However, the principles may not be solid enough to deal with developing AI technologies. Information robustness, back-up governance measures, and adaptive governance policies may be necessary to govern the complicated AI ethical problems in education (Agbese et al., 2023).…”
Section: Future Research Agenda
confidence: 99%
“…Moreover, they are expected to follow ethical norms, values and standards (Koniakou, 2023). Practitioners ought to be trustworthy, diligent and accountable in how they handle their intellectual capital and other resources including their information technologies, finances as well as members of staff, in order to overcome challenges, minimize uncertainties, risks and any negative repercussions (e.g., decreased human oversight in decision making, among others) (Agbese et al, 2023; Smuha, 2019).…”
Section: Artificial Intelligence Governance
confidence: 99%
“…AI models have to prevent such contingent issues from happening. Their developers' responsibilities are to improve the robustness of their automated systems and to make them as secure as possible, to reduce the chances of threats, including inadvertent irregularities, information leakages, and privacy violations such as data breaches, contamination, and poisoning by malicious actors (Agbese et al., 2023; Hamon et al., 2020).…”
Section: Artificial Intelligence Governance
confidence: 99%