2022
DOI: 10.22367/jem.2022.44.18
Stakeholder-accountability model for artificial intelligence projects

Abstract: Aim/purpose – This research presents a conceptual stakeholder accountability model for mapping the project actors to the conduct for which they should be held accountable in artificial intelligence (AI) projects. AI projects differ from other projects in important ways, including in their capacity to inflict harm and impact human and civil rights on a global scale. The in-project decisions are high stakes, and it is critical who decides the system's features. Even well-designed AI systems can be deployed in wa…

Cited by 5 publications (3 citation statements)
References 85 publications
“…The principle of beneficence (Cowls et al., 2021) can be used to explore whether the use of ChatGPT promotes the well-being of consumers, while the principle of non-maleficence (Jobin et al., 2019) can be used to investigate whether the use of ChatGPT avoids harm to consumers. Theories such as privacy (Bandara et al., 2020), fairness (Jiang, Cao, et al., 2022) and accountability (Miller, 2022) can also be applied to investigate specific ethical concerns related to ChatGPT.…”
Section: Future Research Agenda
confidence: 99%
“…Using a Watson Machine Learning service through AutoAI, a compute plan can be established that chooses a machine learning pathway as well as compute resources. Datasets can then be allocated to the compute plan, and the foundation for the AI solution is in place to begin the AutoAI and Explainable AI (XAI) process [23]. Next, the AutoAI process will assign predetermined or best-fit data types to the columns of the data set.…”
Section: Step 3: Initialize AutoAI in IBM Watson Cloud
confidence: 99%
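The column-typing pass this excerpt describes can be sketched generically. The function below is not the Watson AutoAI implementation, only a minimal stdlib illustration of inferring a best-fit type from a column of raw string values; the sample columns are hypothetical.

```python
# Generic sketch of a "best-fit data type" pass over dataset columns,
# in the spirit of the AutoAI step described above (illustrative only).
def infer_column_type(values):
    """Return 'integer', 'float', or 'text' for a column of raw strings."""
    def is_int(v):
        try:
            int(v)
            return True
        except ValueError:
            return False

    def is_float(v):
        try:
            float(v)
            return True
        except ValueError:
            return False

    if all(is_int(v) for v in values):
        return "integer"
    if all(is_float(v) for v in values):
        return "float"
    return "text"

# Hypothetical columns from an uploaded dataset.
print(infer_column_type(["34", "29", "41"]))      # integer
print(infer_column_type(["52000.0", "48000.5"]))  # float
print(infer_column_type(["yes", "no", "no"]))     # text
```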
“…Within AutoAI, experiment prediction settings allow the user to choose among prediction types such as binary classification, multi-class classification, and regression. Binary classification classifies data into two distinct categories within the column, multi-class classification allows for multiple distinct categories, and regression allows for a continuous set of values with a large range of possible outcomes [23]. Within this section of the experiment settings, the user can choose the optimization metric by which the tests across multiple algorithms will be initially judged and reported.…”
Section: Step 4: Moving Towards Explainable AI (XAI)
confidence: 99%
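The mapping from prediction type to a default optimization metric described in this excerpt can be sketched as follows. This is not the AutoAI interface itself; the dictionary keys, metric names, and the `choose_metric` helper are illustrative assumptions.

```python
# Illustrative mapping of prediction type to a default optimization
# metric, mirroring the experiment settings described above
# (hypothetical names, not the AutoAI API).
PREDICTION_TYPES = {
    "binary": "roc_auc",       # two distinct categories
    "multiclass": "accuracy",  # multiple distinct categories
    "regression": "r2",        # continuous range of outcomes
}

def choose_metric(prediction_type, override=None):
    """Return the optimization metric for a prediction type,
    letting the user override the default, as step 4 allows."""
    if prediction_type not in PREDICTION_TYPES:
        raise ValueError(f"unknown prediction type: {prediction_type}")
    return override or PREDICTION_TYPES[prediction_type]

print(choose_metric("binary"))              # roc_auc
print(choose_metric("regression", "rmse"))  # rmse
```

The override parameter reflects the user's ability to pick the metric against which candidate pipelines are initially judged and reported.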