2022
DOI: 10.48550/arxiv.2210.08906
Preprint

A.I. Robustness: a Human-Centered Perspective on Technological Challenges and Opportunities

Cited by 2 publications (3 citation statements)
References 0 publications

“…, X0) of the history X. Robustness against spurious features in trajectory prediction can be categorized into adversarial and natural robustness [26], [27].…”
Section: Robustness in Trajectory Prediction
mentioning, confidence: 99%
“…Hence, demanding algorithmic accountability can be understood as a governance function to either proactively avoid the negative impacts of the provision and use of ML systems or to reactively sanction accountable actors, if there have been any adverse effects caused by the systems (Novelli et al 2023). While previous research has extensively studied the requirements that are necessary to achieve the ethical development and use of ML systems (e.g., fairness (Feuerriegel et al 2020), interpretability (Lipton 2018), privacy (Liu et al 2022), robustness (Tocchetti et al 2022)), algorithmic accountability is focused on the demands that shape these requirements and the governance measures that responsible actors can take to fulfill them.…”
Section: Conceptual Foundations
mentioning, confidence: 99%
“…Finally, firms and developers should aim to technically address the well-known sources for negative outcomes of ML systems, such as robustness and safety, bias, and privacy. For each of these issues, researchers and practitioners have already started to propose technical measures to test and protect the algorithmic system against them (e.g., Liu et al 2022; Mehrabi et al 2021; Tocchetti et al 2022). Developers should make use of these to mitigate the unintended detrimental effects of ML systems and to avoid unnecessary accountability demands.…”
Section: Technical Accountability
mentioning, confidence: 99%