2022
DOI: 10.1109/tts.2022.3195114
Assessing Trustworthy AI in Times of COVID-19: Deep Learning for Predicting a Multiregional Score Conveying the Degree of Lung Compromise in COVID-19 Patients

Abstract: The paper's main contributions are twofold: to demonstrate how to apply the European Union High-Level Expert Group's (EU HLEG) guidelines for trustworthy AI in practice in the healthcare domain; and to investigate the research question of what "trustworthy AI" means at the time of the COVID-19 pandemic. To this end, we present the results of a post-hoc self-assessment to evaluate the trustworthiness of an AI system for predicting a multi-regional score conveying the degree of lung compromise …


Cited by 21 publications (16 citation statements)
References 31 publications
“…Consumers surveyed by MIT AGELAB indicated “little to some willingness to trust a diagnosis and follow a treatment plan developed by AI, allow a medical professional to use AI for recording data and as a decision support tool, use in-home monitoring on the health issues of their own, and trust an AI prediction on potential health issues and life expectancy” (MIT AGELAB, 2021 ). Also the medical practitioners are often sceptical or reluctant to rely on AI-delivered diagnosis (Allahabadi et al, 2022 ). Moreover, similar to the problems with non-representative datasets for training of machine learning models for face or emotion recognition discussed above, the datasets used for training diagnostic models also suffer from lack of proper representation in terms of age, as was shown in a study of a diagnostic model for detection of lung compromise in COVID-19 patients (Allahabadi et al, 2022 ).…”
Section: Automatic Decision-Making Systems (ADMS) in Healthcare
confidence: 99%
“…The analytical focus would be on the five US principles. However, previous assessment based on the EU Ethics Guidelines for Trustworthy AI has shown that the level of granularity and specificity offered by the seven requirements is very useful, and that the requirements are more closely related to the context of AI tools than the principles alone (e.g., Zicari, Ahmed, et al, 2021; Zicari, Brodersen, et al, 2021; Zicari, Brusseau, et al, 2021; Allahabadi, et al, 2022).…”
Section: Discussion
confidence: 99%
“…Z-Inspection, introduced by Zicari et al ( 31 ), is one such approach to assessing trustworthy AI, which follows the guidelines established by the HLEG. It consists of three phases: the set-up phase, the assess phase, and the resolve phase. The use of this comprehensive protocol has been demonstrated in a number of case studies, including predicting the risk of cardiovascular heart disease ( 31 ), machine learning as a supportive tool to recognize cardiac arrest in emergency calls ( 27 ), deep learning for skin lesion classification ( 32 ), and a deep learning system to aid radiologists in estimating and communicating the degree of damage in a patient’s lung as a result of COVID-19 ( 33 ). Other recent approaches for assessing whether an AI system is trustworthy include the trustworthy artificial intelligence implementation (TAII) framework ( 34 ), the assessment list for trustworthy AI applications produced by the HLEG ( 35 ), and a checklist proposed by Scott et al which includes 10 questions for clinicians to ask when assessing the viability of ML approaches for use in practice ( 36 ).…”
Section: Principles of Trustworthy Machine Learning Systems
confidence: 99%