Technologies Logicielles Architectures Des Systèmes 2022
DOI: 10.51257/a-v1-h5030

Explicabilité en Intelligence Artificielle ; vers une IA Responsable - Instanciation dans le domaine de la santé

Abstract: Essential both for effective adoption and for an informed, objective use of Artificial Intelligence (AI), explainability is a genuine bottleneck in the evolution of these technologies, particularly for machine learning and deep learning. Without real explainability of the proposed algorithms, these technologies will remain a black box for healthcare professionals (and not only them), researchers, engineers, and technicians, who assume (and will continue to assume) full responsibility…

Cited by 3 publications (3 citation statements) | References 23 publications
“…To overcome this problem, GradCAM++ and its variant XGrad-CAM were designed, with the former using second-order gradients to obtain activations that are independent of the object size, and the latter scaling the gradients by the normalized activation maps [7]. Thus, in this study, the results and application of DL, as well as explainable steps, are encouraging. However, there needs to be further consolidation in future experiments to establish the potential of this fast, noninvasive tool for routine applications in a clinical context as well as in-depth analysis to verify the explainable features as determined by the DL model using techniques such as GradCAM++ or XGrad-CAM.…”
supporting
confidence: 58%
“…Another consequence is that the localization often does not correspond to the whole object, but to parts of it. To overcome this problem, GradCAM++ and its variant XGradCAM were designed, with the former using second-order gradients to obtain activations that are independent of the object size, and the latter scaling the gradients by the normalized activation maps. Thus, in this study, the results and application of DL, as well as explainable steps, are encouraging.…”
mentioning
confidence: 78%
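For concreteness, the two weighting schemes contrasted in the quoted passages differ only in how each feature map's importance is derived from gradients. Below is a minimal sketch of the XGrad-CAM-style weighting, assuming a PyTorch model whose target convolutional layer's activations and class-score gradients have already been captured with hooks; the function name, tensor shapes, and output size are illustrative assumptions, not code from the cited papers.

```python
# Minimal, illustrative sketch of the XGrad-CAM weighting described above:
# channel weights are the gradients scaled by the per-channel normalized
# activation maps. `activations` and `gradients` are assumed to have been
# captured from the target conv layer with forward/backward hooks.
import torch
import torch.nn.functional as F

def xgrad_cam(activations: torch.Tensor,
              gradients: torch.Tensor,
              out_size=(224, 224)) -> torch.Tensor:
    """activations, gradients: (1, K, H, W) tensors for one input image."""
    eps = 1e-8
    # Normalize each feature map so its spatial values sum to 1, then use
    # the gradients weighted by that normalization as channel importances.
    norm = activations / (activations.sum(dim=(2, 3), keepdim=True) + eps)
    weights = (norm * gradients).sum(dim=(2, 3), keepdim=True)      # (1, K, 1, 1)
    # Weighted sum of feature maps; ReLU keeps only positive evidence.
    cam = F.relu((weights * activations).sum(dim=1, keepdim=True))  # (1, 1, H, W)
    # Upsample to input resolution and rescale to [0, 1] for overlay.
    cam = F.interpolate(cam, size=out_size, mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + eps)
```

Plain Grad-CAM would instead average the gradients spatially, while Grad-CAM++ derives pixel-wise weights from second-order gradients; in both cases, essentially only the `weights` computation above changes.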
“…From a technical point of view, one major disadvantage of commercial software is the user's lack of traceability, interpretability, and explainability on the trained deep learning models [29]. Therefore we started testing in-house traceable tools for NFT and NPs segmentation and detection [13].…”
Section: Tailored Open-source and Explainable Algorithms
mentioning
confidence: 99%