2022
DOI: 10.21203/rs.3.rs-2355147/v1
Preprint

FDA approved Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices: An updated 2022 landscape

Abstract: As artificial intelligence (AI) has advanced rapidly over the last decade, machine learning (ML)-based medical devices are increasingly used in healthcare. In this article, we performed an extensive search of the FDA database and analyzed FDA-approved Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices. We present all the listed AI/ML-Enabled Medical Devices according to date of approval, medical specialty, implementation modality, and anatomical sit…

Cited by 16 publications (1 citation statement). References 4 publications.
“…One team explained that “a desire to ensure we had an interpretable model further influenced our choice to pursue regression rather than tree-based models ( Engstrom et al ).” The other team explained that “most AI models that operate as “black-box models” are unsuitable for mission-critical domains, such as healthcare, because they pose risk scenarios where problems that occur can remain masked and therefore undetectable and unfixable” ( Harris et al ). This perspective offers a contrasting view from prior work examining the use of “black-box models” in clinical care ( 17 ), the limitations of current explainability methods ( 18 ), and the approach of regulators at the U.S. Food and Drug Administration ( 19 ). The research topic exposes the urgent need for research and policies that help organizations understand whether or not to prioritize AI software interpretability and explainability.…”
Section: Discussion
confidence: 99%
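
The quoted teams' preference for regression over tree-based "black-box" models rests on coefficient interpretability: a regression exposes per-feature effect estimates that can be inspected directly. The following is a minimal, hypothetical sketch of that property (not drawn from the cited works; the feature names and data are invented for illustration):

```python
# Hypothetical sketch of regression interpretability; synthetic data,
# illustrative feature names. Not from the cited works.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # 200 synthetic patients, 3 features
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient is a directly inspectable effect estimate (log-odds
# per unit of the feature) -- the interpretability property that the
# quoted teams cite as their reason for avoiding tree ensembles.
for name, coef in zip(["age", "blood_pressure", "glucose"], model.coef_[0]):
    print(f"{name}: {coef:+.2f} log-odds per unit")
```

A tree ensemble fit to the same data would typically predict as well or better, but its decision logic is distributed across many trees and cannot be read off as a single set of effect estimates, which is the trade-off the citation statement describes.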