2023
DOI: 10.1038/s41598-023-43856-7

An eXplainable Artificial Intelligence analysis of Raman spectra for thyroid cancer diagnosis

Loredana Bellantuono,
Raffaele Tommasi,
Ester Pantaleo
et al.

Abstract: Raman spectroscopy shows great potential as a diagnostic tool for thyroid cancer due to its ability to detect biochemical changes during cancer development. This technique is particularly valuable because it is non-invasive and label/dye-free. Compared to molecular tests, Raman spectroscopy analyses can more effectively discriminate malignant features, thus reducing unnecessary surgeries. However, one major hurdle to using Raman spectroscopy as a diagnostic tool is the identification of significant patterns an…

Cited by 15 publications (5 citation statements)
References: 63 publications
“…The algorithm learns to make predictions or decisions based on input data by generalizing from the labeled examples it has seen during training. Moreover, the XAI algorithm “SHapley Additive exPlanations” (SHAP) will be used to detect, for each patient, which features are more important for the ML algorithm in its classification (Bellantuono et al., 2023; Novielli et al., 2023). SHAP is an algorithm used in machine learning to explain the predictions made by complex models, particularly models like XGBoost, Random Forest, neural networks, and others.…”
Section: Discussion
confidence: 99%
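To make the statement above concrete, the following is a minimal sketch (not the cited study's code) of per-sample SHAP explanations for a tree-based classifier, assuming the shap and xgboost packages; the synthetic data, feature count, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: per-patient SHAP explanations for an XGBoost classifier.
# Data, labels, and model settings are hypothetical stand-ins.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                   # hypothetical feature matrix (e.g. spectral features)
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # hypothetical binary labels

model = xgb.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

# TreeExplainer computes SHAP values: each feature's additive
# contribution to an individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-patient explanation: features with the largest |SHAP value|
# drive that specific classification.
patient_idx = 0
top_features = np.argsort(np.abs(shap_values[patient_idx]))[::-1][:3]
print("Most influential features for sample 0:", top_features)
```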
“…Feature Reduction: Feature reduction is a crucial aspect of our approach. Techniques such as Principal Component Analysis (PCA) (Song et al., 2010) and wrapper methods like Boruta (Kursa et al., 2010; Bellantuono et al., 2023) will be employed. These methods effectively reduce the dimensionality of the feature space, allowing us to train models even with a limited number of instances.…”
Section: Methods and Analysis
confidence: 99%
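The two reduction strategies named in the statement can be sketched as follows, assuming the scikit-learn and boruta packages; the synthetic data and parameter choices are illustrative assumptions, not the cited pipeline.

```python
# Minimal sketch: PCA projection and Boruta wrapper-based feature selection.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from boruta import BorutaPy

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 50))             # hypothetical high-dimensional features
y = (X[:, 0] - X[:, 1] > 0).astype(int)    # hypothetical labels

# PCA: project onto the components that explain most of the variance.
X_pca = PCA(n_components=10).fit_transform(X)

# Boruta: keep only features that beat their shuffled "shadow" copies.
forest = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
boruta = BorutaPy(forest, n_estimators="auto", random_state=0)
boruta.fit(X, y)
X_selected = X[:, boruta.support_]

print("PCA-reduced shape:", X_pca.shape)
print("Boruta-selected shape:", X_selected.shape)
```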
“…This framework intricately involves data preprocessing, feature extraction, feature selection, and model construction, employing an array of ML algorithms. To ensure transparency and interpretability in our study, we embrace the principles of Explainable Artificial Intelligence (XAI) (Lombardi et al., 2021a, b; Bellantuono et al., 2023; Novielli et al., 2023). XAI not only enhances the trustworthiness of our models but also empowers clinicians to understand the rationale behind each prediction.…”
Section: Introduction
confidence: 99%
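The generic workflow described in that statement (preprocessing, then feature selection, then model construction) can be expressed as a single cross-validated pipeline; the sketch below uses scikit-learn with step choices that are assumptions for illustration, not the study's actual components.

```python
# Minimal sketch: preprocessing -> feature selection -> classifier,
# evaluated with cross-validation so selection stays inside each fold.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))          # hypothetical features
y = (X[:, 0] > 0).astype(int)           # hypothetical labels

pipeline = Pipeline([
    ("scale", StandardScaler()),                        # preprocessing
    ("select", SelectKBest(f_classif, k=10)),           # feature selection
    ("model", RandomForestClassifier(random_state=0)),  # model construction
])

scores = cross_val_score(pipeline, X, y, cv=5)
print("CV accuracy:", scores.mean())
```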
“…Despite the erroneous complexity of the data, the model was still able to yield very reliable outcomes. Interpretability [41] and sensitivity [42] analysis approaches were employed to ensure that the model’s predictions were explainable and therefore most practically useful in medical practice [43], thus showing the distinct contributions of each input domain to the predictive framework.…”
Section: Introduction
confidence: 99%