Feature extraction and selection from medical data are the basis of radiomics and image biomarker discovery across various architectures, including convolutional neural networks (CNNs). We herein describe the typical radiomics steps and the components of a CNN for both deep feature extraction and end-to-end approaches. We discuss the curse of dimensionality, along with the dimensionality reduction techniques used to mitigate it. Despite the outstanding performance of deep learning (DL) approaches, the use of handcrafted features instead of deep-learned features should be considered for each specific study. Dataset size is a key factor: large-scale datasets with low sample diversity can lead to overfitting, while limited sample sizes can yield unstable models. The dataset must be representative of all the “facets” of the clinical phenomenon or disease investigated. Access to high-performance computational resources, typically graphics processing units, is another key factor, especially for the training phase of deep architectures. We describe the advantages of multi-institutional federated/collaborative learning. When large language models are used, high stability is needed to avoid catastrophic forgetting in complex domain-specific tasks. We highlight that non-DL approaches provide model explainability superior to that of DL approaches; this gap motivates the adoption of explainable AI, including post hoc mechanisms.
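As an illustration of the deep feature extraction and dimensionality reduction steps discussed above, the following is a minimal sketch, assuming a torchvision ResNet-18 used as a frozen feature extractor and scikit-learn PCA; the image batch is synthetic and all names are illustrative assumptions, not taken from this work.

```python
# Sketch: deep feature extraction with a pretrained CNN backbone,
# followed by PCA to mitigate the curse of dimensionality.
# Synthetic data; shapes and names are illustrative assumptions.
import torch
import torchvision.models as models
from sklearn.decomposition import PCA

# Pretrained backbone used as a fixed (frozen) feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classification head
backbone.eval()

# Stand-in for a batch of preprocessed medical images (N, C, H, W).
images = torch.randn(32, 3, 224, 224)

with torch.no_grad():
    deep_features = backbone(images)  # shape: (32, 512)

# Reduce the 512-dimensional deep features to a few components.
pca = PCA(n_components=10)
reduced = pca.fit_transform(deep_features.numpy())
print(reduced.shape)  # (32, 10)
```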
Relevance statement
This work aims to provide the key concepts for processing imaging features to extract reliable and robust image biomarkers.
Key Points
The key concepts for processing imaging features to extract reliable and robust image biomarkers are provided.
The main differences between radiomics and representation learning approaches are highlighted.
The advantages and disadvantages of handcrafted versus learned features are weighed without losing sight of the clinical purpose of artificial intelligence models (an illustrative sketch of the handcrafted route follows below).
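To make the handcrafted route above concrete, here is a minimal sketch of first-order radiomic features computed inside a region of interest; the synthetic image, mask, and feature set are illustrative assumptions rather than the pipeline described in this work.

```python
# Sketch: handcrafted first-order features from an ROI.
# Synthetic data; the feature set is a small illustrative subset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
image = rng.normal(loc=100, scale=15, size=(64, 64))  # synthetic slice
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True  # synthetic region of interest

voxels = image[mask]
handcrafted = {
    "mean": voxels.mean(),
    "std": voxels.std(),
    "skewness": stats.skew(voxels),
    "kurtosis": stats.kurtosis(voxels),
    # Shannon entropy of the intensity histogram (32 bins).
    "entropy": stats.entropy(np.histogram(voxels, bins=32)[0] + 1e-9),
}
print(handcrafted)
```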
Graphical Abstract