2023
DOI: 10.1109/tnnls.2021.3107049

An Efficient Iterative Approach to Explainable Feature Learning

Abstract: This article introduces a new iterative approach to explainable feature learning. During each iteration, new features are generated, first by applying arithmetic operations to the input set of features. These are then evaluated in terms of the agreement between the probability distributions of their values for samples belonging to different classes. Finally, a graph-based approach for feature selection is proposed, which allows for selecting high-quality and uncorrelated features to be used in feature generation during the next…
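As a rough illustration of the pipeline described in the abstract, the following Python sketch performs one iteration: it generates candidate features with pairwise arithmetic operations, scores them by how strongly the class-conditional value distributions disagree, and keeps a small set of high-quality, weakly correlated features for the next round. The Kolmogorov-Smirnov statistic, the operator set {+, -, *, /}, the binary labels, and the greedy correlation filter are all stand-in assumptions, not the paper's actual measures or its graph-based selection.

```python
# Minimal sketch of one iteration of the described approach (illustrative assumptions,
# not the paper's exact method).
import itertools
import numpy as np
from scipy.stats import ks_2samp

def generate_features(X):
    """Apply pairwise arithmetic operations to the input features."""
    n, d = X.shape
    new_feats, names = [], []
    for i, j in itertools.combinations(range(d), 2):
        a, b = X[:, i], X[:, j]
        new_feats += [a + b, a - b, a * b, a / (b + 1e-12)]
        names += [f"f{i}+f{j}", f"f{i}-f{j}", f"f{i}*f{j}", f"f{i}/f{j}"]
    return np.column_stack(new_feats), names

def feature_quality(f, y):
    """Score a feature by how much its value distributions differ between the two classes.
    Assumes binary labels (0/1); the KS statistic is a stand-in for the paper's measure."""
    stat, _ = ks_2samp(f[y == 0], f[y == 1])
    return stat

def select_uncorrelated(F, scores, k=10, max_corr=0.9):
    """Greedy stand-in for the graph-based selection: keep high-quality features whose
    absolute correlation with already chosen ones stays below max_corr."""
    chosen = []
    for idx in np.argsort(scores)[::-1]:
        if all(abs(np.corrcoef(F[:, idx], F[:, j])[0, 1]) < max_corr for j in chosen):
            chosen.append(idx)
        if len(chosen) == k:
            break
    return chosen

def one_iteration(X, y, k=10):
    F, names = generate_features(X)
    scores = np.array([feature_quality(F[:, c], y) for c in range(F.shape[1])])
    keep = select_uncorrelated(F, scores, k=k)
    return F[:, keep], [names[c] for c in keep]   # fed into the next iteration
```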

Cited by 4 publications (7 citation statements)
References: 107 publications (116 reference statements)
“…Our method processes, e.g., 200 features in 5 seconds, but for larger input sets, it makes sense to preprocess them with some faster filtering. We use our efficient and reliable graph cut-based feature selection [58], summarised in Subsection 3.1. In Subsection 3.2, we discuss the idea of using DP and the encountered difficulties and introduce an iterative suboptimal alternating solution, where the order of feature processing is inverted in each iteration.…”
Section: Methods
confidence: 99%
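To make the quoted "alternating solution" more concrete, here is a minimal coordinate-descent-style sketch in Python; it is illustrative only and not taken from the cited paper. It assumes each feature gets a binary keep/drop decision, a user-supplied objective scores the currently selected subset, and the visiting order is reversed on every pass, as the quote describes.

```python
# Illustrative sketch of an alternating feature-selection pass with an inverted
# processing order per iteration (assumed structure, not the cited paper's code).
import numpy as np

def alternating_selection(F, objective, n_iters=20):
    """Revisit features one at a time, flipping each keep/drop decision if it improves
    the user-supplied objective; the visiting order is reversed on every pass."""
    d = F.shape[1]
    selected = np.ones(d, dtype=bool)
    order = list(range(d))
    for _ in range(n_iters):
        changed = False
        for j in order:
            trial = selected.copy()
            trial[j] = not trial[j]
            if not trial.any():          # never drop the last remaining feature
                continue
            if objective(F[:, trial]) > objective(F[:, selected]):
                selected = trial
                changed = True
        order.reverse()                  # invert the processing order each iteration
        if not changed:
            break
    return np.flatnonzero(selected)
```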
“…This section describes the graph cut-based feature selection approach presented in [58], which allows for extracting a subset of high-quality, dissimilar features. Depending on the defined feature estimation measure, it can be used for both classification and regression purposes.…”
Section: Graph Cut-based Feature Selection
confidence: 99%
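The graph cut-based selection of [58] is only summarized in the quote above, so the following Python sketch is a stand-in rather than the actual formulation: features become graph nodes weighted by an assumed per-feature quality score, edges connect strongly correlated features, and a greedy weighted-independent-set heuristic picks high-quality, mutually dissimilar features.

```python
# Illustrative graph-flavored stand-in for selecting high-quality, dissimilar features;
# the exact cut formulation of [58] is not reproduced here.
import numpy as np

def graph_select(F, quality, corr_threshold=0.8):
    """Pick high-quality features that are pairwise dissimilar.

    F       : (n_samples, n_features) feature matrix
    quality : (n_features,) per-feature quality scores, higher is better (assumed given)
    """
    d = F.shape[1]
    corr = np.abs(np.corrcoef(F, rowvar=False))          # feature-feature similarity
    adjacency = (corr > corr_threshold) & ~np.eye(d, dtype=bool)
    remaining = set(range(d))
    selected = []
    while remaining:
        best = max(remaining, key=lambda j: quality[j])   # best remaining feature
        selected.append(best)
        # drop the chosen feature and everything too similar to it
        remaining -= {best} | set(np.flatnonzero(adjacency[best]))
    return sorted(selected)
```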
“…Despite CNNs' extraordinary performance, they still lack a clear interpretation of their inner mechanisms [10,42,21]. This lack of transparency can be a disqualifying factor in scenarios where mistakes in interpretation can jeopardize human life and health, such as medical image processing or autonomous vehicles [20,38,34,32]. Therefore, it is highly desirable to find a way to understand and explain what exactly CNNs have learned during the training process [26,31,16].…”
Section: Introduction
confidence: 99%