2018
DOI: 10.1109/msp.2018.2856632

Utility Metrics for Assessment and Subset Selection of Input Variables for Linear Estimation [Tips & Tricks]

Abstract: This tutorial paper introduces the utility metric and its generalizations, which allow for a 'quick-and-dirty' quantitative assessment of the relative importance of the different input variables in a linear estimation model. In particular, we show how these metrics can be cheaply calculated, thereby making them very attractive for model interpretation, online signal quality assessment, or greedy variable selection. The main goal of this paper is to provide a transparent and consistent framework that consolidates…

Cited by 24 publications (64 citation statements)
References 12 publications
“…In the remainder of this paper, we will refer to this method as the decoder magnitude-based (DMB) greedy method or DMB-G. However, in [21] it was argued that the magnitudes of the entries in the decoder ŵ do not necessarily reflect the importance of the corresponding channels, as they are scaling dependent and do not properly take interactions across channels into account. Instead, it was argued to quantify the importance or 'utility' of a channel k by the increase in the least-squares error (LSE) if channel k were to be removed and the decoder fully re-optimized.…”
Section: B. Greedy Channel Selection
confidence: 99%
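The utility definition quoted above can be sketched directly in NumPy: remove one channel (one column of the input matrix), fully re-optimize the least-squares decoder on what remains, and record the resulting increase in the LSE. The data, dimensions, and variable names below are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed for illustration): N samples of K input channels, one target.
N, K = 500, 6
X = rng.standard_normal((N, K))
y = X @ rng.standard_normal(K) + 0.1 * rng.standard_normal(N)

def lse(X, y):
    """Least-squares error after fully (re-)optimizing the decoder."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ w) ** 2)

base = lse(X, y)

# Utility of channel k: increase in LSE when column k is removed and the
# decoder is re-optimized on the remaining channels.
utility = np.array([lse(np.delete(X, k, axis=1), y) - base for k in range(K)])

# The least useful channel under this metric has the smallest utility.
worst = int(np.argmin(utility))
```

Because the reduced model is nested inside the full one, each utility value is nonnegative: removing a column can never decrease the optimal LSE.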
“…To select channels we used the utility metric (Bertrand, 2018), which quantifies the effective loss, i.e., the increase in the LS cost, if a group of columns (corresponding to one channel, or a set of channels, and all their τ − 1 corresponding time-shifted versions) would be removed and the model (1) re-optimized afterwards:…”
Section: Channel Selection
confidence: 99%
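The grouped setting described above, where each channel together with its time-shifted versions forms one group of columns, can be sketched as follows: build a lagged design matrix, then compute the naive group utility by deleting a group's columns and refitting. The lag construction (circular shifts), dimensions, and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy setting: C channels, each represented by tau time-lagged copies,
# so channel c owns the tau consecutive columns tau*c .. tau*(c+1)-1.
N, C, tau = 600, 4, 3
S = rng.standard_normal((N, C))                     # raw channel signals
lags = [np.roll(S, l, axis=0) for l in range(tau)]  # channel + tau-1 shifts
X = np.concatenate(
    [np.stack([L[:, c] for L in lags], axis=1) for c in range(C)], axis=1
)                                                   # shape N x (C * tau)
y = X @ rng.standard_normal(C * tau) + 0.1 * rng.standard_normal(N)

def lse(X, y):
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ w) ** 2)

base = lse(X, y)

# Group utility U_g: increase in the LS cost when all columns of channel g
# (the channel and its time-shifted versions) are removed and the model refit.
def group_utility(g):
    cols = np.arange(g * tau, (g + 1) * tau)
    return lse(np.delete(X, cols, axis=1), y) - base

U = np.array([group_utility(g) for g in range(C)])
```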
“…Note that a naive implementation of computing U_g would require solving one least-squares (LS) problem like (1) for each possible removal of a candidate group, which would lead to a large computational cost for problems with large dimensions and/or a large number of groups. Fortunately, this can be circumvented, as shown by Bertrand (2018), with a final computational complexity that scales linearly in the number of groups, given the solution of (1) when none of the channels are removed. The basic workflow for finding the best k groups of EEG channels can be summarized as follows (Narayanan and Bertrand, 2019): we compute the utility metric for each of the groups and remove the group with the lowest utility.…”
Section: Channel Selection
confidence: 99%
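A minimal sketch of the cheap computation alluded to above, under our reading of Bertrand (2018): given the full LS solution w = R⁻¹r with R = XᵀX and r = Xᵀy, the group utility admits the closed form U_g = w_gᵀ (R⁻¹_gg)⁻¹ w_g, where R⁻¹_gg is the diagonal block of R⁻¹ belonging to group g. After one matrix inverse up front, each group costs only a small p × p solve, hence linear scaling in the number of groups. The code checks this against the naive removal-and-refit definition; data and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy data: M columns split into G groups of p columns each.
N, G, p = 800, 5, 3
M = G * p
X = rng.standard_normal((N, M))
y = X @ rng.standard_normal(M) + 0.1 * rng.standard_normal(N)

# Solve the full LS problem (1) once.
R = X.T @ X
r = X.T @ y
w = np.linalg.solve(R, r)
Rinv = np.linalg.inv(R)

def utility_fast(g):
    """U_g = w_g^T (Rinv_gg)^{-1} w_g: one small p x p solve per group."""
    idx = np.arange(g * p, (g + 1) * p)
    wg = w[idx]
    return wg @ np.linalg.solve(Rinv[np.ix_(idx, idx)], wg)

# Naive reference: delete the group's columns, refit, measure the LSE increase.
base = np.sum((y - X @ w) ** 2)

def utility_naive(g):
    idx = np.arange(g * p, (g + 1) * p)
    Xr = np.delete(X, idx, axis=1)
    wr, *_ = np.linalg.lstsq(Xr, y, rcond=None)
    return np.sum((y - Xr @ wr) ** 2) - base

fast = np.array([utility_fast(g) for g in range(G)])
naive = np.array([utility_naive(g) for g in range(G)])
```

With utilities this cheap, the greedy workflow quoted above reduces to a loop: compute all group utilities, drop the group with the smallest one, re-solve the reduced problem, and repeat until k groups remain.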