2022
DOI: 10.48550/arxiv.2202.13486
Preprint

Architectural Optimization and Feature Learning for High-Dimensional Time Series Datasets

Abstract: As our ability to sense increases, we are experiencing a transition from data-poor problems, in which the central issue is a lack of relevant data, to data-rich problems, in which the central issue is to identify a few relevant features in a sea of observations. Motivated by applications in gravitational-wave astrophysics, we study a problem in which the goal is to predict the presence of transient noise artifacts in a gravitational wave detector from a rich collection of measurements from the detector and its…


Cited by 3 publications (18 citation statements)
References 43 publications
“…Notably, the models discussed here are explicitly tuned to learn to ignore input channels that are not useful for making their predictions (the vast majority of channels, in this setting). Such a practice has previously been demonstrated (e.g., in [42,43]) to improve both interpretability and accuracy. These methods achieve excellent performance while ignoring all but a few dozen to a few hundred channels, which can then be further investigated manually by detector domain experts.…”
Section: Feature Learning For Transient Anomalies (mentioning)
confidence: 99%
“…Machine learning models have been demonstrated in recent works [42,43] to be able to learn to predict the presence or absence of glitches in high-dimensional gravitational-wave astronomy data by considering only auxiliary channel information, without looking at the gravitational-wave strain data stream in which the glitches themselves appear. In this section, we describe one such model, initially proposed in [43], which we refer to as LF. The model is highly interpretable and well-suited to efficient strain-independent glitch detection, offering several useful features, including:…”
Section: Feature Learning For Transient Anomalies (mentioning)
confidence: 99%
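The citation statements above describe classifiers that learn to ignore uninformative auxiliary channels while predicting glitch presence. The sketch below is a minimal, hypothetical illustration of that general idea (learnable per-channel gates with an L1 sparsity penalty); it is not a reconstruction of the cited LF model, and all class names, shapes, and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch: a glitch-presence classifier with learnable per-channel
# gates. An L1 penalty pushes most gates toward zero, so the model learns to
# ignore channels that do not help its predictions. Not the paper's LF model.
import torch
import torch.nn as nn

class ChannelGatedClassifier(nn.Module):
    def __init__(self, n_channels: int, n_features: int, hidden: int = 64):
        super().__init__()
        # One gate per auxiliary channel; channels whose gates shrink toward
        # zero are effectively dropped from the prediction.
        self.gates = nn.Parameter(torch.ones(n_channels))
        self.net = nn.Sequential(
            nn.Linear(n_channels * n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, n_features) -- per-channel summary features.
        gated = x * self.gates.view(1, -1, 1)
        return self.net(gated.flatten(start_dim=1)).squeeze(-1)

def loss_fn(model, x, y, l1_weight: float = 1e-3):
    # Binary cross-entropy on glitch presence plus an L1 penalty that
    # encourages sparsity over the channel gates.
    logits = model(x)
    bce = nn.functional.binary_cross_entropy_with_logits(logits, y)
    return bce + l1_weight * model.gates.abs().sum()

if __name__ == "__main__":
    model = ChannelGatedClassifier(n_channels=1000, n_features=8)
    x = torch.randn(32, 1000, 8)          # toy batch of auxiliary-channel features
    y = torch.randint(0, 2, (32,)).float()  # toy glitch/no-glitch labels
    loss = loss_fn(model, x, y)
    loss.backward()
    print(loss.item())
```

After training such a model, the channels whose gate magnitudes remain above a threshold are the "few dozen to few hundred" candidates that could then be handed to detector domain experts for manual review, in the spirit of the citation statements quoted above.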