Self-Supervised Learning for data scarcity in a fatigue damage prognostic problem (2023)
DOI: 10.1016/j.engappai.2023.105837

Cited by 18 publications (5 citation statements); references 64 publications.
“…The models are then able to produce state-of-the-art results on various downstream tasks by simply training a single layer on top of the pre-trained network for the specific task. Attempts at usage of the paradigm for different time series condition-based tasks are as reported in References 36,46,47. In Reference 36, the pretext task aims to reconstruct data upon masking of some portions using an auto-encoder, before eventual usage for remaining useful life prediction of a machine tool. In References 47,48, contrastive approaches are used for the pretext task whereby similar data samples are grouped closer together whereas diverse ones further apart with the aid of a similarity metric for distance measurement.…”
Section: Background and Literature Review
Citation type: mentioning (confidence: 99%)
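The masked-reconstruction pretext task described in the statement above (Reference 36) can be illustrated with a minimal data-pipeline sketch. This is an assumed, simplified setup, not code from the cited papers: `mask_window` and `reconstruction_loss` are illustrative names, and the actual work uses an auto-encoder network rather than the identity shown in the usage note.

```python
import random

def mask_window(window, mask_frac=0.25, mask_value=0.0, seed=0):
    """Build a (masked_input, target, mask) triple for a masked-reconstruction
    pretext task: the model sees `masked_input` and is trained to reconstruct
    the original values at the masked positions."""
    rng = random.Random(seed)
    n = len(window)
    n_masked = max(1, int(mask_frac * n))
    idx = set(rng.sample(range(n), n_masked))
    masked_input = [mask_value if i in idx else x for i, x in enumerate(window)]
    mask = [i in idx for i in range(n)]
    return masked_input, window, mask

def reconstruction_loss(pred, target, mask):
    """Mean squared error computed only over the masked positions."""
    errs = [(p - t) ** 2 for p, t, m in zip(pred, target, mask) if m]
    return sum(errs) / len(errs)
```

During pre-training, the encoder-decoder would map `masked_input` back to `target` and be penalised by `reconstruction_loss`; the pre-trained encoder is then reused for the downstream remaining-useful-life task.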
“…The models are then able to produce state-of-the-art results on various downstream tasks by simply training a single layer on top of the pre-trained network for the specific task. Attempts at usage of the paradigm for different time series condition-based tasks are as reported in References 36,46,47. In Reference 36, the pretext task aims to reconstruct data upon masking of some portions using an auto-encoder, before eventual usage for remaining useful life prediction of a machine tool.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
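The contrastive pretext approach mentioned in the statements above (References 47,48) — grouping similar samples closer and pushing diverse ones apart under a similarity metric — can be sketched as a hinge-style objective. This is a generic illustration under assumed names (`contrastive_loss`, `margin`), not the specific loss used in the cited works.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_loss(anchor, positive, negative, margin=0.5):
    """Pull the positive (similar) sample toward the anchor and push the
    negative (dissimilar) one until it is at least `margin` less similar."""
    sim_pos = cosine_similarity(anchor, positive)
    sim_neg = cosine_similarity(anchor, negative)
    return max(0.0, margin - (sim_pos - sim_neg))
```

The loss is zero once the positive pair is sufficiently more similar than the negative pair, so gradient pressure concentrates on pairs the embedding has not yet separated.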
“…The crucial rationale for these feasible models includes their significant classification potential in circumstances involving therapeutic diagnosis, maintenance, and production line prognostics. As these two processes formulate a prominent activity in medicine and engineering, the adoption of ML and DL models could contribute to numerous advantages and productivity records [2][3][4][5][6].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…Deep learning can reach an accurate data representation. However, it is not applicable in the few-shot learning case [26]. Meta-learning is proposed to improve the performance of deep learning with fewer training samples.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
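The meta-learning idea raised in the statement above — performing well from few training samples — is often realised with prototype-based classification. A minimal sketch, assuming a prototypical-network-style scheme (the `support` labels and embeddings below are invented for illustration, not from the cited paper [26]):

```python
def prototypes(support):
    """Mean embedding per class from a small labelled support set.
    `support` maps a class label to a list of embedding vectors."""
    protos = {}
    for label, vecs in support.items():
        dim = len(vecs[0])
        protos[label] = [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]
    return protos

def classify(query, protos):
    """Assign a query embedding to the nearest class prototype
    by squared Euclidean distance."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda label: sqdist(query, protos[label]))
```

With only a handful of labelled samples per class, the prototype average acts as the entire "training" step for a new task, which is what makes the approach attractive in few-shot regimes.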