2021
DOI: 10.3390/electronics10151805

Time Series Segmentation Using Neural Networks with Cross-Domain Transfer Learning

Abstract: Searching for characteristic patterns in time series is a topic the research community has addressed for decades. Conventional subsequence matching techniques usually rely on the definition of a target template pattern and a searching method for detecting similar patterns. However, the intrinsic variability of time series introduces changes in patterns, both morphologically and temporally, making such techniques less accurate than desired. Intending to improve segmentation performance, in this paper, we prop…
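As an illustration of the conventional subsequence matching the abstract contrasts with, below is a minimal sketch of template-based pattern search, assuming a fixed-length template, z-normalized windows, and a Euclidean distance threshold; these are illustrative choices, not details taken from the paper.

import numpy as np

def template_match(series, template, threshold):
    # Slide the target template over the series and return the start indices
    # of windows whose z-normalized Euclidean distance falls below the threshold.
    m = len(template)
    t = (template - template.mean()) / (template.std() + 1e-8)
    hits = []
    for start in range(len(series) - m + 1):
        window = series[start:start + m]
        w = (window - window.mean()) / (window.std() + 1e-8)
        if np.linalg.norm(w - t) < threshold:
            hits.append(start)
    return hits

# Toy usage: locate noisy repetitions of one sine cycle in a longer signal.
rng = np.random.default_rng(0)
signal = np.tile(np.sin(np.linspace(0, 2 * np.pi, 50)), 5) + 0.1 * rng.standard_normal(250)
pattern = np.sin(np.linspace(0, 2 * np.pi, 50))
print(template_match(signal, pattern, threshold=3.0))

Because the distance is computed against a single rigid template, morphological or temporal variability in the patterns degrades detection, which is the limitation the abstract points out.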

Cited by 16 publications (12 citation statements)
References 56 publications
“…All basic DL modules were stacked three times, including batch normalization and the ReLU activation function; finally, an output with the same feature size as the input data was obtained through a fully connected layer. In addition, several models presented in [25,26,27,28,29] that reported high accuracy by applying the CEF method to ECG data were adopted for comparison. As mentioned in Section 2, the loss is calculated by comparing the output of all models, which has the same size as the input, with the label along the time axis.…”
Section: Results
confidence: 99%
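The excerpt above describes the comparison models only at a high level: three stacked modules with batch normalization and ReLU, a fully connected output layer matching the input size, and a loss computed per time step against the label. A minimal sketch of such an architecture follows; the use of 1D convolutions, the channel width, kernel size, and number of classes are assumptions for illustration, not specifications from the cited work.

import torch
import torch.nn as nn

class StackedSegmenter(nn.Module):
    # Three stacked blocks (Conv1d + BatchNorm + ReLU) followed by a per-time-step
    # fully connected layer, so the output has the same temporal length as the
    # input and can be compared with the label along the time axis.
    def __init__(self, in_channels=1, hidden=32, n_classes=2):
        super().__init__()
        blocks = []
        c = in_channels
        for _ in range(3):  # "stacked three times" per the cited description
            blocks += [nn.Conv1d(c, hidden, kernel_size=5, padding=2),
                       nn.BatchNorm1d(hidden),
                       nn.ReLU()]
            c = hidden
        self.features = nn.Sequential(*blocks)
        self.head = nn.Linear(hidden, n_classes)  # applied at every time step

    def forward(self, x):
        # x: (batch, channels, time) -> logits: (batch, time, n_classes)
        h = self.features(x)                  # (batch, hidden, time)
        return self.head(h.transpose(1, 2))   # (batch, time, n_classes)

# Per-time-step loss against a label sequence of the same length as the input.
model = StackedSegmenter()
x = torch.randn(8, 1, 500)          # batch of 8 ECG-like windows (hypothetical sizes)
y = torch.randint(0, 2, (8, 500))   # one class label per time step
logits = model(x)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 2), y.reshape(-1))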
“…There is close to no research on applying TL approaches to TSS settings or on analyzing what factors increase or limit the benefits of TL. One of the very few contributions covering the topic showed promising results when applying cross-domain TL to boost medical TSS, even though the scope of its analysis of TL is very limited [9]. In general, no authors have ever performed a study focusing on how TL can be used to boost the performance of deep learning TSS models.…”
confidence: 99%
“…While TSS problems might be less prevalent in the everyday life of companies and researchers, they still deserve attention due to their importance for fields like human action recognition [26], sleep staging [5], and operational state detection [6]. One contribution had a first look at the topic [9] and successfully showed that the feature extraction process of a TSS-CNN model can be improved by pretraining the model on a large, domain-independent source dataset. However, neither a broad analysis focusing on TL nor a detailed discussion of the implications of the results was performed.…”
Section: Transfer Learning For Time Series
confidence: 99%
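The cross-domain transfer learning idea mentioned in the excerpt above (pretraining the feature extractor on a large, domain-independent source dataset and then fine-tuning on the target segmentation task) can be sketched as follows. The two-stage routine, the optimizer settings, and the option to freeze the pretrained features are illustrative assumptions, and the sketch presumes a model exposing a features/head split like the one shown earlier in this section.

import torch
import torch.nn as nn

def pretrain_then_finetune(model, source_loader, target_loader, freeze_features=True):
    # Stage 1: pretrain on a large, domain-independent source dataset.
    # Stage 2: fine-tune on the (typically smaller) target segmentation dataset,
    # optionally keeping the pretrained feature extractor fixed.
    criterion = nn.CrossEntropyLoss()

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for x, y in source_loader:                 # x: (batch, channels, time), y: (batch, time)
        opt.zero_grad()
        logits = model(x)                      # (batch, time, n_classes)
        criterion(logits.reshape(-1, logits.size(-1)), y.reshape(-1)).backward()
        opt.step()

    if freeze_features:
        for p in model.features.parameters():  # assumes a `features` submodule as above
            p.requires_grad = False
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-4)
    for x, y in target_loader:
        opt.zero_grad()
        logits = model(x)
        criterion(logits.reshape(-1, logits.size(-1)), y.reshape(-1)).backward()
        opt.step()
    return model

Whether freezing the pretrained features helps depends on how far the target domain is from the source, which is exactly the kind of factor the citing works note has not been analyzed systematically for TSS.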
“…Emerging works focus specifically on biosignal segmentation, e.g., applying neural networks (NNs) to ECG signals. In [61], an NN with transfer learning was used for the segmentation of periodic biosignals (motion and ECG). Convolutional NNs have also been used for ECG segmentation.…”
Section: Related Literature
confidence: 99%