2017
DOI: 10.1007/978-3-319-71246-8_30
Cost Sensitive Time-Series Classification

Cited by 12 publications (15 citation statements)
References 19 publications
“…As in LTSfAUC, existing shapelet methods such as Refs. 1,3 that do not optimize the pAUC cannot discover shapelets that discriminate minor negative instances. As a result, their pAUC also degrades.…”
Section: LTSpAUC (mentioning)
confidence: 99%
“…[2] In recent years, several learning-based shapelet methods have been proposed [1, 3]. These learning-based methods use stochastic gradient descent (SGD) algorithms to learn both shapelets and classifiers. Thus, learned shapelets are not restricted to being subseries of training time-series instances.…”
Section: Introduction (mentioning)
confidence: 99%
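The statement above describes learning-based shapelet methods: instead of searching over subseries of the training data, both the shapelet and the classifier are optimized by gradient descent on the shapelet-distance feature. The following is a minimal didactic sketch of that idea on invented toy data; it uses numerical gradients as a stand-in for the analytic SGD updates of the cited methods, and all names and parameters are assumptions, not code from any of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def min_dist(series, shapelet):
    # Standard shapelet feature: minimum mean squared distance between
    # the shapelet and any equal-length window of the series.
    windows = np.lib.stride_tricks.sliding_window_view(series, len(shapelet))
    return np.min(np.mean((windows - shapelet) ** 2, axis=1))

def loss(X, y, shapelet, w, b):
    # Logistic loss of a linear classifier on the shapelet-distance feature.
    d = np.array([min_dist(x, shapelet) for x in X])
    p = 1.0 / (1.0 + np.exp(-(w * d + b)))
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Toy data: class 1 carries a short sine bump, class 0 is pure noise.
n, T, L = 40, 50, 10
X = rng.normal(0.0, 0.3, size=(n, T))
y = np.tile([0, 1], n // 2)
for i in np.where(y == 1)[0]:
    s = rng.integers(0, T - L)
    X[i, s:s + L] += np.sin(np.linspace(0, np.pi, L))

# Jointly update the shapelet and the classifier by gradient descent.
# Numerical gradients stand in for the analytic SGD updates; note the
# learned shapelet is not restricted to being a subseries of the data.
shapelet = rng.normal(0.0, 0.3, size=L)
w, b, lr, h = -1.0, 0.0, 0.2, 1e-4
loss0 = loss(X, y, shapelet, w, b)
for _ in range(100):
    base = loss(X, y, shapelet, w, b)
    g = np.array([(loss(X, y, shapelet + h * np.eye(L)[j], w, b) - base) / h
                  for j in range(L)])
    g_w = (loss(X, y, shapelet, w + h, b) - base) / h
    g_b = (loss(X, y, shapelet, w, b + h) - base) / h
    shapelet, w, b = shapelet - lr * g, w - lr * g_w, b - lr * g_b

d = np.array([min_dist(x, shapelet) for x in X])
acc = np.mean(((w * d + b) > 0).astype(int) == y)
print("training accuracy:", acc)
```

In a real implementation the minimum is typically replaced by a soft minimum so the feature is differentiable, and analytic gradients make the optimization far cheaper than the finite differences used here.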
“…At the data-manipulation level [2, 10, 13-15], time-series datasets are rebalanced by over-sampling positive samples, under-sampling negative samples, or both. At the algorithmic-modification level [16], classifiers are modified by predefining higher costs or class weights for false positive samples. However, both levels of approaches have problems that need to be noted.…”
Section: Introduction (mentioning)
confidence: 99%
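The data-manipulation level described above can be illustrated with random over-sampling, the simplest rebalancing scheme: minority-class series are duplicated until class counts match. A minimal sketch on invented toy data (the function name and the 9:1 imbalance are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def oversample(X, y, minority=1):
    # Random over-sampling: duplicate minority-class series (chosen with
    # replacement) until both classes have the same number of instances.
    idx_min = np.where(y == minority)[0]
    idx_maj = np.where(y != minority)[0]
    extra = rng.choice(idx_min, size=len(idx_maj) - len(idx_min), replace=True)
    keep = np.concatenate([idx_maj, idx_min, extra])
    return X[keep], y[keep]

X = rng.normal(size=(100, 30))          # 100 series of length 30
y = np.array([0] * 90 + [1] * 10)       # 9:1 class imbalance
Xb, yb = oversample(X, y)
print(np.bincount(yb))                  # per-class counts after rebalancing
```

Under-sampling is the mirror image (discard majority instances instead of duplicating minority ones); both change only the dataset, leaving the classifier untouched.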
“…Algorithmic-modification approaches require predefining the cost weight or cost matrix, and the exact settings are difficult to find. Besides, most methods at these two levels apply algorithms such as KNN-DTW [17], SVM [13, 15], or shapelets [16]. These classic algorithms require heavy hand-crafted work on data preprocessing or feature engineering, and they are not appropriate for large-volume datasets.…”
Section: Introduction (mentioning)
confidence: 99%
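The predefined cost weights criticized above typically enter the classifier as asymmetric penalties in the loss: misclassifying a positive (false negative) is charged more than misclassifying a negative. A minimal sketch of such a cost-weighted cross-entropy, with invented cost values to show the asymmetry (not the cost settings of any cited method):

```python
import numpy as np

def cost_sensitive_loss(y_true, p_pred, cost_fp=1.0, cost_fn=5.0):
    # Weighted cross-entropy: errors on positives (potential false
    # negatives) are penalised cost_fn times, errors on negatives
    # (potential false positives) cost_fp times.
    eps = 1e-9
    return -np.mean(cost_fn * y_true * np.log(p_pred + eps)
                    + cost_fp * (1 - y_true) * np.log(1 - p_pred + eps))

y = np.array([1, 1, 0, 0])
p = np.array([0.2, 0.9, 0.1, 0.8])   # one badly missed positive (p=0.2)
asym = cost_sensitive_loss(y, p)                  # cost_fn = 5
sym = cost_sensitive_loss(y, p, cost_fn=1.0)      # ordinary cross-entropy
print(asym, sym)
```

The difficulty the quote points at is visible here: the classifier's behavior hinges on the ratio cost_fn / cost_fp, yet that ratio must be fixed before training, and there is rarely a principled way to choose it for a given imbalanced dataset.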