2018
DOI: 10.1007/978-3-030-02925-8_9
A Framework for Processing Cumulative Frequency Queries over Medical Data Streams

Cited by 5 publications (2 citation statements)
References 14 publications
“…We chose the methods Random [8], SMOTE [7], BSmote [9], SSmote [10], ADASYN [11], MWMOTE [35], Cluster-Smote [38], A-SUWO [40], Kmeans-Smote [12], Gaussian-Smote [37] and Trim-Smote [41] as reference methods for comparison. We used the same numbers of nearest minority neighbors (k = 5, 10, 15) for all methods except FCM-Smote, Kmeans-Smote, Cluster-Smote and RNNFCM-SMOTE, for which we varied both the number of nearest minority neighbors (k) and the number of clusters (knn) over (5,5), (5,10), (10,5), (10,10). To evaluate the performance of the baseline over-sampling methods, we used the KNN [13], FKNN [15], DTREE [14], FTREE [16], FKNCN [42], BM-FKNN [42], BM-FKNCN [42], FSVM [27] and SVM [43] classifiers.…”
Section: Experimentation (mentioning confidence: 99%)
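The oversampling baselines compared in the statement above all build on SMOTE's core step: synthesizing a new minority point by interpolating between an existing minority sample and one of its k nearest minority neighbors. A minimal pure-Python sketch of that step (the function name, toy data, and parameters are illustrative, not taken from the cited experiments):

```python
import math
import random

def smote_sample(minority, k=5, n_new=4, seed=0):
    """Generate synthetic minority points by interpolating between a
    randomly chosen minority sample and one of its k nearest minority
    neighbors (the core SMOTE step)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbors of x, by Euclidean distance
        neighbors = sorted((p for p in minority if p is not x),
                           key=lambda p: math.dist(x, p))[:k]
        nb = rng.choice(neighbors)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(xi + t * (ni - xi)
                               for xi, ni in zip(x, nb)))
    return synthetic

# Four minority points in 2-D; request three synthetic ones.
minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.3)]
new_points = smote_sample(minority, k=2, n_new=3)
print(len(new_points))  # 3 synthetic points inside the minority region
```

The variants named in the quote (Borderline-SMOTE, ADASYN, Kmeans-SMOTE, etc.) differ mainly in how they choose which minority samples to interpolate from, not in this interpolation step itself.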
“…Numerous health diagnostic applications employ classification algorithms [1,2] whose classification engines are designed for balanced data sets; however, real-world data collections are generally imbalanced [3]. Such datasets may contain many examples of a majority class and only a small percentage belonging to a minority class, while the primary objective of health discovery systems is to detect precisely these rare instances, which are commonly regarded as the interesting or abnormal cases [4,5], e.g., for forecasting breast cancer and diabetes. In these circumstances, health diagnostic systems may fail to accurately identify the samples of interest because they are skewed towards the majority category of samples, e.g.…”
Section: Introduction (mentioning confidence: 99%)
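The skew described in that statement is why plain accuracy misleads in diagnostic settings: a degenerate classifier that always predicts the majority (healthy) class scores well on accuracy while missing every rare case. A self-contained toy illustration (the 95:5 split is invented for the example):

```python
# 0 = majority/healthy, 1 = rare/abnormal case of interest
labels = [0] * 95 + [1] * 5   # 95:5 imbalanced toy dataset
preds = [0] * 100             # baseline that always predicts the majority

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
recall = tp / sum(labels)     # sensitivity on the rare class

print(accuracy)  # 0.95 -- looks strong
print(recall)    # 0.0  -- yet every case of interest is missed
```

This is the motivation for both the oversampling methods compared above and for reporting class-sensitive metrics such as recall, F1, or G-mean rather than accuracy alone.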