2021 IEEE 9th International Conference on Smart Energy Grid Engineering (SEGE) 2021
DOI: 10.1109/sege52446.2021.9534987
Sampling Strategy Analysis of Machine Learning Models for Energy Consumption Prediction

Cited by 31 publications (18 citation statements)
References 10 publications
“…The emotional features involved in this chapter are visual signal features and four physiological signal features. After feature extraction [30–32], the visual signal features and the four physiological signal features are combined in series to form four multimodal features.…”
Section: Methods
confidence: 99%
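The serial combination the excerpt describes is an early-fusion concatenation of feature vectors. A minimal sketch, assuming illustrative feature dimensions (the shapes and variable names below are my own, not from the cited paper):

```python
import numpy as np

# Illustrative feature vectors (dimensions are assumptions, not from the paper)
visual = np.random.rand(64)                       # visual signal features
physio = [np.random.rand(16) for _ in range(4)]   # four physiological signal features

# Serial (early) fusion: concatenate the visual features with each
# physiological feature vector, yielding four multimodal features
multimodal = [np.concatenate([visual, p]) for p in physio]

print(len(multimodal), multimodal[0].shape)
```

Each multimodal vector simply stacks the two modalities end to end, so downstream classifiers see one joint feature space per physiological channel.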
“…This study also compared the proposed bearing fault diagnosis method with some traditional algorithms [28, 29], including support vector machines, BP neural networks, and WPT-convolutional neural networks. Finally, the effectiveness of the proposed diagnosis method was verified through this comparison.…”
Section: Introduction
confidence: 99%
“…As a result, investigating effective unbalanced-data classification techniques [7] is extremely important. The K-nearest neighbor (KNN) method has become one of the best-known algorithms in pattern recognition [8–10] and statistics because it is simple, easy to implement, requires no parameter estimation, and achieves high classification accuracy; it was also one of the earliest nonparametric algorithms applied to automatic text classification in machine learning [11, 12]. However, the KNN method must store all training samples and, for each test sample, compute its nearest neighbors, which requires a large number of similarity calculations; this computational complexity grows significantly with the size of the sample data set [13], reducing classification efficiency.…”
Section: Introduction
confidence: 99%
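The per-query cost the excerpt criticizes is visible in a brute-force KNN sketch: every prediction computes a distance to all stored training samples. A minimal NumPy version (function and variable names are illustrative, not from the cited work):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Brute-force k-nearest-neighbor classification.

    Each query computes a distance to ALL stored training samples,
    which is the O(n) per-query cost the excerpt points out.
    """
    # Euclidean distance from the query to every training sample
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k closest training samples
    nearest = np.argsort(dists)[:k]
    # Majority vote among their labels
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy example: two well-separated clusters
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([5.1, 5.0])))  # → 1
```

Because the full training set must be kept and scanned at prediction time, classification cost scales linearly with the number of stored samples, which is the efficiency problem the cited sampling and prototype-selection approaches aim to reduce.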