2019
DOI: 10.1016/j.measurement.2018.10.001

Fault diagnosis of self-aligning troughing rollers in belt conveyor system using k-star algorithm

Cited by 42 publications (17 citation statements)
References 21 publications
“…Initially, 10 MLs were scanned using all protein descriptors as explanatory variables, while the response variable was set to the presence or absence of ACs. The screened learners were: K-Star [27], Locally weighted learning (LWL) [28,29], Logistic Model Trees (LMT) [30,31], LogitBoost [32], Support Vector Machine (SVM) [33,34], Naïve Bayes [35], Random Forests [36], Probabilistic Neural Network (PNN) [37,38], Xgboost [39] and k-nearest neighbors (k-NN) [40]. The MLs were implemented as corresponding KNIME 4.3.3 nodes with default parameters.…”
Section: Screening Machine Learners (MLs)
confidence: 99%
“…Classification with K* is made by summing the probabilities from the new instance to all of the members of a category [27]. On the other hand, eXtreme Gradient Boosting (XGBoost, or XGB) is a standard tree-based ensemble method that relies on an ensemble of weak decision tree (DT)-type models to create subsequent boosted DT-type models that reduce the loss function.…”
Section: Machine Learning
confidence: 99%
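
The XGBoost half of this description lends itself to a short illustration. Below is a minimal sketch of a boosted ensemble of weak (shallow) decision trees using the xgboost package; the synthetic dataset and all hyperparameters are illustrative placeholders, not the setup of any cited study.

```python
# Minimal sketch: an ensemble of weak (shallow) decision trees, where each
# boosting round fits a new tree against the current ensemble's loss
# gradient, as the quoted description of XGBoost outlines.
# Dataset and hyperparameters are placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# max_depth=3 keeps each tree weak; 100 boosting rounds stack corrections.
model = XGBClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```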
“…Even different theories can absorb the advantages of other approaches to improve their algorithms and innovate. Ravikumar et al [13] applied the decision tree method to find the significant features for classifying self-aligning troughing roller faults, and the k-star algorithm was then applied for fault diagnosis to improve the fault classification accuracy. Arup et al [14] presented an automated tool, the Manufacturing Process Failure Diagnosis Tool (MPFDT), which can effectively detect and isolate faults and anomalies in Programmable Logic Controller (PLC)-controlled manufacturing systems.…”
Section: Introduction
confidence: 99%
“…(Ravikumar et al., 2019; Joshuva and Sugumaran, 2020; Gao et al., 2019). The KNN algorithm adopts the Euclidean distance as a measure to determine the k-nearest neighbours, while KStar uses an entropic distance measure based on probability (Cleary and Trigg, 1995).…”
confidence: 99%
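
The contrast drawn here can be made concrete. The sketch below pairs KNN's exact Euclidean metric with a deliberately simplified, probability-like similarity standing in for the K* measure; the true K* probability of Cleary and Trigg (1995) is obtained by summing over all transformation paths between instances, which this toy exponential does not reproduce.

```python
# Toy contrast: Euclidean distance (as used by KNN) versus a simplified,
# probability-based similarity in the spirit of the K* entropic measure.
import numpy as np

def euclidean_distance(a, b):
    # Metric KNN uses to rank and pick the k nearest neighbours.
    return float(np.linalg.norm(a - b))

def kstar_similarity(a, b, scale=1.0):
    # Simplified stand-in for the K* probability: identical instances
    # score 1.0 and the score decays toward 0 with distance. The real
    # K* measure sums probabilities over all transformation paths.
    return float(np.exp(-np.linalg.norm(a - b) / scale))

a, b = np.array([0.0, 0.0]), np.array([3.0, 4.0])
print(euclidean_distance(a, b))  # 5.0
print(kstar_similarity(a, b))    # exp(-5) ≈ 0.0067
```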
“…It was demonstrated that using entropy as a distance measure leads to high regression and classification accuracy and can help in handling symbolic and real-valued data (Gao et al., 2019; Joshuva and Sugumaran, 2020). The KStar algorithm proceeds by successive summation of the probabilities from the new instance to all the remaining members of the category, and the final selection should be based upon the highest probability (Ravikumar et al., 2019).…”
confidence: 99%
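
The decision rule quoted above (sum the probabilities from the new instance to every member of each category, then choose the category with the highest total) fits in a few lines. As in the previous sketch, the exponential kernel is an assumed simplification of the true K* transformation probability, and the data are toy values.

```python
# Sketch of the K*-style decision rule described above: per class, sum
# the (simplified) probabilities from the query to every class member
# and predict the class with the highest total.
import numpy as np

def predict_kstar(query, X, y, scale=1.0):
    totals = {}
    for label in np.unique(y):
        members = X[y == label]
        # Simplified stand-in for the K* transformation probability.
        probs = np.exp(-np.linalg.norm(members - query, axis=1) / scale)
        totals[label] = probs.sum()
    return max(totals, key=totals.get)

# Toy data: two well-separated classes.
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
y = np.array([0, 0, 1, 1])
print(predict_kstar(np.array([0.2, 0.1]), X, y))  # expected: 0
```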