2021
DOI: 10.1016/j.crbeha.2021.100044

An in-depth analysis of machine learning approaches to predict depression

Cited by 94 publications (49 citation statements)
References 24 publications
“…Feature selection represents a step in the process in which a subset of variables (features) is selected to most accurately predict the targeted variable. Among the more popular approaches are clustering (e.g., K-means or its variations), particle swarm optimization (PSO), RELIEFF, multivariate processing, and the Boruta algorithm [ 69 ]. The K-best approach is a K-nearest neighbor-based clustering feature selection technique.…”
Section: Results (mentioning)
confidence: 99%
“…The K-best approach is a K-nearest neighbor-based clustering feature selection technique. It is non-parametric and univariate in nature [ 69 ]. It is used to select the K best features from the feature set using a univariate statistical test.…”
Section: Results (mentioning)
confidence: 99%
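The univariate K-best selection that the statement above describes can be sketched with scikit-learn's `SelectKBest`. This is a minimal sketch, not the cited study's pipeline: the synthetic dataset, the ANOVA F-test scoring function, and the choice of k = 5 are all illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic data standing in for clinical features (illustrative only).
X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=5, random_state=0)

# Score each feature with a univariate ANOVA F-test and keep the 5 best.
selector = SelectKBest(score_func=f_classif, k=5)
X_new = selector.fit_transform(X, y)
print(X_new.shape)  # (200, 5)
```

Because the test is univariate, each feature is scored independently of the others; interactions between features are not taken into account.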
“…Feature/label selection: here we assessed the capabilities of two different approaches, namely “SelectKBest” [45, 46] and decision tree-based ensemble learning algorithms [47], to select a set of clinical labels from the pool of original clinical data. The SelectKBest algorithm uses statistical measures to score input features based on their relation to outputs and chooses the most effective features accordingly.…”
Section: Methods and Computational Approach (mentioning)
confidence: 99%
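The second approach the statement mentions, scoring features with a decision tree-based ensemble, can be sketched as follows. The dataset and the choice of `RandomForestClassifier` are illustrative assumptions; the cited work may have used a different ensemble.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for clinical data (illustrative only).
X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=3, random_state=0)

# Fit a random forest and rank features by impurity-based importance.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top3 = np.argsort(clf.feature_importances_)[::-1][:3]
print(top3)  # indices of the 3 most important features
```

Unlike a univariate test, the ensemble's importances reflect how useful each feature was in splits across many trees, so feature interactions can influence the ranking.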
“…In the current study, an optimal number of required components in the PCA was found by using various numbers of extracted features. The output datasets from PCA were then fed into seven conventional classifiers, including SVM [38], MLP [46], KNN [47], random forest [48], gradient boosting [49], Gaussian naïve Bayes [50], and XGBoost [51], to assess which number of feature sets could lead to an optimal performance. This was accordingly found to be associated with a set of 25 components.…”
Section: Dimension Reduction (mentioning)
confidence: 99%
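The component-count search described above can be sketched as a small loop over PCA sizes feeding one of the listed classifiers. This is a sketch under stated assumptions, not the study's actual pipeline: the breast-cancer dataset, the SVM classifier, the candidate component counts, and 3-fold cross-validation are all illustrative stand-ins.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset

scores = {}
for n in (5, 10, 20):
    # Scale, project onto n principal components, then classify with SVM.
    pipe = make_pipeline(StandardScaler(), PCA(n_components=n), SVC())
    scores[n] = cross_val_score(pipe, X, y, cv=3).mean()
print(scores)  # mean CV accuracy per component count
```

Picking the component count with the best cross-validated score mirrors how the quoted study settled on 25 components, with the comparison extended over seven classifiers rather than one.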
“…This is achieved through different feature selection procedures which are compared during model training to select a method which can produce the most optimal features. The feature selection methods which have been used in healthcare prediction [6] include sequential feature selection [3], optimizer [7], and SelectKBest [8]. Expert judgement can also be used by experienced personnel to select features, but the automated methods have proved to give better performance [4].…”
Section: Feature Selection and ML Algorithms (mentioning)
confidence: 99%
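Sequential feature selection, one of the methods listed above, can be sketched with scikit-learn's `SequentialFeatureSelector`. A minimal sketch: the synthetic dataset, the KNN estimator, and the target of 4 features are illustrative assumptions, not choices from the cited papers.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=150, n_features=8,
                           n_informative=4, random_state=0)

# Greedily add one feature at a time, keeping whichever addition best
# cross-validates a KNN classifier, until 4 features are selected.
sfs = SequentialFeatureSelector(
    KNeighborsClassifier(), n_features_to_select=4, direction="forward"
)
sfs.fit(X, y)
mask = sfs.get_support()  # boolean mask over the 8 original features
print(mask.sum())  # 4
```

Forward selection like this evaluates feature subsets with the actual model (a wrapper method), which is typically slower but often more accurate than univariate filters such as SelectKBest.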