2022
DOI: 10.1039/d2ra00239f

Machine learning based on structural and FTIR spectroscopic datasets for seed autoclassification

Abstract: In machine learning, traditional single datasets often cause confusion between species; mixing features to form a combined dataset addresses this challenge.

Cited by 7 publications (3 citation statements). References 19 publications.
“…The fixed learning rate of 0.16 used in the BiLSTM algorithm is relatively high compared with typical values, which are usually around 0.1 or lower [Zlotogorski-Hurvitz et al. 2019, Kiser et al. 2023]. The ideal learning rate can vary, however, depending on the dataset and the network architecture, as in the work of [Wang et al. 2022], which used a value of 0.3 with good results.…”
Section: Discussion (unclassified)
“…The exhaustive search required by wrapper-based methods to prevent overfitting also makes them computationally expensive. Wrapper-based algorithms used in NIR include particle swarm optimization (PSO) and binary particle swarm optimization (BPSO) [28], genetic algorithms (GA) [29-33], variable combination population analysis (VCPA) [32,34], the variable iterative space shrinkage approach (VISSA) [35], bootstrapping soft shrinkage (BOSS) [36], iteratively retaining informative variables (IRIV) [32], competitive adaptive reweighted sampling (CARS) [37-39], the successive projection algorithm (SPA) [40], uninformative variable elimination (UVE) [41], Monte Carlo uninformative variable elimination (MCUVE) [35], partial least squares feature selection approaches [42], the randomization test (RT) [43], variable importance in the projection (VIP) [44], and the jackknife procedure.…”
Section: Feature Selection (mentioning, confidence: 99%)
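The quote above describes wrapper-based selection: candidate variable subsets are scored by actually fitting the model, which is why these methods are expensive. As a minimal sketch of the idea (not any specific algorithm from the cited works), here is a greedy forward wrapper using plain least squares; the data, variable counts, and model are illustrative assumptions.

```python
import numpy as np

def fit_eval(X, y):
    # Fit ordinary least squares and return the training mean squared error.
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return float(np.mean(resid ** 2))

def forward_wrapper_selection(X, y, n_select):
    """Greedy wrapper: repeatedly add the variable whose inclusion
    most reduces the fitted model's error."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_select):
        scores = [(fit_eval(X[:, selected + [j]], y), j) for j in remaining]
        _, best_j = min(scores)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Synthetic example: only columns 2 and 7 carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 2] - 2.0 * X[:, 7] + 0.1 * rng.normal(size=200)
print(sorted(forward_wrapper_selection(X, y, 2)))  # → [2, 7]
```

Each candidate subset costs one model fit, so the search grows quickly with the number of wavelengths, which is the computational drawback the citing authors note.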
“…These steps are iteratively repeated until the stopping criteria, such as the maximum number of iterations or the desired cross-validation accuracy, are reached. CARS was used to determine the quality of oilseed [37], rice [38], and seeds [39]. To determine rice-grain moisture by NIR spectroscopy [38], partial least squares (PLS) and competitive adaptive reweighted sampling (CARS) models were used to model and analyze the spectral data.…”
Section: Competitive Adaptive Reweighted Sampling (CARS) (mentioning, confidence: 99%)
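The iterative loop the citing authors describe (fit, shrink the variable set, check a stopping criterion) can be sketched as follows. This is a deliberate simplification of CARS, assuming ordinary least squares in place of PLS and a fixed retention fraction in place of the published exponential decay schedule; all data and parameters are illustrative.

```python
import numpy as np

def cars_sketch(X, y, n_iter=20, keep_frac=0.8, seed=0):
    """CARS-style loop (simplified): each round fits least squares on a
    Monte Carlo subsample of calibration rows, retains the variables with
    the largest absolute coefficients, and remembers the variable subset
    with the lowest held-out RMSE."""
    rng = np.random.default_rng(seed)
    n = len(y)
    perm = rng.permutation(n)
    tr, te = perm[: int(0.7 * n)], perm[int(0.7 * n):]
    keep = np.arange(X.shape[1])
    best = (np.inf, keep.tolist())
    for _ in range(n_iter):
        # Monte Carlo step: fit on a random 80% of the calibration rows.
        idx = rng.choice(tr, size=int(0.8 * len(tr)), replace=False)
        coef, *_ = np.linalg.lstsq(X[np.ix_(idx, keep)], y[idx], rcond=None)
        # Competitive step: retain only the most influential variables.
        n_keep = max(2, int(len(keep) * keep_frac))
        keep = keep[np.argsort(-np.abs(coef))[:n_keep]]
        # Stopping-criterion bookkeeping: score the subset on held-out rows.
        c, *_ = np.linalg.lstsq(X[np.ix_(tr, keep)], y[tr], rcond=None)
        rmse = float(np.sqrt(np.mean((y[te] - X[np.ix_(te, keep)] @ c) ** 2)))
        best = min(best, (rmse, sorted(keep.tolist())))
    return best

# Synthetic spectra: only columns 3 and 10 carry signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 15))
y = 2.0 * X[:, 3] + X[:, 10] + 0.05 * rng.normal(size=200)
rmse, subset = cars_sketch(X, y)
print(subset)  # the informative columns 3 and 10 should survive the shrinkage
```

The held-out RMSE plays the role of the cross-validation accuracy mentioned in the quote: the loop keeps shrinking the variable set, and the subset that scored best along the way is returned.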