2016
DOI: 10.1016/j.compag.2016.07.020
Identifying rice grains using image analysis and sparse-representation-based classification

Cited by 80 publications (48 citation statements)
References 25 publications
“…Alternatively, the Gram-Schmidt orthogonalization can be used to obtain the common vector of the samples. Assume we are given the Red, Green, Blue, SWIR and VNIR channels, each of size h × w, converted into vector form as (v1, v2, v3, v4, v5), respectively. First, the mean of the vectors is obtained and removed from each vector.…”
Section: Common Vector Approach-based Fusion (mentioning; confidence: 99%)
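The procedure quoted above (remove the mean from the channel vectors, then orthogonalize via Gram-Schmidt) can be illustrated with a minimal sketch. The function name and the random toy channels below are hypothetical, assuming only NumPy:

```python
import numpy as np

def gram_schmidt_basis(vectors):
    """Orthonormalize a list of 1-D vectors via classical Gram-Schmidt,
    skipping directions that are (numerically) linearly dependent."""
    basis = []
    for v in vectors:
        # Subtract projections onto the basis built so far
        w = v - sum(np.dot(v, b) * b for b in basis)
        norm = np.linalg.norm(w)
        if norm > 1e-10:
            basis.append(w / norm)
    return basis

# Toy example: five flattened channels (v1..v5) of a 4x4 image
rng = np.random.default_rng(0)
channels = [rng.random(16) for _ in range(5)]
mean = np.mean(channels, axis=0)
centered = [v - mean for v in channels]   # remove the mean vector first
basis = gram_schmidt_basis(centered)
```

Note that after mean removal the five centered vectors sum to zero, so at most four independent orthonormal directions remain.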
See 1 more Smart Citation
“…Alternatively, the Gram-Schmidt orthogonalization can ground on to obtain the common of samples. Assuming that we have given Red, Green, Blue, SWIR and VNIR channels in the form of h × w and converted into the vector format as(v 1 , v 2 , v 3 , v 4 , v 5 ), respectively. First, the mean of vectors, , has obtained and removed from obtained vectors.…”
Section: Common Vector Approach-based Fusionmentioning
confidence: 99%
“…In Kuo et al., 30 different groups of rice grains were investigated using sparse-representation-based classification (SRC), with 89.10% accuracy. In Li et al., three groups of peanuts (single peanuts, double peanuts, and triple peanuts) were handled using three different promising feature extraction methods: the convolutional neural network (CNN), the histogram of oriented gradients (HOG), and Hu invariant moments.…”
Section: Introduction (mentioning; confidence: 99%)
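The sparse-representation-based classification (SRC) mentioned above represents a test sample as a sparse combination of training samples and assigns the class whose training columns give the smallest reconstruction residual. A minimal sketch, substituting a greedy orthogonal matching pursuit for the ℓ1 minimization used in full SRC; all names and the toy dictionary are hypothetical:

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: k-sparse approximation of y ~ A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

def src_classify(A, labels, y, k=3):
    """Assign y to the class whose training columns best reconstruct it."""
    x = omp(A, y, k)
    best, best_res = None, np.inf
    for c in set(labels):
        xc = np.where(np.array(labels) == c, x, 0.0)  # keep only class-c coefficients
        res = np.linalg.norm(y - A @ xc)
        if res < best_res:
            best, best_res = c, res
    return best

# Toy dictionary: two classes, three unit-normalized training columns each
rng = np.random.default_rng(1)
A = rng.normal(size=(10, 6))
A /= np.linalg.norm(A, axis=0)
labels = [0, 0, 0, 1, 1, 1]
y = A[:, 4] + 0.01 * rng.normal(size=10)  # noisy copy of a class-1 column
pred = src_classify(A, labels, y)         # class of the noisy column
```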
“…Four groups were used as training data for developing the model, and the remaining group was retained as validation data for testing the classifier. The process was repeated five times, with each of the groups used once as the validation data (Kuo, Chung, Chen, Lin, & Kuo, ).…”
Section: Results (mentioning; confidence: 99%)
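The five-fold protocol described in this quote (each group held out once as validation) can be sketched as follows; `five_fold_indices` is a hypothetical helper, assuming NumPy:

```python
import numpy as np

def five_fold_indices(n_samples, n_folds=5, seed=0):
    """Split sample indices into n_folds groups; each group serves
    once as the validation set, the rest as training data."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_samples)
    folds = np.array_split(order, n_folds)
    for i in range(n_folds):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != i])
        yield train, val

# Example: 20 samples -> 5 rounds of 16 training / 4 validation indices
splits = list(five_fold_indices(20))
```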
“…The seed colors were quantified by measuring the distribution of RGB (red, green, and blue) colors in rice grains. Kuo et al. (2016) discriminated rice grains by quantifying morphological parameters, texture and color, with an accuracy of 89.1%. Adnan et al. (2015) stated that the roundness parameter had a considerably larger coefficient than the other parameters and therefore influenced the model used to classify different rice varieties.…”
Section: Rice Seed Identification (mentioning; confidence: 99%)
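Quantifying an RGB color distribution as described above can be sketched as per-channel means plus normalized per-channel histograms; `rgb_color_features` and the toy image are hypothetical, assuming NumPy:

```python
import numpy as np

def rgb_color_features(image, bins=8):
    """Quantify the RGB color distribution of a grain image.

    `image` is an (h, w, 3) uint8 array; returns the per-channel
    mean intensities and a normalized histogram per channel."""
    means = image.reshape(-1, 3).mean(axis=0)
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    hists = [h / h.sum() for h in hists]
    return means, hists

# Toy 4x4 "grain" image with uniform mid-gray pixels
img = np.full((4, 4, 3), 128, dtype=np.uint8)
means, hists = rgb_color_features(img)
```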
“…Nurcahyani and Saptono (2015) used a smartphone to identify husked rice quality with 96.67% accuracy. Kuo et al. (2016) modified cameras and microscopes to identify rice quality, achieving 89.1% accuracy. Chaugule and Mali (2016) used a special camera to classify rice seeds based on seed angle.…”
Section: Introduction (mentioning; confidence: 99%)