2020
DOI: 10.1255/jsi.2020.a19

Deep learning classifiers for near infrared spectral imaging: a tutorial

Abstract: Deep learning (DL) has recently achieved considerable successes in a wide range of applications, such as speech recognition, machine translation and visual recognition. This tutorial provides guidelines and useful strategies to apply DL techniques to address pixel-wise classification of spectral images. A one-dimensional convolutional neural network (1-D CNN) is used to extract features from the spectral domain, which are subsequently used for classification. In contrast to conventional classification methods…
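The pixel-wise setup described in the abstract treats every pixel spectrum as an independent one-dimensional sample. Below is a minimal data-handling sketch under that assumption, for a cube of shape H × W × B; the classify_spectrum_batch placeholder stands in for any trained per-spectrum classifier and is not the tutorial's actual model.

```python
import numpy as np

# Assumed shapes: a spectral image cube with H x W pixels and B spectral bands.
H, W, B = 128, 128, 200
cube = np.random.rand(H, W, B).astype(np.float32)  # placeholder data

# Pixel-wise classification treats each pixel spectrum as an independent sample,
# so the cube is flattened to a (H*W, B) matrix of 1-D spectra.
spectra = cube.reshape(-1, B)

def classify_spectrum_batch(x):
    """Hypothetical placeholder for any trained per-spectrum classifier
    (for example a 1-D CNN); returns one integer class label per spectrum."""
    n_classes = 4  # assumed number of classes
    return np.argmax(np.random.rand(x.shape[0], n_classes), axis=1)

labels = classify_spectrum_batch(spectra)

# Predicted labels are folded back into the image geometry as a class map.
class_map = labels.reshape(H, W)
```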

Cited by 6 publications (6 citation statements)
References: 36 publications
“…The classification network structure used in the E2E model was inspired by the work presented in [22] for NIR images. Specifically, the network consists of three convolutional layers with batch normalization and a rectified linear unit (ReLU) activation function, followed by a dropout layer to avoid overfitting and, finally, two fully connected layers, the last of which uses a softmax activation.…”
Section: Simulations and Results
Mentioning confidence: 99%
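The architecture described in this statement (three 1-D convolutional blocks with batch normalization and ReLU, a dropout layer, and two fully connected layers ending in a softmax) can be sketched as follows. This is an illustrative reading of the quoted description, not the cited authors' exact implementation; the channel counts, kernel size and dropout rate are assumptions.

```python
import torch
import torch.nn as nn

class SpectralCNN(nn.Module):
    """Sketch of the 1-D CNN structure described in the citation statement:
    three convolutional blocks (conv + batch norm + ReLU), dropout, then two
    fully connected layers with a softmax output. Hyperparameters are assumed."""

    def __init__(self, n_bands: int, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.BatchNorm1d(16), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Dropout(p=0.5),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * n_bands, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
            nn.Softmax(dim=1),
        )

    def forward(self, x):
        # x has shape (batch, 1, n_bands): one spectrum per sample, as a 1-D signal
        return self.classifier(self.features(x))

# Example: 200-band spectra, 4 classes
model = SpectralCNN(n_bands=200, n_classes=4)
probs = model(torch.randn(8, 1, 200))  # (8, 4) class probabilities
```

In practice the softmax is often folded into the loss (for example, nn.CrossEntropyLoss expects raw logits), but it is kept explicit here to mirror the quoted description.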
“…The performance of the proposed approach is demonstrated on two hyperspectral datasets in the NIR region: Cereals [21], in the spectral range of 943-1643 nm, and Yatsuhashi, which covers 1293-2215 nm. The results show average classification accuracy above 90% for different sensing ratios, outperforming coding strategies based on the visible range [15], [22], particularly as the sensing ratio decreases, where the difference reaches up to 10% in accuracy.…”
Section: Introduction
Mentioning confidence: 92%
“…It is assumed that the preprocessing step allowed the reduction of spectral noise, permitting more accurate edge segmentation. 68 The calculated accuracy metrics revealed that the averaged performance of SAM and CNN is on the same level, with the highest F-score in the 1D-CNN SNV model and the highest accuracy in SAM (Table 1 and Table S1†). It is also noteworthy that normalisation of the spectra could either decrease the performance of the CNN, as shown for the model built with MSC-corrected data, or increase classification accuracy, as in the case of the SNV-corrected dataset.…”
Section: Dataset Normalisation
Mentioning confidence: 92%
“…31,69 Various works using CNNs for hyperspectral data analysis have addressed the problem of extracting and preprocessing reflectance from hyperspectral images. 68,70,71 Typically, SNV correction remains an optimal method for preserving the general trend of the spectra while reducing unwanted noise, which enhances the classification performance of neural networks. At the same time, modifications and combinations of traditional preprocessing steps, depending on the material under investigation and the spectroscopy conditions, may be used to improve the performance of different models.…”
Section: Dataset Normalisation
Mentioning confidence: 99%
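Both statements above compare spectra preprocessed with SNV and MSC before classification. A minimal numpy sketch of the SNV step, which the second quote singles out, is given below; the array shapes are illustrative assumptions.

```python
import numpy as np

def snv(spectra: np.ndarray) -> np.ndarray:
    """Standard normal variate (SNV) correction: each spectrum is centred on its
    own mean and scaled by its own standard deviation, which suppresses
    multiplicative scatter effects while preserving the spectral shape."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Hypothetical usage on a matrix of pixel spectra (n_pixels x n_bands)
raw = np.random.rand(1000, 200)
corrected = snv(raw)
```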
“…This indicated under- and over-fitting and emphasized the importance of extracting features for LDA with our dataset. We also determined cross-validation accuracy by using different preprocessing combinations based on first- and second-derivative Savitzky-Golay filtering and SNV (Xu et al., 2020). The results are shown in Fig.…”
Section: Technical Details
Mentioning confidence: 99%
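The preprocessing combinations mentioned in this statement (first- and second-derivative Savitzky-Golay filtering, optionally followed by SNV) can be sketched with scipy; the window length, polynomial order and derivative settings below are assumed values, not those reported by the citing study.

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess(spectra, deriv=1, window=11, polyorder=2, apply_snv=True):
    """Sketch of one preprocessing combination: Savitzky-Golay derivative
    filtering along the spectral axis, optionally followed by SNV scaling.
    The filter settings here are assumptions for illustration only."""
    out = savgol_filter(spectra, window_length=window, polyorder=polyorder,
                        deriv=deriv, axis=1)
    if apply_snv:
        out = (out - out.mean(axis=1, keepdims=True)) / out.std(axis=1, keepdims=True)
    return out

# Candidate combinations could then be compared by cross-validation accuracy,
# e.g. first derivative + SNV versus second derivative + SNV versus SNV alone.
X = np.random.rand(500, 200)
X_d1_snv = preprocess(X, deriv=1)
X_d2_snv = preprocess(X, deriv=2)
```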