2020 22nd International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)
DOI: 10.1109/synasc51798.2020.00050
Image Semantic Segmentation Based on High-Resolution Networks for Monitoring Agricultural Vegetation

Abstract: In this article, recognition of the state of agricultural vegetation from aerial photographs at various spatial resolutions is considered. The proposed approach is based on semantic segmentation using convolutional neural networks. Two variants of the High-Resolution Network (HRNet) architecture are described and used. These neural networks were trained and applied to aerial images of agricultural fields. In our experiments, the recognition accuracy of four land classes (soil, healthy vegetation, diseased vegetation and oth…
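As a rough illustration of the four-class setup the abstract describes (soil, healthy vegetation, diseased vegetation, other), any semantic-segmentation backbone that emits per-pixel class logits follows the same pattern. The sketch below is an assumption, not the authors' implementation: it uses torchvision's DeepLabV3 only because it is readily available in place of the HRNet variants from the paper, and the tile size is illustrative.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Hypothetical four-class setup from the abstract: soil, healthy vegetation,
# diseased vegetation, other. The paper uses HRNet variants; DeepLabV3 stands
# in here only because it ships with torchvision and exposes the same
# per-pixel-logits interface of shape (B, num_classes, H, W).
NUM_CLASSES = 4
model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES).eval()

image = torch.randn(1, 3, 512, 512)        # one aerial RGB tile (illustrative size)
with torch.no_grad():
    logits = model(image)["out"]           # (1, 4, 512, 512) per-pixel class scores
    class_map = logits.argmax(dim=1)       # (1, 512, 512) class index 0..3 per pixel
print(class_map.shape)
```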

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1
1
1

Citation Types

0
1
0

Year Published

2021
2021
2024
2024

Publication Types

Select...
3

Relationship

0
3

Authors

Journals

Cited by 3 publications (5 citation statements)
References 21 publications
“…In assessing the precision of brain image segmentation, this study employed Intersection over Union (IoU) metrics. These metrics determine the proportion of pixels shared between the target and predicted masks, dividing this by the total area present across both masks in the overall image [28]. The intersection (goal pixels ∩ prediction pixels) is comprised of all pixels found in both the prediction mask and the ground truth mask.…”
Section: B. Segmentation Accuracy (mentioning)
confidence: 99%
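The IoU computation described in this excerpt can be sketched as follows. This is a minimal NumPy illustration, not code from the cited study; the mask shapes and values are chosen purely for the example.

```python
import numpy as np

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection over Union between two binary masks.

    intersection = pixels present in both masks,
    union        = pixels present in either mask.
    """
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / float(union) if union > 0 else 1.0

# Example: two 4x4 masks that overlap on 3 of 5 foreground pixels.
pred = np.zeros((4, 4), dtype=int)
pred[0, :] = 1                      # 4 predicted foreground pixels
gt = np.zeros((4, 4), dtype=int)
gt[0, 1:4] = 1                      # 3 shared pixels
gt[1, 0] = 1                        # 1 ground-truth-only pixel
print(iou(pred, gt))                # intersection 3, union 5 -> 0.6
```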
“…Then, the intermediate features are flattened and then concatenated together to form the final representations f_pix^ms ∈ R^(C×S) and f_cate^ms ∈ R^(N×S). This process can be formulated as follows, where the output resolution of the pyramid pooling module is set to (1, 3, 6, 8) in order, and S = 1² + 3² + 6² + 8² = 110.…”
Section: Category Context Ensemble Module (mentioning)
confidence: 99%
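A minimal sketch of the pooling-and-flattening step the excerpt describes, assuming PyTorch; the function name, batch size, and feature-map dimensions are illustrative and not taken from the cited paper.

```python
import torch
import torch.nn.functional as F

def pyramid_pool_flatten(feat: torch.Tensor, bins=(1, 3, 6, 8)) -> torch.Tensor:
    """Adaptive-average-pool a feature map to each bin size, flatten the
    spatial dimensions, and concatenate: output shape (B, C, S) with
    S = sum(b * b for b in bins)."""
    pooled = [F.adaptive_avg_pool2d(feat, b).flatten(2) for b in bins]  # each (B, C, b*b)
    return torch.cat(pooled, dim=2)

feat = torch.randn(2, 256, 64, 64)       # (batch, channels, H, W), sizes assumed
out = pyramid_pool_flatten(feat)          # (2, 256, 110), since 1 + 9 + 36 + 64 = 110
print(out.shape)
```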
“…As can be seen in the first row, the NANet achieves an accuracy of 82.52% when the PPM module is not added. When the PPM is added, there are three choices for the output size of the PPM: (1, 2, 3, 6), (1, 3, 6, 8) and (1, 4, 8, 12). The three outputs were flattened and concatenated together to obtain the number of sampling points as 50, 110 and 225, respectively.…”
Section: Ablation Study for PPM (mentioning)
confidence: 99%
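As a quick check of the sampling-point counts quoted above, the same squared-sum rule (S = Σ b²) applied to the three bin configurations reproduces 50, 110 and 225:

```python
for bins in [(1, 2, 3, 6), (1, 3, 6, 8), (1, 4, 8, 12)]:
    # Each b×b pooled output contributes b*b flattened positions.
    print(bins, "->", sum(b * b for b in bins))
# (1, 2, 3, 6) -> 50, (1, 3, 6, 8) -> 110, (1, 4, 8, 12) -> 225
```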
“…In recent years, with the development of deep learning technology, semantic segmentation methods based on deep learning have also been used in a large number of natural image segmentation fields [10][11][12][13][14][15]. For plant fruit segmentation, Kang and Chen [16] used a deep convolutional neural network (entitled DaSNet) for real-time detection and semantic segmentation of apples in apple orchards, and finally obtained a segmentation accuracy of 86.5% for apples.…”
Section: Introduction (mentioning)
confidence: 99%