2022
DOI: 10.21203/rs.3.rs-1851240/v1
Preprint

Self-Supervised Overlapped Multiple Weed and Crop Species Leaf Segmentation under Complex Light Condition

Abstract: Weeds are unwanted plants that compete with target crops and absorb the required nutrients from the soil, sunlight, air, etc. Farmers struggle with weed identification and detection due to the homogeneous morphological features of weed and crop leaves. Computer vision is a sophisticated technique that is widely used for weed and crop leaf identification and detection in the agricultural field. This work uses three different datasets: "Deep Weed", the "Crop Weed Field Image Dataset" (CWFID), an…

Cited by 1 publication (1 citation statement)
References 25 publications
“…performing image transformation techniques and then trained it on four YOLO models, which achieved a mAP value of 73.1%. Mishra et al (2024) proposed a deep learning segmentation model named "Pyramid Scene Parsing Network-USegNet" (PSPUSegNet); compared with UNet, SegNet, USegNet, etc., PSPUSegNet obtained 96.98% accuracy, 97.98% recall, and 98.96% data accuracy on the Deep Weed dataset. Compared to the above studies, the proposed CSCW-YOLOV7 shows promising performance, even though the five weed datasets constructed in this paper present a complex phenotype scene with similarity between wheat and weeds, multiscale weeds, and overlapping weeds.…”
Section: Results Comparison With Existing Solutions (mentioning)
confidence: 99%
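The citation statement above quotes pixel-level accuracy and recall figures for PSPUSegNet. As a point of reference only, here is a minimal sketch of how such pixel-wise segmentation metrics are typically computed from a predicted binary mask and its ground-truth mask; it is not taken from either paper, and the function name, NumPy inputs, and 0/1 weed/background encoding are illustrative assumptions.

```python
import numpy as np

def pixel_metrics(pred: np.ndarray, target: np.ndarray) -> dict:
    """Pixel-wise accuracy and recall for binary segmentation masks.

    pred, target: arrays of the same shape containing 0/1 values
    (1 = weed pixel, 0 = background/crop pixel in this illustration).
    """
    pred = pred.astype(bool)
    target = target.astype(bool)

    tp = np.logical_and(pred, target).sum()        # weed pixels predicted as weed
    tn = np.logical_and(~pred, ~target).sum()      # background predicted as background
    fn = np.logical_and(~pred, target).sum()       # weed pixels missed by the model

    accuracy = (tp + tn) / pred.size               # fraction of all pixels classified correctly
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0  # fraction of true weed pixels recovered

    return {"accuracy": accuracy, "recall": recall}
```

In practice these per-image values are averaged over a test split (e.g. of the Deep Weed dataset) to produce the dataset-level figures quoted above.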