2019
DOI: 10.3390/s19112553

Extraction and Research of Crop Feature Points Based on Computer Vision

Abstract: Based on computer vision technology, this paper proposes a method for identifying and locating crops so that they can be successfully grasped during automatic picking. The method innovatively combines the YOLOv3 algorithm under the DarkNet framework with a point cloud image coordinate matching method, and achieves the paper's goal well. Firstly, RGB (red, green, and blue three-channel color) images and depth images are obtained by using the Kinect v…
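The abstract's core step, matching a 2D detection to 3D point cloud coordinates, amounts to back-projecting a pixel through the aligned depth image with the camera's pinhole intrinsics. Below is a minimal sketch of that idea; the intrinsic values and the detection box are placeholder assumptions, not the paper's calibration or output.

```python
# Back-project a detected pixel into 3D camera coordinates using an aligned
# depth image and pinhole intrinsics. Intrinsics below are placeholders.
import numpy as np

FX, FY = 525.0, 525.0   # assumed focal lengths (pixels)
CX, CY = 319.5, 239.5   # assumed principal point (pixels)

def pixel_to_point(u, v, depth_m):
    """Map pixel (u, v) with metric depth to a 3D point in the camera frame."""
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

# Hypothetical YOLOv3 box (x0, y0, x1, y1): take its center as the grasp pixel.
x0, y0, x1, y1 = 200, 150, 280, 230
u, v = (x0 + x1) // 2, (y0 + y1) // 2
depth_m = 0.85  # depth sampled at (v, u) from the aligned depth map, in metres
print(pixel_to_point(u, v, depth_m))
```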

Cited by 10 publications (9 citation statements) · References 25 publications
“…Here, VGG16 (Visual Geometry Group 16) [ 30 , 31 , 32 ], and the Inception v2 [ 33 ] algorithms are employed as feature extractors with Faster RCNN. Likewise, the YOLOv3 module makes use of Darknet-19 as a feature extractor [ 34 , 35 , 36 ]. The object detection algorithms are trained on the same image dataset and the same number of epochs.…”
Section: Results
confidence: 99%
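As a hedged illustration of the detection side of such comparisons, a Darknet-format YOLOv3 model can be run through OpenCV's dnn module; the cfg/weights file names here are placeholders for whatever trained model is at hand.

```python
# Run a Darknet-format YOLOv3 network with OpenCV's dnn module.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # placeholder files
img = cv2.imread("crop.jpg")

# YOLOv3 expects a normalized square blob; 416x416 is the usual input size.
blob = cv2.dnn.blobFromImage(img, scalefactor=1 / 255.0, size=(416, 416),
                             swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())
# Each output row holds [cx, cy, w, h, objectness, per-class scores...].
```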
“…Here, MobileNet and Inception v2 [ 45 ] classifiers were used with SSD for the feature extraction task [ 46 , 47 , 48 ]. Similarly, the Darknet-19 feature extractor is used in the YOLO v2 module [ 49 , 50 , 51 ]. The three detection frameworks are trained with the same lizard and insect image dataset and a similar amount of training time.…”
Section: Results
confidence: 99%
“…Figure A1a,b show an RGB color image and a depth image of an SSB, respectively. According to the 3D point cloud reconstruction process, 3D point clouds of the four SSBs with various diameters (30, 40, 50, and 60 cm) were reconstructed based on images captured by the Kinect sensor in three positions (P1, P2, and P3) and at three combinations of AOVs (V3, V4, and V6). In addition, a reference point cloud consisting of 90,000 points was constructed for each of the four SSBs.…”
Section: Appendix A
confidence: 99%
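A sketch of the fusion step implied here, merging single-view clouds captured at known sensor positions into one reference frame, assuming each pose is available as a 4x4 camera-to-world transform (the transforms standing in for P1, P2, and P3 are placeholders):

```python
# Fuse single-view point clouds into a common world frame using known poses.
import numpy as np

def transform_cloud(points, T):
    """Apply a 4x4 rigid transform T to an (N, 3) point array."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]

def fuse_views(views):
    """views: list of (points, pose) pairs -> merged (M, 3) cloud."""
    return np.vstack([transform_cloud(p, T) for p, T in views])
```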
“…Stereoscopic vision [19,20], depth camera (Kinect sensor or TOF camera) [21-25], and 3D laser lidar [26-28] sensors can only capture two-and-a-half-dimensional (2.5D) depth images at a single angle of view (AOV). Kinect sensor-based 3D plant reconstruction can be divided into single-view [29-31] and multiview reconstruction [32-34], the latter mainly using the iterative closest point (ICP) algorithm [24,25]. However, the rough registration of multiview point clouds must be solved first; otherwise, ICP cannot be used for accurate registration.…”
Section: Introduction
confidence: 99%
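The excerpt's point, that ICP only refines an alignment that is already roughly correct, can be illustrated with Open3D's ICP routine; `init` stands for the result of a coarse registration step and defaults to identity here only as a placeholder.

```python
# Point-to-point ICP refinement with Open3D; `init` should come from a rough
# (global) registration, since ICP alone cannot recover large misalignments.
import numpy as np
import open3d as o3d

def icp_refine(src_pts, tgt_pts, init=np.eye(4), max_corr_dist=0.02):
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(src_pts)
    tgt = o3d.geometry.PointCloud()
    tgt.points = o3d.utility.Vector3dVector(tgt_pts)
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_corr_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 rigid transform: source -> target
```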