2019
DOI: 10.1016/j.ifacol.2019.12.508

Towards Active Robotic Vision in Agriculture: A Deep Learning Approach to Visual Servoing in Occluded and Unstructured Protected Cropping Environments

Abstract: 3D Move To See (3DMTS) is a multi-perspective visual servoing method for unstructured and occluded environments, such as those encountered in robotic crop harvesting. This paper presents a deep learning method, Deep-3DMTS, for creating a single-perspective approach to 3DMTS through the use of a Convolutional Neural Network (CNN). The novel method is developed and validated via simulation against the standard 3DMTS approach. The Deep-3DMTS approach is shown to have performance equivalent to the standard 3DMTS baseline…
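The abstract describes Deep-3DMTS as a CNN that reproduces, from a single camera image, the servoing signal that standard 3DMTS derives from multiple perspectives. As a rough illustration only (the paper's network architecture, input resolution, and output parameterisation are not given here, so the class name and layer sizes below are assumptions), a minimal single-perspective regressor of this kind might look like the following Python sketch:

# Hypothetical sketch of a Deep-3DMTS-style regressor: a CNN mapping a single
# RGB image to the 3D end-effector direction that multi-camera 3DMTS would
# otherwise estimate. Architecture and sizes are assumptions, not the authors'
# published design.
import torch
import torch.nn as nn

class SinglePerspectiveGradientNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 3)  # predicted (dx, dy, dz) servoing direction

    def forward(self, rgb):  # rgb: (B, 3, H, W), values in [0, 1]
        x = self.features(rgb).flatten(1)
        return self.head(x)

# Usage: one forward pass per control step; the prediction is normalised and
# used as the end-effector velocity direction.
net = SinglePerspectiveGradientNet()
direction = net(torch.rand(1, 3, 224, 224))
step = direction / (direction.norm() + 1e-8)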

Cited by 23 publications (12 citation statements)
References 14 publications
“…Faster and more accurate vision algorithms should be developed, that work together with soft, conflict-free end effectors, as well as manipulators suitable for trees. Already, various papers are trying to address the above-mentioned problems, for example, Cornell University recently published a paper presenting a deep learning method that improves the performance of a robotic arm end-effector in unstructured and occluded environments [127], and at the same time there is research ongoing for novel end-effector designs that fit the needs of specific crops [128]. At this point it is worth mentioning that it is essential for developers to work closely with horticulturists to come up with suitable tree training methods, that facilitate robotic harvesting and at the same time provide high yield and high-quality products.…”
Section: Harvesting Robotic Systems
confidence: 99%
“…A machine learning approach is proposed in [1], built on top of [2]. In particular, as opposed to [2], a convolutional neural network (CNN) is utilized in [1], to estimate the gradient of the optimization function, making the method also applicable by utilizing only a single camera setup, instead of 9 cameras required in [2]. However, the training of the CNN requires a large and representative training dataset.…”
Section: Machine Learning Based Approaches
confidence: 99%
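The statement above contrasts the single-camera CNN of [1] with the nine-camera 3DMTS setup of [2], in which the gradient of the image-based objective is estimated numerically from the objective values seen by the cameras in the array. The sketch below illustrates that baseline idea only; the camera layout, spacing, and least-squares fit are assumptions and may differ from the original 3DMTS formulation:

# Hypothetical sketch of the multi-camera baseline: estimate the spatial
# gradient of an image-based objective from the scores observed at a 3x3
# camera array by fitting a local linear model (least squares).
import numpy as np

def estimate_gradient(camera_offsets, objective_values):
    """camera_offsets: (N, 3) camera positions relative to the array centre.
    objective_values: (N,) objective score computed from each camera's image.
    Returns the least-squares estimate of the objective's spatial gradient."""
    A = np.hstack([camera_offsets, np.ones((camera_offsets.shape[0], 1))])
    coeffs, *_ = np.linalg.lstsq(A, objective_values, rcond=None)
    return coeffs[:3]  # gradient with respect to x, y, z

# Usage with an assumed 3x3 array of cameras on a shallow dome (0.1 m radius),
# so all three gradient components are observable; random scores stand in for
# per-camera objective values.
r = 0.1
offsets = np.array([[x, y, r - np.sqrt(r**2 - x**2 - y**2)]
                    for x in (-0.05, 0.0, 0.05)
                    for y in (-0.05, 0.0, 0.05)])
scores = np.random.rand(9)
print(estimate_gradient(offsets, scores))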
“…fruits, stalks, or stems, are in many cases either partially or completely occluded, when the robot attempts to acquire information related to them. Furthermore, in most cases, the fruit cannot be concretely modelled, as it involves structural uncertainties and its location is not known a priori [1]. Therefore, the challenge of visually unveiling the OOI, by tackling the problem of avoiding possible occlusions, e.g.…”
Section: Introduction
confidence: 99%
“…Concerning detection, recognition, and positioning of especially clustered fruits, overlapping fruits and occluded fruits, there have been a number of studies on these aspects (e.g., Luo et al 2016; Mehta et al 2017; Le et al 2019; Lv et al 2019b; Yu et al 2019; Zapotezny-Anderson and Lehnert 2019; Ge et al 2019; Zhang and Gao 2020; Gené-Mola et al 2020; Kang and Chen 2020). From these studies, some…”
Section: Introduction
confidence: 99%