2016
DOI: 10.1155/2016/4265042

Robotic Visual Tracking of Relevant Cues in Underwater Environments with Poor Visibility Conditions

Abstract: Using visual sensors for detecting regions of interest in underwater environments is fundamental for many robotic applications. In particular, for an autonomous exploration task, an underwater vehicle must be guided towards features that are of interest. If the relevant features can be seen from a distance, then smooth control movements of the vehicle are feasible in order to position it close enough, with the final goal of gathering quality images. However, it is a challenging task for a robotic sy…

Cited by 13 publications (7 citation statements). References 27 publications.

“…Such visual attention-based cues are eventually exploited to make important navigational and other operational decisions. The classical approaches utilize features such as luminance, color, texture, and often depth information to extract salient features for enhanced object detection or template identification [39], [40], [41]. In recent years, the standard one-shot object detection models based on large-scale supervised learning have been effectively applied for vision-based tracking and following [42], [24].…”
Section: B. Visual Attention Modeling and Servoing (mentioning, confidence: 99%)
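The feature-contrast idea in the excerpt above can be illustrated with a minimal sketch: score each pixel by how far its smoothed color/luminance deviates from the image's global mean, the simplest form of the classical contrast-based saliency cited here. This is only an illustrative sketch assuming OpenCV and NumPy are available; the function name and parameters are hypothetical and not taken from the cited works.

```python
# Minimal sketch: classical color/luminance contrast saliency,
# in the spirit of the feature-contrast methods cited above.
# Illustrative only; not the cited authors' exact pipeline.
import cv2
import numpy as np

def contrast_saliency(bgr_image):
    """Per-pixel saliency as distance from the global mean Lab color."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB).astype(np.float32)
    blurred = cv2.GaussianBlur(lab, (5, 5), 0)       # suppress noise and fine texture
    mean_color = lab.reshape(-1, 3).mean(axis=0)      # global mean color in Lab
    saliency = np.linalg.norm(blurred - mean_color, axis=2)
    return cv2.normalize(saliency, None, 0.0, 1.0, cv2.NORM_MINMAX)

# Hypothetical usage: sal = contrast_saliency(cv2.imread("frame.png"))
```

Working in the Lab color space keeps luminance and chromatic contrast on comparable scales, which is one reason many classical contrast-based methods adopt it.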
“…The early approaches date back to the work of Edgington et al. [10], which uses binary morphology filters to extract salient features for automated event detection. Subsequent approaches adopt various feature contrast evaluation techniques that encode low-level image-based features (e.g., color, luminance, texture, object shapes) into super-pixel descriptors [33], [34], [36]. These low-dimensional representations are then exploited by heuristics or learning-based models to infer global saliency.…”
Section: B. SOD and SVAM by Underwater Robots (mentioning, confidence: 99%)
“…To this end, traditional approaches based on various feature contrast evaluation techniques [5], [12], [33] are often practical choices for saliency estimation by visually-guided underwater robots. These techniques encode low-level image-based features (e.g., color, texture, object shapes or contours) into superpixel descriptors [34], [35], [33], [36], [9] to subsequently infer saliency by quantifying their relative distinctness on a global scale. Such bottom-up approaches are computationally light and are useful as pre-processing steps for faster visual search [34], [2] and exploration tasks [5], [11].…”
Section: Introduction (mentioning, confidence: 99%)
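The excerpt above describes a bottom-up pipeline: segment the image into superpixels, attach a low-level descriptor (e.g., mean color) to each, and score saliency by each descriptor's distinctness relative to all others on a global scale. Below is a minimal sketch of that idea, assuming scikit-image and NumPy; the function name, parameter values, and the plain global-contrast scoring are illustrative assumptions rather than the exact methods of the cited references.

```python
# Minimal sketch: superpixel-level global-contrast saliency,
# following the bottom-up pipeline described in the excerpt above.
# Illustrative assumptions only; parameters are not from the cited works.
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def superpixel_saliency(rgb_image, n_segments=200):
    """Score each superpixel by the distinctness of its mean Lab color
    against all other superpixels (global contrast), then map back to pixels."""
    lab = rgb2lab(rgb_image)
    labels = slic(rgb_image, n_segments=n_segments, start_label=0)
    ids = np.unique(labels)
    # Mean Lab color per superpixel: the low-level descriptor.
    means = np.array([lab[labels == i].mean(axis=0) for i in ids])
    # Global contrast: sum of distances to every other descriptor.
    dists = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=2)
    scores = dists.sum(axis=1)
    scores = (scores - scores.min()) / (scores.max() - scores.min() + 1e-8)
    return scores[labels]   # per-pixel saliency map in [0, 1]
```

A hypothetical use would be thresholding the returned map to pick candidate regions for the vehicle to approach; because the scoring runs over a few hundred superpixels rather than every pixel, such bottom-up estimates stay computationally light, as the excerpt notes.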
“…In the underwater domain, however, existing research mainly focuses on salient feature extraction for enhanced object detection performance [19,52,71]. Hence, such methods do not provide a general solution for attention modeling that can facilitate faster visual search or better scene understanding.…”
Section: Introduction (mentioning, confidence: 99%)