2016
DOI: 10.1002/rob.21698
Robotic Coral Reef Health Assessment Using Automated Image Analysis

Abstract: This paper presents a system capable of autonomous surveillance and analysis of coral reef ecosystems using natural lighting. We describe our strategy to safely and effectively deploy a small marine robot to inspect a reef using its digital cameras. Image analysis using a radial basis function support vector machine (RBF‐SVM) in combination with the local binary pattern (LBP), Gabor, and Hue descriptors developed in this work is able to analyze the resulting image data automatically and reliably by learning from …

Cited by 20 publications (15 citation statements)
References 52 publications
“…More recent approaches are shifting to semantic segmentation, which gives more detailed (pixel‐level) information than classification alone. The first approaches performed image patch classification and thereafter reconstructed the segmentation of the entire image (Manderson et al, ; Shihavuddin et al, ). These kinds of patch‐based approaches, however, typically have low accuracy near the edges of the segmented regions.…”
Section: Related Work
confidence: 99%
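The patch-based pipeline this citation describes, and whose weakness near region edges it notes, can be sketched as a sliding-window classifier. Here `classify`, the window size `d`, and the stride are assumptions of this sketch, not details from the cited papers; `classify` stands in for a per-patch classifier such as the RBF‐SVM over LBP/Gabor/Hue features described in the abstract:

```python
import numpy as np

def patchwise_segment(image, classify, d, stride):
    """Classify each d x d window with a hypothetical per-patch
    classifier, then paint the window's area with that label to
    reconstruct a coarse full-image segmentation. The blocky output
    near region boundaries illustrates the low edge accuracy the
    citation mentions."""
    H, W = image.shape[:2]
    seg = np.zeros((H, W), dtype=int)
    for i in range(0, H - d + 1, stride):
        for j in range(0, W - d + 1, stride):
            seg[i:i + d, j:j + d] = classify(image[i:i + d, j:j + d])
    return seg
```

With `stride < d`, later windows overwrite earlier ones, so boundary pixels inherit whichever patch covered them last, which is one source of the edge errors noted above.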
“…When only a few annotated pixels are provided, a CNN can be trained on patches cropped around those labeled pixels, and a final image segmentation is obtained by joining the classification results for the patches. This strategy, which has been successfully applied in existing approaches (Beijbom et al, ; Manderson et al, ), is trained on n labeled patches, one per annotation. The training pairs have the form (X_d^(i,j), y^(i,j)), where X_d^(i,j) is a patch of dimensions d × d centered on the labeled pixel with coordinates (i,j), and y^(i,j) is a scalar representing the label of that pixel.…”
Section: Training Dense Semantic Segmentation With Sparse Pixel Labels
confidence: 99%
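The training-pair construction described above can be sketched as follows; the use of -1 to mark unlabeled pixels and the reflect padding at image borders are assumptions of this sketch, not details from the cited works:

```python
import numpy as np

def extract_training_pairs(image, labels, d):
    """Build (X_d^(i,j), y^(i,j)) pairs: a d x d patch centered on
    each sparsely labeled pixel, paired with that pixel's scalar label.
    `labels` uses -1 for unlabeled pixels (an assumption of this sketch).
    """
    r = d // 2
    # Pad with reflection so patches centered near the border stay d x d.
    padded = np.pad(image, ((r, r), (r, r), (0, 0)), mode="reflect")
    patches, targets = [], []
    for i, j in zip(*np.nonzero(labels >= 0)):
        # Pixel (i, j) in the original image sits at (i + r, j + r)
        # in the padded image, so this slice is centered on it.
        patches.append(padded[i:i + d, j:j + d])
        targets.append(labels[i, j])
    return np.stack(patches), np.array(targets)
```

Each annotated pixel yields exactly one training pair, matching the n-labeled-patches setup in the citation.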