2022
DOI: 10.48550/arxiv.2202.00791
Preprint

Mars Terrain Segmentation with Less Labels

Abstract: Planetary rover systems need to perform terrain segmentation to identify drivable areas as well as specific types of soil for sample collection. The latest Martian terrain segmentation methods rely on supervised learning, which is very data hungry and difficult to train when only a small number of labeled samples are available. Moreover, the semantic classes are defined differently for different applications (e.g., rover traversal vs. geological), and as a result the network has to be trained from scratch…

Cited by 6 publications (8 citation statements)
References 27 publications
“…For cases where data are gathered in an unsupervised manner or with scarce labels, such as autonomous visual data collection by a mobile robot, recent advances in self-supervised contrastive learning offer the advantage of optimizing the learning capabilities of the designed model, or of operating in conjunction with a semi-supervised learning approach tailored to the downstream task under examination. For instance, in [15], using the popular SimCLR approach [9], the authors perform Martian terrain segmentation with limited data over classes such as soil, bedrock, sand, big rock, rover, and background. Using supervised contrastive learning, Gao et al. [14] manually label a set of anchor patches to efficiently create a feature representation that can distinguish regions of different traversability.…”
Section: Unsupervised and Semi-supervised
confidence: 99%
“…Contrastive learning has emerged as an important self-supervised learning technique in which a network is trained on unlabeled data to maximize agreement between randomly augmented views of the same image and minimize agreement between views of different images [3,4]. By using these contrastive-pretrained weights (as opposed to supervised pretrained weights) as a starting point for supervised finetuning, contrastive pretraining has been shown to improve performance on Mars terrain segmentation when only limited annotated images are available [8]. This work extends prior work by finetuning the generalized representations obtained through contrastive pretraining on mixed-domain datasets to improve performance across multiple missions.…”
Section: Planetary Computer Vision
confidence: 99%
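The contrastive objective described in the statement above is, in SimCLR, the NT-Xent (normalized temperature-scaled cross-entropy) loss. A minimal NumPy sketch of that loss follows; the function name and the toy embeddings are illustrative assumptions, not code from the cited papers:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss as used by SimCLR. z1 and z2 hold projected embeddings
    of two augmented views of the same batch; row i of z1 and row i of z2
    form a positive pair, and all other rows serve as negatives."""
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)               # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize rows
    sim = z @ z.T / temperature                        # cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # index of each row's positive partner (view i pairs with view i+n)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of the positive against all non-self candidates
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

# toy data: views that agree should yield a lower loss than unrelated views
rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 8))
aligned = anchors + 0.01 * rng.normal(size=(4, 8))   # near-identical views
shuffled = rng.normal(size=(4, 8))                   # unrelated views
assert nt_xent_loss(anchors, aligned) < nt_xent_loss(anchors, shuffled)
```

Minimizing this loss pulls the two augmented views of each image together while pushing apart views of different images, which is the "maximize/minimize agreement" behavior the citation statement describes.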
“…The choice of pretrained weights used to initialize the encoder has been shown to play an important role in downstream segmentation performance. In particular, transferring ResNet weights pretrained with the SimCLR framework on unlabeled ImageNet images [4] enables better Mars terrain segmentation under limited labels than ResNet weights pretrained in a supervised fashion (i.e., with labels) on ImageNet [8]. In this work, we investigate whether the contrastive pretraining framework is advantageous and generalizes across different missions (M2020 vs. MSL) on a mixed-domain dataset.…”
Section: Semi-supervised Finetuning
confidence: 99%
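The weight-transfer step the statement above describes, seeding a segmentation encoder from a pretrained checkpoint while the decoder head starts fresh, can be sketched as follows. Everything here (the key names, the toy weight shapes, the stand-in "pretrained" arrays) is a hypothetical illustration of the pattern, not the actual architecture from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-in for a contrastive-pretrained checkpoint (e.g. a SimCLR run);
# a fixed array substitutes for actually learned encoder features
pretrained = {"encoder.w": np.full((8, 4), 0.1)}

def init_segmentation_model(pretrained_weights=None):
    """Build a toy encoder-decoder weight dict. The encoder is either
    randomly initialized or seeded from pretrained weights; the decoder
    head is always initialized from scratch for supervised finetuning."""
    weights = {"encoder.w": rng.normal(size=(8, 4))}
    if pretrained_weights is not None:
        # transfer only the keys the encoder shares with the checkpoint
        for k in weights.keys() & pretrained_weights.keys():
            weights[k] = pretrained_weights[k].copy()
    weights["decoder.w"] = rng.normal(size=(4, 3))  # fresh decoder head
    return weights

model = init_segmentation_model(pretrained)
assert np.allclose(model["encoder.w"], pretrained["encoder.w"])
```

In a real pipeline the same pattern appears as loading a checkpoint's state dict into the encoder before finetuning; the comparison the citation statement makes is between this initialization and supervised-pretrained (or random) encoder weights.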
“…A Simple Framework for Contrastive Learning (SimCLR, proposed in [4] and improved upon in [5]) has been used to train discriminative and performant feature extractors in a self-supervised manner across many domains. In [9], a deep segmentation network is pretrained on unlabeled images using SimCLR and then trained further in a supervised manner on a limited set of labeled segmentation data (only 161 images). This approach outperforms fully supervised learning approaches by 2–10%.…”
Section: Related Work
confidence: 99%