2013 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2013.269

Mesh Based Semantic Modelling for Indoor and Outdoor Scenes

Abstract: Semantic reconstruction of a scene is important for a variety of applications such as 3D modelling, object recognition and autonomous robotic navigation. However, most object labelling methods work in the image domain and fail to capture the information present in 3D space. In this work we propose a principled way to generate object labelling in 3D. Our method builds a triangulated meshed representation of the scene from multiple depth estimates. We then define a CRF over this mesh, which is able to capture th…
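The abstract describes a CRF defined over the faces of a triangulated mesh. As a rough illustration only (not the paper's actual potentials or inference scheme, which the truncated abstract does not specify), a pairwise CRF over face adjacencies with a Potts smoothness term and greedy ICM inference could be sketched as follows; `unary`, `edges`, and `potts_weight` are hypothetical inputs:

```python
import numpy as np

def energy(labels, unary, edges, potts_weight=1.0):
    """Total CRF energy: per-face unary costs plus a Potts penalty
    for every pair of adjacent faces with different labels."""
    e = unary[np.arange(len(labels)), labels].sum()
    for i, j in edges:
        if labels[i] != labels[j]:
            e += potts_weight
    return e

def icm(unary, edges, potts_weight=1.0, iters=10):
    """Iterated Conditional Modes: greedily update each face's label
    to the locally cheapest choice until no label changes."""
    n_faces, n_labels = unary.shape
    neighbours = [[] for _ in range(n_faces)]
    for i, j in edges:
        neighbours[i].append(j)
        neighbours[j].append(i)
    labels = unary.argmin(axis=1)  # initialise from the unary minimiser
    for _ in range(iters):
        changed = False
        for f in range(n_faces):
            costs = unary[f].copy()
            for l in range(n_labels):
                costs[l] += potts_weight * sum(
                    1 for nb in neighbours[f] if labels[nb] != l)
            best = int(costs.argmin())
            if best != labels[f]:
                labels[f] = best
                changed = True
        if not changed:
            break
    return labels
```

ICM is a simple local optimiser used here only to make the energy concrete; methods of this kind more commonly rely on graph cuts or message passing for inference.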

Cited by 87 publications (54 citation statements)
References 31 publications
“…1, is a crucial prerequisite for a variety of applications, including robot navigation [1], [2], semantic mapping [3], [4], wearable and/or assistive technology [5], [6], and change detection [7]. However, despite the large body of literature motivated by such applications [3], [4], [8]- [12], most existing approaches suffer from a variety of limitations. Offline reconstruction methods can achieve impressive results at city scale [13] and beyond, but cannot be used in a real-time setting.…”
Section: Introduction
confidence: 99%
“…This has been addressed by approaches that use scalable data structures and stream data between GPU and CPU memory [21], [22], but they use Kinect-like cameras that only work indoors [9], [10]. Approaches working outdoors usually take significant time to run [4], [8], [11], [23], do not work incrementally [12] or rely on LIDAR data [24]. Existing systems also do not cope well with moving objects.…”
Section: Introduction
confidence: 99%
“…For example, the work in [5] proposes a model isomorphic to a Markov Random Field (MRF) and a rich set of features to represent the scene objects and their relations, while in [3], MRFs are combined with segmentation trees for the recognition of objects. There are also examples relying on Conditional Random Fields (CRFs), like [6], to classify objects into four categories: wall, floor, ceiling and clutter, and the work presented in [7], where the faces of a triangular mesh representing the scene are assigned to object classes. CRFs are also used in [8], [9], and [10], in conjunction with common-sense information codified into an ontology, for the recognition of objects appearing in office and domestic scenes, and in [4] for the modeling of context in RGB images.…”
Section: Related Work
confidence: 99%
“…Regarding the exploitation of contextual information, one-shot systems are seriously limited by the sensor frustum and possible occlusions, given that they are able to observe only a portion of the objects and relations appearing in the inspected scene. Some approaches cope with this issue by registering a number of observations prior to the recognition process in order to obtain a wider view of the scene [5][6][7][8][9][10]. However, the time and computational resources needed for gathering and registering such observations prevents their use in most robotic applications.…”
confidence: 99%