2013
DOI: 10.1007/978-3-642-37447-0_45

Detecting Changes in Images of Street Scenes

Abstract: In this paper we propose a novel algorithm for detecting changes in street scenes when the vehicle revisits sections of the street at different times. The proposed algorithm detects structural geometric changes, changes due to dynamically moving objects, as well as changes in the street appearance (e.g. posters put up) between two traversal times. We exploit geometric, appearance and semantic information to determine which areas have changed and formulate the problem as an optimal image labeling …
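The abstract frames change detection as an optimal image labeling problem driven by geometric, appearance and semantic cues. As a rough, illustrative sketch of that kind of formulation (not the paper's actual model; the per-superpixel cue costs, the adjacency structure, the weights and the simple ICM solver below are all assumptions), labels can be assigned by minimizing a unary-plus-smoothness energy:

```python
# Minimal sketch (not the authors' implementation) of change detection posed as
# image labeling: each superpixel gets a binary label (changed / unchanged) by
# minimizing unary costs from geometric, appearance and semantic cues plus a
# Potts-style smoothness term over neighbouring superpixels, solved here with
# iterated conditional modes (ICM). All inputs and weights are hypothetical.
import numpy as np

def label_changes(geom_cost, app_cost, sem_cost, neighbors, smooth=0.5, iters=10):
    """geom_cost, app_cost, sem_cost: (N, 2) arrays of per-superpixel costs for
    labels {0: unchanged, 1: changed}; neighbors: list of neighbour index lists."""
    unary = geom_cost + app_cost + sem_cost            # combined data term
    labels = unary.argmin(axis=1)                      # independent initialisation
    for _ in range(iters):                             # ICM sweeps
        for i in range(len(labels)):
            # Potts smoothness: pay `smooth` for every disagreeing neighbour
            pair = np.array([sum(smooth for j in neighbors[i] if labels[j] != l)
                             for l in (0, 1)])
            labels[i] = (unary[i] + pair).argmin()
    return labels
```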

Cited by 26 publications (20 citation statements)
References 29 publications (31 reference statements)
“…Yin and Collins (2007) detect moving objects with a Belief Propagation approach using a 3D Markov Random Field (MRF). A similar method has been presented by Košecka (2013) to detect changes in street-scene images. Changes are differentiated as structural changes, appearance changes, or temporary dynamically moving objects.…”
Section: Change Detection in Computer Vision
confidence: 80%
“…Laser scanning provides precise 3D geometric information on the environment, which is of great interest for 3D mapping, localization, scene perception, motion tracking and navigation purposes. Studies from computer vision mainly use imagery for city and street scene change detection (Pollard and Mundy, 2007; Sakurada et al., 2013; Košecka, 2013; Eden and Cooper, 2008; Taneja et al., 2011, 2013). However, lidar (light detection and ranging) data (also referred to as laser scanning data, range data or lidar point clouds) have been proven to be an accurate data source for 3D urban reconstruction (Lafarge and Mallet, 2011; Chauve et al., 2010; Verma et al., 2006; Zhou and Neumann, 2010; Toshev et al., 2010; Banno et al., 2008; Poullis, 2013), infrastructure management and road inventory (Pu et al., 2011; Vosselman, 2012).…”
Section: Introduction
confidence: 99%
“…[16] instead proposed to detect changes based on the appearance and disappearance of 3D lines detected in the images. [17] also compares two sets of images captured at different time instants by first recovering a coarse structure of the scene. The images are segmented into superpixels for this purpose, and the corresponding superpixels across the old and new images are compared to reveal changes.…”
Section: Related Work
confidence: 99%
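The statement above summarizes a superpixel-based comparison of old and new images. A minimal Python sketch of that general idea is given below; the SLIC segmentation, chi-squared histogram distance and threshold are illustrative choices and not necessarily those of the cited work [17]:

```python
# Hedged sketch of the superpixel-comparison idea: segment the new image into
# superpixels and compare each region's intensity statistics against the same
# (assumed registered) region in the old image. Parameters are illustrative.
import numpy as np
from skimage.segmentation import slic

def superpixel_changes(old_img, new_img, n_segments=400, thresh=0.3):
    segments = slic(new_img, n_segments=n_segments, start_label=0)
    changed = np.zeros_like(segments, dtype=bool)
    for s in np.unique(segments):
        mask = segments == s
        h_old, _ = np.histogram(old_img[mask], bins=32, range=(0, 255), density=True)
        h_new, _ = np.histogram(new_img[mask], bins=32, range=(0, 255), density=True)
        dist = 0.5 * np.sum((h_old - h_new) ** 2 / (h_old + h_new + 1e-9))  # chi-squared
        if dist > thresh:                 # flag the whole superpixel as changed
            changed[mask] = True
    return changed
```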
“…For PC, we introduce a SIFT feature-based PC approach. Based on the literature (e.g., [1]), the LoC of a query live feature is measured according to its dissimilarity to the most similar normal feature. Firstly, every live/map image is represented as a collection of SIFT features with Harris-Laplace keypoints [13].…”
Section: A Pairwise Image Comparison (PC)
confidence: 99%
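As an illustration of the SIFT feature-based pairwise comparison (PC) described above, the sketch below scores each live-image feature by its distance to the most similar map-image feature. It uses OpenCV's SIFT detector rather than the Harris-Laplace keypoints mentioned in the quote, and the normalisation into a level-of-change (LoC) score is an assumption:

```python
# Rough sketch of SIFT-based pairwise image comparison: the level-of-change
# (LoC) of each live feature is its distance to the nearest map feature.
import cv2
import numpy as np

def feature_loc(live_img, map_img):
    sift = cv2.SIFT_create()
    kp_live, d_live = sift.detectAndCompute(live_img, None)
    _, d_map = sift.detectAndCompute(map_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.match(d_live, d_map)        # nearest map feature per live feature
    dists = np.array([m.distance for m in matches])
    loc = dists / (dists.max() + 1e-9)            # high value = likely changed
    return kp_live, loc
```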
“…For long-term map maintenance in dynamic environments, a robotic visual SLAM system must detect changed objects (e.g., furniture movement and building construction) in a live image with respect to the map, while ignoring nuisance changes (e.g., sensor noise, registration errors, and occlusions) during long-term multi-session navigation. One approach is to formulate the problem as a pair-wise image comparison (PC), comparing each live–map image pair using image differencing techniques [1]. However, this requires that a robot memorize every map image; hence, scaling to large environments is difficult.…”
Section: Introduction
confidence: 99%
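The pair-wise image comparison (PC) baseline mentioned above boils down to differencing each live image against its stored map counterpart. A deliberately simple sketch follows; the grayscale conversion, blur and threshold are assumptions and ignore the registration and occlusion issues the quote raises:

```python
# Illustrative PC baseline: blur to suppress small registration errors and
# sensor noise, then threshold the absolute difference of the image pair.
import cv2

def pc_change_mask(live_img, map_img, thresh=40, ksize=5):
    live = cv2.GaussianBlur(cv2.cvtColor(live_img, cv2.COLOR_BGR2GRAY), (ksize, ksize), 0)
    ref = cv2.GaussianBlur(cv2.cvtColor(map_img, cv2.COLOR_BGR2GRAY), (ksize, ksize), 0)
    diff = cv2.absdiff(live, ref)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask
```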