2014
DOI: 10.17690/0414241.1
Automated image-based reconstruction of building interiors – a case study

Cited by 9 publications (8 citation statements) | References 21 publications
“…Recent studies (Furukawa et al., ; Georgantas et al., ; Xiao and Furukawa, ) show that automatic techniques, such as dense image matching, are suitable provided that the environment supports automatic feature matching, for example, using the scale‐invariant feature transform (SIFT; Lowe, ). However, these automated methods are sensitive to errors caused by reflective or textureless surfaces (Furukawa et al., ; Jancosek and Pajdla, ; Lehtola et al., ), even if the camera is pre‐calibrated in a laboratory (Fig. ).…”
Section: Alternatives For Indoor Reconstruction
confidence: 99%
“…Large, smooth, uniform and mono-coloured surfaces are problematic to capture and reconstruct accurately with photogrammetry (Lehtola, Kurkela, & Hyyppä, 2014). Such surfaces lack the unique features that are essential for photogrammetric reconstruction.…”
Section: Discussion
confidence: 99%
“…Reference values for k₁ are in pixels per focal-length units, while EPOS η values are in the chosen pixel per w/4 units. For result comparison, η results are transformed into k₁ units in Figure 5 and Table 1 by using a multiplication factor γ² = (f/a)², where f is the focal length from reference bundle adjustment and a = w/4 is the chosen unit scale in EPOS. The checkerboard image (top) is corrected using the converged value of η obtained from checkerboard data; see Figure 5.…”
Section: Simulated Data
confidence: 99%
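The unit conversion quoted above can be sketched numerically. The values below for the focal length f, image width w, and converged η are illustrative assumptions, not figures from the cited paper; only the conversion rule η → k₁ via γ² = (f/a)², with a = w/4, comes from the quote:

```python
# Convert a distortion value eta (in EPOS "pixel per w/4" units)
# into k1 (pixels per focal-length units), per the quoted rule.
f = 3500.0   # focal length in pixels -- assumed example value
w = 4000.0   # image width in pixels  -- assumed example value
a = w / 4    # unit scale chosen in EPOS

gamma_sq = (f / a) ** 2   # multiplication factor gamma^2 = (f/a)^2

eta = 0.012               # converged EPOS value -- assumed example value
k1 = eta * gamma_sq       # k1 in reference bundle-adjustment units
print(gamma_sq, k1)
```

With these sample numbers, γ² = (3500/1000)² = 12.25, so k₁ = 0.012 × 12.25 = 0.147.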
“…In contrast to capturing objects by encircling them with a camera, which is a well-studied problem (see, e.g., [1]), problems arise when capturing large-scale (indoor) environments with minimum image overlap in order to reduce the effort of image acquisition. Such a method, if "black-boxed", would enable multiple applications, for example, in real-estate management and brokering (see, e.g., [2]). …”
Section: Introduction
confidence: 99%