The 2012 Data Fusion Contest organized by the Data Fusion Technical Committee (DFTC) of the IEEE Geoscience and Remote Sensing Society (GRSS) aimed at investigating the potential of very high spatial resolution (VHR) multi-modal/multi-temporal image fusion. Three different types of data sets, including spaceborne multi-spectral, spaceborne synthetic aperture radar (SAR), and airborne light detection and ranging (LiDAR) data collected over the downtown San Francisco area, were distributed during the Contest. This paper highlights the three awarded research contributions, which investigate (i) a new metric to assess urban density (UD) from multi-spectral and LiDAR data, (ii) simulation-based techniques to jointly use SAR and LiDAR data for image interpretation and change detection, and (iii) radiosity methods to improve surface reflectance retrievals of optical data in complex illumination environments. In particular, they demonstrate the usefulness of LiDAR data when fused with optical or SAR data. We believe these interesting investigations will stimulate further research in the related areas.
Because of its all-weather, day-and-night data acquisition capability, high-resolution spaceborne synthetic aperture radar (SAR) plays an important role in remote sensing applications such as Earth mapping. However, visual interpretation of SAR images is usually difficult, especially over urban areas. This paper presents a method for visually interpreting SAR images by means of optical and SAR images simulated from digital elevation models (DEMs) derived from LiDAR data. The simulated images are automatically geocoded, enabling direct comparison with the real SAR image. The simulation concept is demonstrated for the city center of Munich, where the comparison with TerraSAR-X data shows good similarity. The simulated optical image can be used for direct and quick identification of objects in the corresponding SAR image. Additionally, the simulated SAR image can separate multiple reflections that are mixed in the real SAR image, enabling easier interpretation of an urban scene.
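A core ingredient of simulating SAR images from a DEM is modeling the side-looking imaging geometry, in particular which cells fall into radar shadow behind tall structures. The abstract does not give the authors' simulator, so the following is only a minimal one-dimensional sketch of the shadow computation along a single range line; the function name `radar_shadow_mask` and the near-range illumination assumption are illustrative, not from the paper.

```python
import numpy as np

def radar_shadow_mask(heights, ground_spacing, incidence_deg):
    """Mark DEM cells hidden in radar shadow along one range line.

    Minimal sketch assuming a side-looking sensor illuminating from the
    near-range side (index 0) at the given incidence angle (from vertical).
    A cell is shadowed when the grazing ray over an earlier, taller cell
    passes above it. Not the authors' actual simulation chain.
    """
    # The grazing ray descends by ground_spacing / tan(incidence) per cell.
    drop = ground_spacing / np.tan(np.radians(incidence_deg))
    shadow = np.zeros(len(heights), dtype=bool)
    boundary = heights[0]                 # current shadow-boundary height
    for i in range(1, len(heights)):
        boundary -= drop
        if heights[i] < boundary:
            shadow[i] = True              # hidden behind an earlier blocker
        else:
            boundary = heights[i]         # this cell becomes the new blocker
    return shadow

# Toy profile: flat ground with one 10 m building at cell 1,
# 1 m cell spacing, 45 degree incidence angle.
heights = [0.0] * 12
heights[1] = 10.0
mask = radar_shadow_mask(heights, 1.0, 45.0)
```

At 45 degrees the 10 m building casts roughly a 10-cell shadow, so cells 2 through 10 are flagged while the far cells are illuminated again, which is exactly the kind of geometric cue a simulated image makes explicit for comparison with the real SAR scene.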
The grammar of facade structures is often reflected in regularly distributed signature patterns in high-resolution synthetic aperture radar (SAR) images. Such patterns can serve as a source of information for identifying changes to the facade. This paper presents a method for characterizing the layover area pertinent to regularly arranged facade structures, formulated on a general basis for both single azimuth/range SAR images and geocoded SAR images. The analysis rests on assumptions about the intensity distribution, the linear arrangement, and the regularity of point-like signatures. Two case studies on facades confirm the applicability of the method for different building types. Based on these results, the potential and limitations of the algorithm are discussed with respect to applications such as change detection and persistent scatterer interferometry.
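The regularity assumption above implies that point-like signatures along a facade line have gaps clustering around one dominant spacing. The paper's actual formulation is not given in the abstract; the sketch below, with the hypothetical helper `dominant_spacing` and tolerance parameter `tol`, only illustrates how such regularity could be quantified from 1-D scatterer positions.

```python
import numpy as np

def dominant_spacing(positions, tol=0.25):
    """Estimate the dominant spacing of point-like signatures along a line.

    Illustrative only: if facade scatterers are regularly arranged, the gaps
    between consecutive positions cluster around a single value. Returns the
    robust (median) gap and the fraction of gaps within `tol` of it.
    """
    gaps = np.diff(np.sort(np.asarray(positions, dtype=float)))
    spacing = float(np.median(gaps))               # robust central gap
    regular = np.abs(gaps - spacing) <= tol * spacing
    return spacing, float(regular.mean())

# Example: scatterers roughly 3 m apart with jitter, plus one outlier
s, frac = dominant_spacing([0.0, 3.1, 6.0, 9.2, 12.0, 20.0])
```

A high regularity fraction supports treating the pattern as a facade signature; a drop in that fraction between acquisitions could hint at a facade change, in the spirit of the applications discussed.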
This paper presents two change detection strategies based on the fusion of scene knowledge with two high-resolution SAR images (pre-event, post-event), focusing on individual buildings and facades. By avoiding dependence on the signal incidence angle, the methods increase flexibility for near-real-time SAR image analysis after unexpected events. Knowledge of the scene geometry is provided by digital surface models, which are integrated into an automated simulation processing chain. Strategy 1, based on the building fill ratio (BFR), detects building changes from change ratios computed over layover and shadow areas. Strategy 2, based on the wall fill position (WFP) derived from a geometric projection of facade layover pixels, enables the analysis of individual facades of buildings for which strategy 1 yields no clear decision. In a case study of the Munich city center, the sensitivity of the change detection methods is exemplified for destroyed and partly changed buildings. The results confirm the value of integrating prior knowledge from digital surface models into the analysis of high-resolution SAR images.
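The building-fill-ratio idea of strategy 1 can be sketched in a few lines: compare the fraction of bright pixels inside a building's predicted layover/shadow footprint before and after the event. The abstract names the quantity but not its exact formula, so the threshold, the relative-drop criterion, and the function names below are assumptions for demonstration only.

```python
import numpy as np

def building_fill_ratio(intensity, mask, threshold):
    """Fraction of a building's layover/shadow footprint filled by bright pixels.

    Hypothetical illustration of the BFR concept; the paper's actual
    definition may differ.
    """
    region = intensity[mask]
    return float(np.count_nonzero(region > threshold)) / region.size

def change_detected(pre, post, mask, threshold=0.5, drop=0.5):
    """Flag a building as changed when its fill ratio falls by more than `drop`.

    `drop` is a relative decrease (0.5 means the bright fraction halved);
    an assumed decision rule, not the authors' calibrated one.
    """
    bfr_pre = building_fill_ratio(pre, mask, threshold)
    bfr_post = building_fill_ratio(post, mask, threshold)
    return bfr_pre > 0 and (bfr_pre - bfr_post) / bfr_pre > drop

# Toy example: a collapsed building loses most of its bright layover returns
mask = np.ones((4, 4), dtype=bool)      # building footprint in image coords
pre = np.full((4, 4), 0.9)              # strong returns before the event
post = np.full((4, 4), 0.1)             # mostly dark afterwards
```

In this toy case the fill ratio drops from 1.0 to 0.0, so the building is flagged as changed; buildings without such a clear decision would then be passed to the facade-level WFP analysis of strategy 2.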