Hemispherical (fisheye) photography is a well-established approach for estimating the sky view factor (SVF). High-resolution urban models from LiDAR and oblique airborne photogrammetry can provide continuous SVF estimates over large urban areas, but such data are not always available and are difficult to acquire. Street view panoramas, by contrast, have become widely available in urban areas worldwide: Google Street View (GSV) maintains a global network of panoramas excluding China and several other countries, while Baidu Street View (BSV) and Tencent Street View (TSV) focus their panorama acquisition within China and have covered hundreds of cities there. In this paper, we address this data gap from a big data perspective by presenting and validating a method for automatically estimating SVF from massive numbers of street view photographs. Comparisons were made with SVF estimates derived from two independent sources, a LiDAR-based Digital Surface Model (DSM) and an oblique airborne photogrammetry-based 3D city model (OAP3D), yielding correlation coefficients of 0.863 and 0.987, respectively. These comparisons demonstrate the capacity of the proposed method to provide reliable SVF estimates. Additionally, we apply the method to about 12,000 GSV panoramas to characterize the spatial distribution of SVF over Manhattan Island in New York City. Although this is a proof-of-concept study, it shows the potential of the proposed approach to support urban climate and urban planning research. Further development is nevertheless needed, however, before the approach can be delivered to the urban climate and urban planning communities for practical applications.
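The abstract does not spell out how SVF is computed from a hemispherical image, but a common approach, once a panorama has been reprojected to a fisheye view and binarized into sky and non-sky pixels, is the classic annulus-weighting scheme: split the hemisphere into concentric rings of equal zenith-angle width and weight each ring's sky fraction by the sine of its zenith angle. The sketch below assumes an equiangular fisheye projection and a precomputed binary sky mask; the function and parameter names are illustrative, not the authors' implementation.

```python
import numpy as np

def sky_view_factor(sky_mask, n_rings=36):
    """Estimate SVF from a binary sky mask in an equiangular (equidistant)
    fisheye projection: sky = True, obstruction = False.

    The hemisphere is split into n_rings concentric rings of equal
    zenith-angle width; each ring's sky-pixel fraction is weighted by
    sin(zenith), so the weights sum to ~1 for a fully open sky."""
    h, w = sky_mask.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radius = min(cy, cx)
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - cy, xx - cx) / radius        # 0 at zenith, 1 at horizon
    inside = r <= 1.0
    ring = np.minimum((r * n_rings).astype(int), n_rings - 1)
    svf = 0.0
    for i in range(n_rings):
        cells = inside & (ring == i)
        if not cells.any():
            continue
        p_sky = sky_mask[cells].mean()             # sky fraction in ring i
        theta = (i + 0.5) * (np.pi / 2) / n_rings  # mid-ring zenith angle
        svf += (np.pi / (2 * n_rings)) * np.sin(theta) * p_sky
    return svf

# Sanity checks: a fully open sky should give SVF near 1, and blocking
# half of every azimuth should give SVF near 0.5.
open_sky = np.ones((401, 401), dtype=bool)
half_blocked = open_sky.copy()
half_blocked[:, :200] = False
```

In practice the hard part is the sky/non-sky segmentation of the panorama, for which the paper's automatic pipeline is designed; the geometric integration above is the cheap final step.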
The temporal and spatial distribution of solar energy in urban areas is highly variable because of the complex building structures present. Traditional GIS-based solar radiation models rely on two-dimensional (2D) digital elevation models to calculate insolation, without considering building facades and complicated three-dimensional (3D) shading effects. Inspired by the 'texture baking' technique used in computer graphics, we propose a full 3D method for computing and visualizing urban solar radiation based on image-space data representation. First, a surface mapping approach is employed to project each 3D triangular mesh onto a 2D raster surface whose cell size determines the calculation accuracy. Second, the positions and surface normal vectors of each 3D triangular mesh are rasterized onto the associated 2D raster using barycentric interpolation techniques. An efficient compute unified device architecture (CUDA)-accelerated shadow-casting algorithm is presented to accurately capture shading effects for large-scale 3D urban models. Solar radiation is calculated for each raster cell based on the input raster layers containing such information as slope, aspect, and shadow masks. Finally, a resulting insolation raster layer is produced for each triangular mesh and is represented as an RGB texture map using a color ramp. Because a virtual city can be composed of tens of thousands of triangular meshes and texture maps, a texture atlas technique is presented to merge thousands of small images into a single large image, batching draw calls so that a large number of textured meshes can be rendered efficiently on the graphics processing unit.
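The rasterization step described above (projecting per-vertex positions and normals onto a 2D raster via barycentric interpolation) can be sketched for a single triangle as follows. This is a minimal illustration of the technique, not the paper's CUDA implementation; the function name, UV parameterization, and grid layout are assumptions.

```python
import numpy as np

def rasterize_attributes(uv, attrs, grid_size):
    """Rasterize per-vertex attributes (e.g. 3D positions or normals) of one
    triangle onto a square 2D raster via barycentric interpolation.

    uv:    (3, 2) triangle vertex coordinates in [0, 1] texture space.
    attrs: (3, k) attribute vector attached to each vertex.
    Returns a (grid_size, grid_size, k) raster; cells outside the
    triangle hold NaN."""
    a, b, c = uv
    out = np.full((grid_size, grid_size, attrs.shape[1]), np.nan)
    # Sample at cell centres in UV space.
    u = (np.arange(grid_size) + 0.5) / grid_size
    uu, vv = np.meshgrid(u, u)                    # 'xy' indexing: rows = v
    p = np.stack([uu, vv], axis=-1)               # (g, g, 2) sample points
    # Barycentric weights via the standard dot-product formulation.
    v0, v1 = b - a, c - a
    v2 = p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    w1 = (d11 * d20 - d01 * d21) / denom
    w2 = (d00 * d21 - d01 * d20) / denom
    w0 = 1.0 - w1 - w2
    inside = (w0 >= 0) & (w1 >= 0) & (w2 >= 0)    # point lies in the triangle
    weights = np.stack([w0, w1, w2], axis=-1)     # (g, g, 3)
    out[inside] = weights[inside] @ attrs         # interpolate attributes
    return out
```

Once every triangle's positions and normals are baked into such rasters, per-cell slope, aspect, and shadow masks can be derived in image space, which is what makes the subsequent insolation calculation and GPU shadow casting uniform and parallel-friendly.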
Virtual geographic environments (VGEs) are extensively used to explore the relationship between humans and their environments. Crowd simulation provides a way for VGEs to represent crowd behaviors observed in the real world. The social force model (SFM) can simulate interactions among individuals, but it does not sufficiently account for inter-group and intra-group behaviors, which are important components of crowd dynamics. We present the social group force model (SGFM), based on an extended SFM, to simulate group behaviors in VGEs, focusing on avoidance behaviors between different social groups and coordination behaviors among subgroups belonging to the same social group. In our model, psychological repulsion between social groups makes them avoid each other as whole groups while group members stick together as much as possible; when a social group is separated into several subgroups, the rear subgroups try to catch up to keep the whole group cohesive. We compare the simulation results of the SGFM with those of the extended SFM and with phenomena observed in videos, and then discuss the role of Virtual Reality (VR) in visualizing crowd simulations. The results indicate that the SGFM can enhance social group behaviors in crowd dynamics.
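The abstract gives no equations, but the two ingredients it names, inter-group repulsion and intra-group cohesion, can be illustrated with a toy force computation in the spirit of the social force model: an exponentially decaying repulsion between agents of different groups, plus a pull toward the agent's own group centroid once it lags behind. All function names, parameters, and force forms below are illustrative assumptions, not the SGFM's actual formulation.

```python
import numpy as np

def sgfm_forces(pos, group_id, A=2.0, B=0.5, k_coh=1.0, r_coh=1.5):
    """Toy per-agent force combining two SGFM-style ingredients:
    - SFM-style repulsion A*exp(-d/B) between agents of *different* groups,
      so whole groups steer around each other;
    - a cohesion force pulling an agent toward its own group's centroid
      once it trails more than r_coh behind, so rear members catch up.

    pos: (n, 2) agent positions; group_id: (n,) group labels."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j or group_id[i] == group_id[j]:
                continue
            d = pos[i] - pos[j]
            dist = np.linalg.norm(d)
            # Psychological repulsion decays exponentially with distance.
            forces[i] += A * np.exp(-dist / B) * d / dist
        mates = group_id == group_id[i]
        centroid = pos[mates].mean(axis=0)
        offset = centroid - pos[i]
        gap = np.linalg.norm(offset)
        if gap > r_coh:
            # Lagging member is drawn back toward the group centroid.
            forces[i] += k_coh * (gap - r_coh) * offset / gap
    return forces
```

A full SGFM would add the usual SFM driving and obstacle terms and, per the paper, subgroup-level coordination; the sketch only shows how group-aware terms slot into the same force-summation framework.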