In this study, a limestone rock core specimen with dimensions of 6.94 cm × 4.95 cm was subjected to tensile loading in a Brazilian test, producing rough fracture surfaces. Following the Brazilian test, roughness angles were measured with a laser scanner (a NextEngine 3D Desktop scanner) along one side of the rock specimen. Seventeen profiles were studied along the width of the core at a 0.3 mm interval. Approximately 10,000 points were produced for each profile, some lying in the "+" and some in the "−" direction along each profile. The maximum and minimum roughness angles were calculated as 65.58° and 1.56 × 10⁻⁵°, respectively, and the average roughness angle of the profiles is 13.87°. The percentages of roughness angles between 13 and 14 degrees were 2.65% and 2.70% for the "−" and "+" directions on the rock surface, respectively. Mathematical analyses of the 17 profiles showed that the roughness profiles can be expressed by 21st- to 30th-degree polynomial equations with a standard deviation of approximately 10⁻⁴ degrees.
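As a rough illustration of the profile analysis described above, the sketch below fits a high-degree polynomial to a synthetic roughness profile and derives point-to-point roughness angles from local slopes. The profile data, the chosen degree (25), and the angle definition are assumptions for demonstration, not the study's actual measurements or procedure.

```python
import numpy as np

# Hypothetical roughness profile: x and z in mm (stand-in for scanned data).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 49.5, 10_000)                  # ~10,000 points per profile
z = 0.2 * np.sin(0.8 * x) + 0.02 * rng.standard_normal(x.size)

# Roughness angle between consecutive points from the local slope (assumed definition).
angles = np.degrees(np.arctan(np.abs(np.diff(z)) / np.diff(x)))
print(f"max angle: {angles.max():.2f} deg, mean angle: {angles.mean():.2f} deg")

# Fit a high-degree polynomial (degree 25 chosen arbitrarily within the 21-30 range).
poly = np.polynomial.Polynomial.fit(x, z, deg=25)
residuals = z - poly(x)
print(f"residual std of the fit: {residuals.std():.2e} mm")
```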
In geospatial applications such as urban planning and land use management, automatic detection and classification of earth objects are essential and primary tasks. Among prominent semantic segmentation algorithms, DeepLabV3+ stands out as a state-of-the-art CNN. Although the DeepLabV3+ model is capable of extracting multi-scale contextual information, there is still a need for multi-stream architectures and alternative training approaches that can leverage multi-modal geographic datasets. In this study, a new end-to-end dual-stream architecture that considers geospatial imagery was developed based on the DeepLabV3+ architecture. Spectral datasets other than RGB improved semantic segmentation accuracy when used as additional channels alongside height information. Furthermore, both the proposed data augmentation and the Tversky loss function, which is sensitive to imbalanced data, yielded better overall accuracies. The new dual-stream architecture achieved overall semantic segmentation accuracies of 88.87% and 87.39% on the Potsdam and Vaihingen datasets, respectively. Overall, enhancing established semantic segmentation networks shows great potential for improving model performance, and the contribution of geospatial data as a second stream complementing RGB was explicitly demonstrated.
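A minimal sketch of a Tversky loss of the kind referenced above is given below (PyTorch). The α/β weights and tensor shapes are illustrative assumptions; the study's actual hyperparameters and implementation are not specified here.

```python
import torch

def tversky_loss(probs, targets, alpha=0.7, beta=0.3, eps=1e-6):
    """Tversky loss for semantic segmentation; penalizes FP and FN asymmetrically.

    probs, targets: float tensors of shape (N, C, H, W), probs already in [0, 1].
    alpha weights false positives, beta weights false negatives (illustrative values).
    """
    dims = (0, 2, 3)
    tp = (probs * targets).sum(dims)
    fp = (probs * (1.0 - targets)).sum(dims)
    fn = ((1.0 - probs) * targets).sum(dims)
    tversky = tp / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky.mean()
```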
Boundary extraction in remote sensing is an important task in studies such as environmental observation, risk management, and monitoring urban growth. Although significant progress has been made with the various methods proposed, there are still issues to improve, especially in terms of accuracy, efficiency, and speed. In this study, dual-stream network architectures of three models that perform boundary extraction using the normalized Digital Surface Model (nDSM), the Normalized Difference Vegetation Index (NDVI), and the Near-Infrared (IR) band as the second stream are described. Model I is designed as the original HED, whereas the second streams of Models II, III, and IV use nDSM, nDSM + NDVI, and nDSM + NDVI + IR, respectively. By comparing models trained on these different data combinations, the contribution of each input to boundary extraction performance was revealed. The models were trained with boundary maps produced from the International Society for Photogrammetry and Remote Sensing (ISPRS) Potsdam dataset and with input datasets augmented by rotation and mirroring. When the test results of the two-stream, multi-data models are evaluated, Model IV achieved 11% higher recall than the original HED. The outcomes clearly reveal the importance of using multispectral bands, height data, and vegetation information as input data for boundary extraction, in addition to the commonly used RGB images.
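For context, the snippet below shows one plausible way to assemble the second-stream input for a Model IV-style configuration by computing NDVI from the red and near-infrared bands and stacking it with nDSM and IR. The channel order, normalization, and function name are assumptions, not the paper's exact preprocessing.

```python
import numpy as np

def build_second_stream(ndsm, nir, red, eps=1e-6):
    """Assemble a hypothetical nDSM + NDVI + IR second-stream input of shape (H, W, 3)."""
    ndvi = (nir - red) / (nir + red + eps)       # Normalized Difference Vegetation Index
    return np.stack([ndsm, ndvi, nir], axis=-1)  # channel order is an assumption
```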
Co-registration of point clouds of partially scanned objects is the first step of the 3D modeling workflow. The aim of co-registration is to merge overlapping point clouds by estimating the spatial transformation parameters. In the computer vision and photogrammetry domain, one of the most popular methods is the ICP (Iterative Closest Point) algorithm and its variants; 3D Least Squares (LS) matching methods exist as well (Gruen and Akca, 2005). Co-registration methods commonly use least squares (LS) estimation, in which the unknown transformation parameters of the (floating) search surface are functionally related to the observations of the (fixed) template surface. Here, the stochastic properties of the search surface are usually omitted. This omission is expected to be minor and does not disturb the solution vector significantly; however, the a posteriori covariance matrix is affected by the neglected uncertainty of the function values of the search surface, which deteriorates the realistic precision estimates. To overcome this limitation, we propose a method in which the stochastic properties of both the observations and the parameters are considered under an errors-in-variables (EIV) model. Experiments were carried out using diverse laser scanning datasets, and the results of the EIV approach were compared with the ICP and conventional LS matching methods.
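To illustrate the conceptual difference the abstract draws between conventional LS (errors only in the observations) and an errors-in-variables treatment (errors on both sides), the sketch below contrasts ordinary least squares with a total least squares solution on a generic linear model A x ≈ b. This is only a didactic analogue, not the authors' surface-matching formulation.

```python
import numpy as np

def ls_fit(A, b):
    # Conventional LS: only the observations b are treated as stochastic.
    return np.linalg.lstsq(A, b, rcond=None)[0]

def tls_fit(A, b):
    # EIV / total least squares: both A and b are treated as noisy.
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]             # right singular vector of the smallest singular value
    return -v[:n] / v[n]
```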