Bas-relief is characterized by its presentation of intrinsic shape properties and/or detailed appearance using material raised to varying degrees above a background plane. However, many bas-relief modeling methods cannot manipulate scene details well. We propose a simple and effective solution for two kinds of bas-relief modeling (i.e., structure-preserving and detail-preserving) that differs from prior tone-mapping-like methods. Our idea originates from the observation that a typical 3D model can be decomposed, in its normal field, into a piecewise-smooth base layer and a detail layer. Proper manipulation of the two layers enables both structure-preserving and detail-preserving bas-relief modeling. We solve the modeling problem in a discrete geometry processing setup that uses normal-based mesh processing as its theoretical foundation. Specifically, using a two-step mesh smoothing mechanism as a bridge, we transform the bas-relief modeling problem into a discrete space and solve it in a least-squares manner. Experiments and comparisons with other methods show that (i) geometric details are better preserved under high compression ratios, and (ii) structures are clearly preserved without shape distortion or interference from details.
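To make the layer-manipulation idea concrete, here is a minimal, hypothetical height-field analogue of the approach; the paper itself operates on mesh normals, so this is a sketch of the idea rather than the authors' method. The gradient field stands in for the normal field, Gaussian smoothing stands in for the two-step smoothing decomposition, and the relief is rebuilt by a least-squares (Poisson) reintegration. All function names and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.fft import dctn, idctn

def bas_relief(height, base_sigma=5.0, base_scale=0.2, detail_scale=1.0):
    """Illustrative sketch: split the gradient field of a height map into
    a smooth base layer and a residual detail layer, attenuate the base,
    and rebuild the relief by least-squares (Poisson) reintegration."""
    gy, gx = np.gradient(height)
    # Base layer: piecewise-smooth component of the gradient field.
    bx, by = gaussian_filter(gx, base_sigma), gaussian_filter(gy, base_sigma)
    # Recombine with separate scales; detail_scale > base_scale is the
    # detail-preserving setting, the reverse is structure-preserving.
    tx = base_scale * bx + detail_scale * (gx - bx)
    ty = base_scale * by + detail_scale * (gy - by)
    # Least-squares reintegration: solve Laplace(z) = div(t) with
    # Neumann boundaries via a DCT-based Poisson solver.
    div = np.gradient(tx, axis=1) + np.gradient(ty, axis=0)
    h, w = height.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    denom = 2.0 * (np.cos(np.pi * yy / h) + np.cos(np.pi * xx / w) - 2.0)
    denom[0, 0] = 1.0                      # pin the free constant mode
    z = dctn(div, norm='ortho') / denom
    z[0, 0] = 0.0
    return idctn(z, norm='ortho')

# Demo on a synthetic bumpy height field (a real input would come from
# a 3D model): low-frequency structure plus high-frequency detail.
yy0, xx0 = np.mgrid[0:128, 0:128] / 128.0
height = np.sin(6 * yy0) + 0.05 * np.sin(60 * xx0)
relief = bas_relief(height)                # detail-preserving compression
```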
Accurate segmentation of lungs in pathological thoracic computed tomography (CT) scans plays an important role in pulmonary disease diagnosis, yet it remains challenging because of the variability of pathological lung appearances and shapes. In this paper, we propose a novel segmentation algorithm based on random forests (RFs), a deep convolutional network, and multi-scale superpixels for accurately segmenting pathological lungs from thoracic CT images. A pathological thoracic CT image is first segmented on the basis of multi-scale superpixels, and deep, texture, and intensity features extracted from the superpixels are taken as inputs to a group of RF classifiers. By fusing the classification results of the RFs with a fractional-order gray correlation approach, we obtain an initial segmentation of the pathological lungs. We finally apply a divide-and-conquer strategy to segmentation refinement, combining contour correction of the left lungs with region repairing of the right lungs. Our algorithm is tested on a group of thoracic CT images affected by interstitial lung diseases. Experiments show that it achieves high segmentation accuracy, with an average DSC of 96.45% and PPV of 95.07%. Compared with several existing lung segmentation methods, our algorithm exhibits robust performance on pathological lung segmentation. It can be employed reliably for lung field segmentation of pathological thoracic CT images with high accuracy, which helps radiologists detect pulmonary diseases and quantify their shape and size in routine clinical practice.
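As a rough illustration of the superpixel-plus-RF stage only (not the authors' implementation, which also fuses deep CNN features across scales and combines classifiers via fractional-order gray correlation), the sketch below over-segments a slice with SLIC, pools simple intensity and texture statistics per superpixel, and trains a scikit-learn random forest. All parameter values and the synthetic data are assumptions.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def superpixel_features(image, n_segments=400):
    """Over-segment a CT slice and pool intensity and texture (LBP)
    statistics per superpixel; the full pipeline would concatenate
    deep CNN features here as well."""
    labels = slic(image, n_segments=n_segments, compactness=0.1,
                  channel_axis=None, start_label=0)
    lbp = local_binary_pattern(image, P=8, R=1.0)
    feats = [[image[m].mean(), image[m].std(), lbp[m].mean(), lbp[m].std()]
             for m in (labels == i for i in range(labels.max() + 1))]
    return labels, np.asarray(feats)

# Demo on synthetic data (a real pipeline would use CT slices with
# radiologist-annotated lung masks, and one RF per superpixel scale).
rng = np.random.default_rng(0)
ct_slice = rng.random((128, 128))
labels, X = superpixel_features(ct_slice)
y = rng.integers(0, 2, size=len(X))        # stand-in per-superpixel labels
rf = RandomForestClassifier(n_estimators=100).fit(X, y)
lung_mask = rf.predict(X)[labels]          # map predictions back to pixels
```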
Purpose: Several negative factors, such as juxta-pleural nodules, pulmonary vessels, and image noise, make accurately segmenting lungs from computed tomography (CT) images a complex task. We propose a novel hybrid automated algorithm based on random forests to address these issues. Our method aims to eliminate the effect of these factors and generate accurate segmentations of lungs from CT images. Methods: Our algorithm consists of five main steps: image preprocessing, lung region extraction, trachea elimination, lung separation, and contour correction. A lung CT image is first preprocessed with a novel normal-vector-correlation-based image denoising approach and decomposed into a group of multiscale subimages. A modified superpixel segmentation method is then applied to the first-level subimage to generate a set of superpixels, and a random forest classifier segments the lungs by classifying the superpixels of each subimage based on features extracted from them. The initial lung segmentation is further refined through trachea elimination using an iterative thresholding approach (a generic sketch of such a scheme follows this abstract), lung separation based on the context information of the image sequence, and contour correction with a corner detection technique. Results: Our algorithm is tested on a set of CT images affected by interstitial lung diseases. Experiments show that it achieves high lung segmentation accuracy, with a Jaccard index of 0.9638 and a Dice similarity coefficient of 0.9867 against ground truth. Additionally, its Dice similarity coefficient is on average 7.7% higher than those of the compared conventional lung segmentation methods and 1% higher than that of a deep learning method. Conclusions: Our algorithm segments lungs from CT images with good performance in a fully automatic fashion, and it is of great assistance for lung disease detection in computer-aided detection systems.
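For the trachea-elimination step, an iterative thresholding scheme of the Ridler-Calvard (isodata) kind is a plausible reading of the abstract; the sketch below shows that generic scheme applied to separating air-filled from tissue voxels, not the authors' exact procedure. The Hounsfield-unit values in the demo are illustrative.

```python
import numpy as np

def iterative_threshold(values, tol=0.5, max_iter=100):
    """Generic isodata-style iterative thresholding: alternate between
    splitting the intensities at t and resetting t to the midpoint of
    the two class means, until t stabilizes."""
    t = values.mean()
    for _ in range(max_iter):
        lo, hi = values[values <= t], values[values > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t

# Demo: separate dark, air-filled (trachea-like) voxels from soft tissue.
rng = np.random.default_rng(0)
values = np.concatenate([rng.normal(-950, 30, 500),   # air, in HU
                         rng.normal(-100, 80, 500)])  # soft tissue, in HU
t = iterative_threshold(values)
air_mask = values <= t
```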
Normal estimation is a crucial first step for numerous light detection and ranging (LiDAR) data-processing algorithms, from building reconstruction, road extraction, and ground-cover classification to scene rendering. For LiDAR point clouds in urban environments, this paper presents a robust method that estimates normals by constructing an octree-based hierarchical representation of the data and detecting a group of sufficiently large consistent neighborhoods at multiple scales. Consistent neighborhoods are determined mainly from the observation that an urban environment typically comprises regular objects (e.g., buildings, roads, and the ground surface) and irregular objects (e.g., trees and shrubs), and that the surfaces of most regular objects can be approximated by a group of local planes. Even in the frequent presence of heavy noise and anisotropic point sampling in LiDAR data, our method estimates robust normals for various kinds of objects in urban environments, and the estimated normals support more accurate segmentation and identification of the objects while preserving their sharp features and complete outlines. The proposed method was experimentally validated on both synthetic and real urban LiDAR datasets and compared with state-of-the-art methods.
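A minimal sketch of the underlying idea, assuming PCA plane fits and k-nearest-neighbor queries in place of the paper's octree machinery: grow a point's neighborhood across scales and keep the largest one that still fits a local plane within tolerance, so that normals average over large regions on planar surfaces but stop at sharp features. The neighborhood sizes, residual threshold, and function names are all illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def pca_normal(pts):
    """Normal of the best-fit plane through pts: the right singular
    vector associated with the smallest singular value."""
    c = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(c, full_matrices=False)
    return vt[-1]

def consistent_normal(points, tree, i, sizes=(10, 20, 40, 80), tau=0.01):
    """Simplified stand-in for multiscale consistent neighborhoods:
    keep the largest scale whose plane-fit residual stays below tau."""
    p = points[i]
    _, idx = tree.query(p, k=sizes[0])
    normal = pca_normal(points[idx])          # fallback: smallest scale
    for k in sizes[1:]:
        _, idx = tree.query(p, k=k)
        nbrs = points[idx]
        n = pca_normal(nbrs)
        if np.abs((nbrs - nbrs.mean(axis=0)) @ n).mean() > tau:
            break                             # neighborhood left the plane
        normal = n
    return normal

# Demo: a thin, noisy slab, whose normals should come out near (0, 0, ±1).
rng = np.random.default_rng(0)
points = rng.random((2000, 3)); points[:, 2] *= 0.005
tree = cKDTree(points)
normals = np.array([consistent_normal(points, tree, i)
                    for i in range(len(points))])
```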