Recent works have shown that floating polygons can be an interesting alternative to traditional superpixels, especially for analyzing scenes with strong geometric signatures, such as man-made environments. Existing algorithms produce homogeneously sized polygons that fail to capture thin geometric structures and over-partition large uniform areas. We propose a kinetic approach that offers more flexibility in polygon shape and size. The key idea is to progressively extend pre-detected line segments until they meet each other. Our experiments demonstrate that the output partitions both contain fewer polygons and better capture geometric structures than those delivered by existing methods. We also show the applicative potential of the method when used as a preprocessing step for object contouring.
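Below is a minimal, discrete-time sketch of the segment-growth idea described in this abstract: pre-detected line segments extend from both tips at constant speed, and a tip freezes when it leaves the domain or when the extended segment would cross another one. The actual method relies on an exact kinetic data structure driven by an event queue; all names, parameters and coordinates here are illustrative.

```python
# Toy, discrete-time simulation of kinetic segment growth (not the exact method).
import numpy as np

def seg_intersect(p, q, a, b, eps=1e-9):
    """Return True if segment p-q crosses segment a-b."""
    d1, d2 = q - p, b - a
    denom = d1[0] * d2[1] - d1[1] * d2[0]          # 2D cross product of directions
    if abs(denom) < eps:
        return False                               # parallel: ignored in this sketch
    r = a - p
    t = (r[0] * d2[1] - r[1] * d2[0]) / denom
    u = (r[0] * d1[1] - r[1] * d1[0]) / denom
    return -eps <= t <= 1 + eps and -eps <= u <= 1 + eps

def grow_segments(segments, bbox, step=1.0, max_iters=1000):
    """Extend both tips of each segment at constant speed until collision."""
    segs = [np.array(s, dtype=float) for s in segments]   # each entry: 2x2 array (p, q)
    active = [[True, True] for _ in segs]                  # which tips still grow
    (xmin, ymin), (xmax, ymax) = bbox
    for _ in range(max_iters):
        if not any(a or b for a, b in active):
            break
        for i in range(len(segs)):
            for tip, sign in ((0, -1), (1, +1)):
                if not active[i][tip]:
                    continue
                p, q = segs[i]
                u = (q - p) / np.linalg.norm(q - p)        # unit growth direction
                new_pt = (p if tip == 0 else q) + sign * step * u
                # Freeze the tip when it leaves the domain.
                if not (xmin <= new_pt[0] <= xmax and ymin <= new_pt[1] <= ymax):
                    active[i][tip] = False
                    continue
                cand_p, cand_q = (new_pt, q) if tip == 0 else (p, new_pt)
                # Freeze the tip when the extended segment would cross another one.
                if any(seg_intersect(cand_p, cand_q, s[0], s[1])
                       for j, s in enumerate(segs) if j != i):
                    active[i][tip] = False
                else:
                    segs[i] = np.array([cand_p, cand_q])
    return segs

if __name__ == "__main__":
    detected = [((20, 50), (40, 50)), ((50, 20), (50, 40)), ((60, 70), (80, 70))]
    for s in grow_segments(detected, bbox=((0, 0), (100, 100))):
        print(np.round(s, 1))
```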
Converting point clouds into concise polygonal meshes in an automated manner is an enduring problem in Computer Graphics. Prior works, which typically operate by assembling planar shapes detected from the input points, have largely overlooked the scalability issue of processing a large number of shapes. As a result, they tend to produce overly simplified meshes, with assembling approaches that can hardly digest more than one hundred shapes in practice. We propose a shape-assembling mechanism that is at least one order of magnitude more efficient, both in time and in the number of processed shapes. Our key idea relies on the design of a kinetic data structure for partitioning the space into convex polyhedra. Instead of slicing all the planar shapes exhaustively, as prior methods do, we create a partition in which shapes grow at constant speed until they collide and form polyhedra. This simple idea produces a lighter yet meaningful partition with a lower algorithmic complexity than an exhaustive partition. A watertight polygonal mesh is then extracted from the partition with a min-cut formulation. We demonstrate the robustness and efficacy of our algorithm on a variety of objects and scenes in terms of complexity, size and acquisition characteristics. In particular, we show that the method can both faithfully represent piecewise-planar structures and approximate freeform objects, while offering high resilience to occlusions and missing data.
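The min-cut extraction step can be illustrated with a small sketch, assuming the kinetic partition has already produced convex cells, their adjacency facets, and a per-cell occupancy score (e.g., derived from visibility of the input points). The graph construction below is a generic s-t cut labeling, not the paper's exact energy; the scores and weights are placeholders, and the example requires networkx.

```python
# Illustrative s-t min-cut labeling of partition cells into inside/outside.
import networkx as nx

def extract_surface(cells, facets, inside_score, facet_area, lam=1.0):
    """
    cells        : iterable of cell ids
    facets       : dict facet_id -> (cell_a, cell_b) adjacency
    inside_score : dict cell_id -> value in [0, 1] (occupancy evidence)
    facet_area   : dict facet_id -> area, used as the smoothness weight
    Returns the facets separating 'inside' cells from 'outside' cells.
    """
    G = nx.DiGraph()
    source, sink = "inside", "outside"
    for c in cells:
        s = inside_score[c]
        # Unary terms: cost of labeling the cell against its evidence.
        G.add_edge(source, c, capacity=s)          # cut if the cell is labeled outside
        G.add_edge(c, sink, capacity=1.0 - s)      # cut if the cell is labeled inside
    for f, (a, b) in facets.items():
        # Pairwise term: penalize label changes across large facets.
        w = lam * facet_area[f]
        G.add_edge(a, b, capacity=w)
        G.add_edge(b, a, capacity=w)
    _, (reachable, _) = nx.minimum_cut(G, source, sink)
    inside = set(reachable) - {source}
    return [f for f, (a, b) in facets.items() if (a in inside) != (b in inside)]

if __name__ == "__main__":
    cells = [0, 1, 2]
    facets = {"f01": (0, 1), "f12": (1, 2)}
    inside_score = {0: 0.9, 1: 0.8, 2: 0.1}        # cell 2 looks empty
    facet_area = {"f01": 2.0, "f12": 1.0}
    print(extract_surface(cells, facets, inside_score, facet_area, lam=0.1))
```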
We introduce a pipeline that reconstructs buildings of urban environments as concise polygonal meshes from airborne LiDAR scans. It consists of three main steps: classification, building contouring, and building reconstruction, the last two steps being achieved with computational geometry tools. Our algorithm demonstrates its robustness, flexibility and scalability by producing accurate and compact 3D models over large and varied urban areas in only a few minutes.
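As a rough illustration of the three-step pipeline (classification, contouring, reconstruction), the toy code below classifies points by a simple height threshold, contours them with a convex hull, and extrudes an LOD1-style block. The actual pipeline uses dedicated classification and computational-geometry steps, so every function and parameter here is a simplification. Requires numpy and shapely.

```python
# Toy end-to-end illustration of the classification / contouring / reconstruction steps.
import numpy as np
from shapely.geometry import MultiPoint

def reconstruct_block(points_xyz, ground_height=0.0, min_height=2.0):
    pts = np.asarray(points_xyz, dtype=float)
    # 1) Classification: keep points sufficiently above the ground (placeholder rule).
    building = pts[pts[:, 2] - ground_height > min_height]
    if len(building) < 3:
        return None
    # 2) Contouring: 2D footprint of the building points (convex hull here).
    footprint = MultiPoint([(float(x), float(y)) for x, y in building[:, :2]]).convex_hull
    if footprint.geom_type != "Polygon":
        return None
    # 3) Reconstruction: extrude the footprint to the mean roof height.
    roof_height = float(building[:, 2].mean())
    return {"footprint": list(footprint.exterior.coords),
            "base": ground_height, "top": roof_height}

if __name__ == "__main__":
    cloud = [(0, 0, 0.1), (5, 5, 0.2),                              # ground returns
             (1, 1, 9.8), (1, 3, 10.1), (3, 3, 10.0), (3, 1, 9.9)]  # roof returns
    print(reconstruct_block(cloud))
```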
Learning-based approaches are now commonly used to extract building rooftops from overhead imagery. However, converting the boundaries of segmented objects from raster format to vector coordinates remains a challenging problem. Building on recent advances in multi-task learning, we propose a fast and scalable approach, based on a polygonal partitioning of the space and discrete optimization, that delivers accurate and simple vectorized building rooftops, which we compare to those produced by state-of-the-art techniques.
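A hedged sketch of the final vectorization step: each polygon of the partition receives a rooftop/background label from the segmentation scores, and adjacent rooftop polygons are merged into vector outlines. The simple per-cell thresholding below stands in for the discrete optimization mentioned above; names and values are illustrative, and the example requires shapely.

```python
# Toy labeling and merging of partition cells into vectorized rooftop outlines.
from shapely.geometry import Polygon
from shapely.ops import unary_union

def vectorize_rooftops(partition, cell_scores, threshold=0.5):
    """
    partition   : list of shapely Polygons covering the image plane
    cell_scores : mean rooftop probability of each polygon (same order)
    Returns one outer ring per connected group of rooftop polygons.
    """
    building_cells = [poly for poly, s in zip(partition, cell_scores) if s >= threshold]
    merged = unary_union(building_cells)           # dissolve shared edges
    geoms = getattr(merged, "geoms", [merged])     # MultiPolygon or single Polygon
    return [list(g.exterior.coords) for g in geoms if not g.is_empty]

if __name__ == "__main__":
    # Two adjacent cells covering a rooftop, plus one background cell.
    cells = [Polygon([(0, 0), (2, 0), (2, 1), (0, 1)]),
             Polygon([(0, 1), (2, 1), (2, 2), (0, 2)]),
             Polygon([(2, 0), (4, 0), (4, 2), (2, 2)])]
    print(vectorize_rooftops(cells, cell_scores=[0.9, 0.8, 0.2]))
```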