This thesis proposes a multi-camera active-vision reconfiguration system that selects camera poses online to improve the shape recovery of a priori unknown, markerless, deforming objects in dynamic environments. The objectives of shape recovery are defined as surface-sampling accuracy and shape completeness. The completeness objective is generalized for both solid and surface-based objects as the maximization of surface visibility; thus, improving the recovered shape of target objects is shown to be analogous to maximizing surface-area visibility. This improvement is achieved through the online reconfiguration of multiple cameras. The system developed herein is composed of a shape recovery method, a robust tracking algorithm, and a multi-camera reconfiguration method.

The shape recovery method is based on a modular fusion technique that produces a complete 3D mesh model of the target object. The method fuses triangulation data with a visual hull to maximize recovery accuracy. The modularity of the method lies in the ability to modify data-filtering techniques to improve modelling accuracy and to interchange stereo-correspondence features. The adaptive particle filtering algorithm produces a deformation estimate of the recovered model from the tracking data. The algorithm automatically adapts to the quantity of tracking data available and to changes in the object's dynamics. The modularity of the algorithm allows modifications in terms of the number of particles and motion models for task-specific implementations. The reconfiguration method consists of a robust stereo-visibility objective function, workspace discretization, and a path planner. The complete system can improve the shape recovery of a priori unknown deforming objects in obstacle-laden environments when compared to static-camera methods.

To validate the proposed system, extensive simulations and experiments were conducted.
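The adaptive particle filtering described above can be illustrated with a minimal sketch. The following is a bootstrap particle filter tracking a single 3D surface point under a constant-velocity motion model; the state layout, noise levels, and systematic resampling scheme are illustrative assumptions, not the exact implementation developed in this thesis.

```python
# Minimal bootstrap particle filter sketch for tracking one 3D surface point.
# State per particle: [x, y, z, vx, vy, vz]. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, dt=1.0, process_noise=0.05):
    """Propagate particles one step under a constant-velocity model."""
    particles[:, :3] += dt * particles[:, 3:]
    particles += rng.normal(0.0, process_noise, particles.shape)
    return particles

def update(particles, measurement, meas_noise=0.1):
    """Weight particles by the likelihood of the observed 3D point."""
    d2 = np.sum((particles[:, :3] - measurement) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / meas_noise**2)
    return w / w.sum()

def resample(particles, weights):
    """Systematic resampling back to a uniform-weight particle set."""
    n = len(particles)
    positions = (np.arange(n) + rng.random()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[np.minimum(idx, n - 1)].copy()

# Track a point moving at a constant 0.1 units per step along x.
n = 500
particles = np.zeros((n, 6))
particles[:, :3] = rng.normal(0.0, 0.5, (n, 3))  # uncertain initial position
for t in range(1, 21):
    truth = np.array([0.1 * t, 0.0, 0.0])
    z = truth + rng.normal(0.0, 0.1, 3)          # noisy 3D observation
    particles = predict(particles)
    weights = update(particles, z)
    particles = resample(particles, weights)

estimate = particles[:, :3].mean(axis=0)  # posterior-mean position estimate
print(np.round(estimate, 2))
```

The adaptive variant discussed in the thesis would additionally vary the particle count `n` and the motion model with the available tracking data; this sketch fixes both for brevity.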
The simulations compared the system against static multi-camera systems and against ideal camera placement, where the object model was a priori known and the cameras' dynamics were unconstrained. Simulation results showed that the proposed methodology outperformed static cameras and approached the performance of ideal camera placement in a dynamic, obstacle-laden environment. The experimental results showed a similar improvement in shape recovery when compared to a static-camera system in an obstacle-laden environment.
A novel methodology is proposed herein to estimate the three-dimensional (3D) surface shape of unknown, markerless deforming objects through a modular multi-camera vision system. The methodology is a generalized formal approach to shape estimation for a priori unknown objects. Accurate shape estimation is accomplished through a robust, adaptive particle filtering process. The estimation process yields a set of surface meshes representing the expected deformation of the target object. The methodology is based on the use of a multi-camera system with a variable number of cameras and a range of object motions. The numerous simulations and experiments presented herein demonstrate the proposed methodology's ability to accurately estimate the surface deformation of unknown objects, as well as its robustness to object loss under self-occlusion and to varying motion dynamics.

Typical motion-capture methods utilize an articulated object model (i.e., a skeleton model) that is fit to the recovered 3D data [34-38]. The deformation of articulated objects is defined as the change in pose and orientation of the articulated links in the object. The accuracy of these methods is quantified by the angular joint error between the recovered model and ground truth. Markered motion-capture methods yield higher-resolution shape recovery compared to articulated-object-based methods, but depend on engineered surface features [3,39]. Several markerless motion-capture methods depend on off-line, user-assisted processing for model generation [40-42]. High-resolution motion-capture methods fit a known mesh model to the capture data to improve accuracy [10,43,44]. Similarly, the known object model and material properties can be used to further improve the shape recovery [45]. The visual hull technique can create a movie-strip motion-capture sequence of objects [8,46-48] or estimate the 3D background by removing the dynamic objects in the scene [49].
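The visual hull technique mentioned above can be sketched as silhouette-based voxel carving: a voxel survives only if it projects inside every camera's silhouette. The toy example below uses two orthographic views of a sphere purely for illustration; the calibrated perspective cameras assumed by the cited methods would replace the axis-aligned projections.

```python
# Hedged sketch of visual hull carving on a voxel grid (two orthographic
# views along the x- and y-axes; all dimensions are illustrative).
import numpy as np

# Ground-truth object: a unit sphere sampled on a 40^3 grid over [-1, 1]^3.
n = 40
axis = np.linspace(-1, 1, n)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
inside = X**2 + Y**2 + Z**2 <= 1.0

# Silhouettes: orthographic projections along +x and +y (any object voxel
# on a viewing ray makes that pixel part of the silhouette).
sil_x = inside.any(axis=0)   # image in the (y, z) plane
sil_y = inside.any(axis=1)   # image in the (x, z) plane

# Carving: keep voxels that fall inside BOTH silhouettes.
hull = sil_x[np.newaxis, :, :] & sil_y[:, np.newaxis, :]

print(inside.sum(), hull.sum())
```

The printed counts show the hull is a superset of the true object: with only two views the carving cannot recover concavities or the circular cross-section unseen by either camera, which is why the fusion with triangulation data described above is needed for accuracy.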
All these methods require some combination of off-line processing, a priori known models, and constrained workspaces to produce a collection of models at each demand instant, resulting in a movie-strip representation. Deformation estimation, however, is absent.

Multi-camera deformation-estimation methods commonly implement either a Kalman filter (KF) [50-53], a particle filter [54], or particle swarm optimization (PSO) [55] to track an object in a motion-capture sequence. Articulated-motion deformation-prediction methods rely on a skeleton model of the target object. KFs were successfully implemented to estimate joint deformation for consecutive demand instants [2,13]. PSO-based methods were also shown to be successful in deformation estimation for articulated objects [56,57]. Mesh models combined with a KF tracking process produce greater surface accuracy for deformation estimation [3,14]. Patch-based methods track independent surface patches through an extended Kalman filter (EKF) [58] and a particle filter [59], producing deformation estimates of each patch. Many active-vision ...
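The KF-based joint-deformation estimation cited above can be sketched in its simplest form: a constant-velocity Kalman filter that tracks a single joint angle and predicts its value one demand instant ahead. The state layout, matrices, and noise levels below are illustrative assumptions, not those of any specific cited method.

```python
# Hedged sketch: constant-velocity Kalman filter predicting one joint angle
# one demand instant ahead. All matrices and noise levels are assumptions.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [angle, rate]
H = np.array([[1.0, 0.0]])              # only the angle is observed
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.04]])                  # measurement noise covariance

x = np.array([[0.0], [0.0]])            # initial state estimate
P = np.eye(2)                           # initial state covariance

def kf_step(x, P, z):
    # Predict the state at the next demand instant.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured joint angle z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# The joint angle ramps at 0.1 rad per instant; measurements are noisy.
rng = np.random.default_rng(1)
for t in range(1, 31):
    z = 0.1 * t + rng.normal(0.0, 0.2)
    x, P = kf_step(x, P, z)

predicted_next = float((F @ x)[0, 0])   # deformation estimate for t = 31
print(round(predicted_next, 2))
```

An articulated-motion method applies one such filter (or a joint, higher-dimensional state) per skeleton joint; the patch-based EKF and particle-filter variants cited above replace the linear model with per-patch nonlinear dynamics.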