With the rise of affordable processing power and off-the-shelf apparatus supporting 3D imaging, there is a growing need for reliable and fast calibration tools that enable timely and accurate data gathering. When confronted with a choice of camera calibration tools, Zhang's and Tsai's methods are not only the most cited but also the most widely available solutions. Zhang's calibration is often chosen by default, based on the assumption that it is more accurate; however, it typically involves extensive manual data gathering compared to Tsai's approach. Here, we demonstrate that there is no significant difference in accuracy between Tsai's and Zhang's approaches in terms of stereo matching across the variety of readily available 3D devices tested. Furthermore, the trade-off between measurement accuracy and setup and data-acquisition time is decisively in favour of Tsai. This paper also presents a new algorithm for extracting points from images of checkerboards attached to calibration objects.
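For context only, the sketch below illustrates Zhang-style planar calibration from multiple checkerboard views, assuming OpenCV; the board dimensions, square size, and image folder are placeholder assumptions, and this is not the paper's point-extraction algorithm or experimental pipeline.

```python
# Illustrative sketch of Zhang-style planar calibration with OpenCV.
# Board geometry and image folder are assumptions for this example;
# this is NOT the paper's checkerboard point-extraction algorithm.
import glob
import cv2
import numpy as np

BOARD_SIZE = (9, 6)      # inner corners per row/column (assumed)
SQUARE_SIZE = 0.025      # square edge length in metres (assumed)

# 3-D board corner coordinates in the board plane (Z = 0).
objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.png"):     # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
    if not found:
        continue
    # Refine detected corners to sub-pixel accuracy.
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

# Zhang's method: intrinsics and distortion from multiple views of a plane.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print(f"RMS reprojection error: {rms:.4f} px")
print("Camera matrix:\n", K)
```

The data-gathering cost discussed above comes from the need to capture many such planar views; Tsai-style calibration, by contrast, is typically performed from a single view of a known 3-D target, which is largely what keeps its acquisition time low.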
This work presents a robust, low-cost framework for real-time marker-based 3-D human expression modeling using off-the-shelf stereo web-cameras and inexpensive adhesive markers applied to the face. The system has low computational requirements, runs on standard hardware, and is portable with minimal setup time and no training. It does not require a controlled laboratory environment (lighting or setup) and is robust to varying conditions such as illumination, facial hair, and skin tone. Stereo web-cameras perform 3-D marker tracking to recover rigid head motion and the non-rigid motion of facial expressions. Tracked markers are then mapped onto a 3-D face model driven by a virtual muscle animation system. Muscle inverse kinematics update the muscle contraction parameters from the marker motion to recreate the performer's expressions on a virtual character. The parametrization of the muscle-based animation encodes a facial performance with very little bandwidth. Additionally, a radial basis function mapping approach was used to remap motion capture data to any face model, enabling the automated creation of a personalized 3-D face model and animation system from 3-D data. The expressive power of the system and its ability to recognize new expressions were evaluated on a group of test subjects with respect to the six universally recognized facial expressions. The results show that the abstract muscle definition reduces the effect of noise in the motion capture data and allows the seamless animation of any anthropomorphic virtual face model with data acquired from a human face performance.
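As an illustration of the remapping step only, the sketch below shows how a radial basis function warp built from a handful of landmark correspondences can carry tracked marker positions onto a different face model. It uses SciPy's RBFInterpolator as a stand-in for the paper's mapping; the landmark and marker arrays are placeholder assumptions, not data from the paper.

```python
# Illustrative sketch of RBF-based remapping of tracked markers onto a
# different face model, using SciPy's RBFInterpolator. The landmark and
# marker arrays are random placeholders, not data from the paper.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Corresponding landmarks (e.g. eye corners, nose tip, mouth corners)
# measured once in the capture space and once on the target face model.
capture_landmarks = rng.normal(size=(12, 3))        # placeholder
model_landmarks = 1.2 * capture_landmarks + 0.05    # placeholder

# Thin-plate-spline RBF warp from capture space to model space.
warp = RBFInterpolator(capture_landmarks, model_landmarks,
                       kernel="thin_plate_spline")

# Remap one frame of tracked marker positions onto the target model.
tracked_markers = rng.normal(size=(30, 3))          # placeholder frame
remapped = warp(tracked_markers)                    # shape (30, 3)
print(remapped.shape)
```

In the framework described above, the remapped marker motion would then drive the muscle inverse kinematics rather than being applied to the mesh directly.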