Figure 1: 3DPeople Dataset. We present a synthetic dataset with 2.5 million frames of 80 subjects (40 female / 40 male) performing 70 different actions. The dataset covers a wide range of body shapes, skin tones, and clothing outfits, and provides 640 × 480 RGB images from multiple viewpoints, 3D geometry of the body and clothing, 3D skeletons, depth maps, optical flow, and semantic information (body parts and cloth labels). In this paper we use the 3DPeople dataset to model the geometry of dressed humans.

Abstract

Recent advances in 3D human shape estimation build upon parametric representations that model the shape of the naked body very well but are not appropriate for representing clothing geometry. In this paper, we present an approach to model dressed humans and predict their geometry from single images. We contribute to three fundamental aspects of the problem: a new dataset, a novel shape parameterization algorithm, and an end-to-end deep generative network for predicting shape.

First, we present 3DPeople, a large-scale synthetic dataset with 2.5 million photo-realistic images of 80 subjects performing 70 activities and wearing diverse outfits. Besides providing textured 3D meshes for clothes and body, we annotate the dataset with segmentation masks, skeletons, depth, normal maps, and optical flow. Together, this makes 3DPeople suitable for a plethora of tasks.

We then represent the 3D shapes using 2D geometry images. To build these images we propose a novel spherical area-preserving parameterization algorithm based on the optimal mass transportation method. We show that this approach improves on existing spherical maps, which tend to shrink the elongated parts of full-body models, such as the arms and legs, leaving the geometry images incomplete.

Finally, we design a multi-resolution deep generative network that, given an input image of a dressed human, predicts his/her geometry image (and thus the clothed body shape) in an end-to-end manner.
We obtain very promising results in jointly capturing body pose and clothing shape, both on synthetic validation data and on in-the-wild images.
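The abstract's central representation is the geometry image: a regular 2D grid in which each pixel stores the 3D coordinates of a surface point, so that a mesh can be processed by ordinary image networks. As a minimal sketch (an illustration of the general concept, not the paper's parameterization algorithm), the snippet below builds a geometry image for a unit sphere by sampling a spherical parameterization on a regular grid:

```python
import numpy as np

def sphere_geometry_image(h=64, w=64):
    """Sample a unit sphere into an h x w x 3 geometry image.

    Each pixel (i, j) stores the (x, y, z) coordinates of one surface
    point; rows index the polar angle, columns the azimuth.
    """
    theta = np.linspace(0.0, np.pi, h)        # polar angle per row
    phi = np.linspace(0.0, 2 * np.pi, w)      # azimuth per column
    T, P = np.meshgrid(theta, phi, indexing="ij")
    gim = np.stack([np.sin(T) * np.cos(P),
                    np.sin(T) * np.sin(P),
                    np.cos(T)], axis=-1)
    return gim

gim = sphere_geometry_image()
# Every pixel lies on the unit sphere, so its norm is 1.
radii = np.linalg.norm(gim, axis=-1)
```

An area-preserving parameterization, as proposed in the paper, would distribute these samples so that equal pixel areas correspond to equal surface areas, which is what keeps thin parts such as arms and legs from collapsing to a few pixels.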
One of the major challenges facing regulatory risk assessment today is to speed up the assessment of sublethal detrimental threshold effects of existing and new chemical products. Recent advances in imaging allow the behaviour of individuals under a given stress to be monitored in real time. Light is a common stressor for many different organisms: fish larvae and many invertebrate species respond to light by altering their behaviour. The water flea Daphnia magna, like many other zooplanktonic species, shows a marked diel vertical phototactic swimming response to light, driven by fish predation. The aim of this study was to develop a high-throughput image analysis to study changes in the vertical swimming behaviour to light of D. magna first-reproductive adult females exposed throughout their entire life to 0.1 and 1 µg/L of four psychiatric drugs: diazepam, fluoxetine, propranolol, and carbamazepine. Experiments were conducted using a new custom-designed, vertically oriented device with four 50 mL chambers, controlled by Noldus software (Netherlands). Changes in speed, preferred area (bottom vs. upper regions), and animal aggregation were analysed in groups of animals under consecutive periods of darkness and apical light stimuli of different intensities. The results indicated that light intensity increased speed, but low light intensities allowed individual responses to the studied drugs to be better discriminated. The four tested drugs decreased the response of exposed organisms to light: individuals moved less, stayed closer to the bottom, and, at low light intensities, were closer to each other. At high light intensities, however, exposed individuals were less aggregated. Propranolol, carbamazepine, and fluoxetine were the compounds affecting behaviour most. Our results indicate that psychiatric drugs at environmentally relevant concentrations alter the vertical phototactic behaviour of D. magna individuals, and that it is possible to develop appropriate high-throughput image analysis devices to measure those responses.
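The study reports three behavioural endpoints from tracked animals: swimming speed, preferred area (bottom vs. upper regions), and aggregation. As a hedged sketch (not the authors' Noldus pipeline; the metric definitions here are assumptions), these can be computed from tracked (x, y) positions as follows:

```python
import numpy as np

def behaviour_metrics(tracks, chamber_height, fps):
    """Compute speed, bottom preference, and aggregation from tracks.

    tracks: array of shape (frames, animals, 2) with (x, y) positions
    in mm, y = 0 at the top of the chamber; fps: frames per second.
    """
    # Speed: mean frame-to-frame displacement per animal, in mm/s.
    disp = np.linalg.norm(np.diff(tracks, axis=0), axis=-1)
    speed = disp.mean() * fps

    # Preferred area: fraction of observations in the bottom half.
    bottom_frac = (tracks[..., 1] > chamber_height / 2).mean()

    # Aggregation: mean pairwise inter-animal distance (smaller
    # values mean the animals are more tightly grouped).
    diffs = tracks[:, :, None, :] - tracks[:, None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(tracks.shape[1], k=1)
    mean_pairwise = dists[:, iu[0], iu[1]].mean()

    return speed, bottom_frac, mean_pairwise
```

Under this sketch, the reported drug effects would show up as a lower speed, a higher bottom fraction, and a smaller mean pairwise distance at low light intensities.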
This paper describes a method to localize faces in color images based on the fusion of information gathered from a stereo vision system with the analysis of color images. Our method generates a depth map of the scene and tries to fit a head model, taking into account the shape of the model and skin color information. The method is tailored for use in factory automation applications where the detection and localization of humans is necessary for the completion or interruption of a particular task, such as ensuring robot manipulator safety or enabling the interaction of service robots with humans.
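The skin-color cue that such fusion pipelines combine with depth is commonly implemented as a per-pixel color rule. As a minimal sketch (a classic rule-of-thumb RGB skin test, assumed here for illustration rather than taken from the paper), a pixel is labeled "skin" when it is bright, reddish, and has sufficient red-green separation:

```python
import numpy as np

def skin_mask(img):
    """img: uint8 array of shape (H, W, 3) in RGB order -> boolean mask.

    Labels a pixel as skin when it is bright enough in all channels,
    red-dominant, and has a clear red-green gap.
    """
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (r > g) & (r > b) & (np.abs(r - g) > 15))
```

In a fusion scheme like the one described, such a mask would weight candidate head regions proposed by the depth map, rather than detect faces on its own.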