Abstract. This paper presents a novel approach for the automatic registration and segmentation of multiple musculoskeletal organs from clinical MRI datasets, based on discrete deformable models (simplex meshes). We reduce the computational complexity using multi-resolution forces, multi-resolution hierarchical collision handling, and large simulation time steps (implicit integration scheme), allowing real-time user control and cost-efficient segmentation. Radial forces and topological constraints (attachments) are applied to regularize the segmentation process. Based on a medial-axis-constrained approximation, we efficiently characterize shapes and deformations. We validate our methods on the hip joint and the thigh (20 muscles, 4 bones) across 4 datasets: average error = 1.5 mm, computation time = 15 min.
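The abstract does not spell out the numerical scheme; as a rough illustration only, the sketch below shows a linearized backward-Euler step for a deformable surface mesh with a Laplacian smoothing force and an explicit radial (balloon-like) force, one common way implicit integration tolerates large time steps. All function names, force terms, and parameters are assumptions for illustration, not the authors' implementation.

import numpy as np

def graph_laplacian(n_vertices, edges):
    """Combinatorial Laplacian L = D - A of the mesh connectivity (dense, for clarity)."""
    L = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    return L

def radial_force(x, seed, strength):
    """Push vertices outward along the direction from a seed point (regularizing 'balloon' term)."""
    d = x - seed
    return strength * d / (np.linalg.norm(d, axis=1, keepdims=True) + 1e-12)

def implicit_step(x, v, L, mass, dt, k_smooth, f_ext):
    """One linearized backward-Euler step.

    With F(x) = f_ext - k_smooth * L @ x and backward Euler,
    (M + dt^2 * k_smooth * L) v_new = M v + dt * F(x),  x_new = x + dt * v_new.
    """
    n = x.shape[0]
    A = mass * np.eye(n) + (dt ** 2) * k_smooth * L
    f = f_ext - k_smooth * (L @ x)          # total force at the current positions
    rhs = mass * v + dt * f
    v_new = np.linalg.solve(A, rhs)         # same matrix serves the x, y, z columns
    x_new = x + dt * v_new
    return x_new, v_new

# Example use (hypothetical numbers):
#   f_ext = radial_force(x, seed, 0.3)
#   x, v = implicit_step(x, v, L, mass=1.0, dt=0.1, k_smooth=5.0, f_ext=f_ext)

With an explicit update, the same smoothing stiffness would force a much smaller time step to stay stable, which is the practical benefit the abstract attributes to the implicit scheme.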
Exposure to solar ultraviolet (UV) light is the main causative factor for skin cancer. UV exposure depends on environmental and individual factors. Individual exposure data remain scarce, and alternative assessment methods are greatly needed. We developed a model simulating human exposure to solar UV. The model predicts the dose and distribution of UV exposure received, on the basis of ground irradiation and morphological data. Standard 3D computer graphics techniques were adapted to develop a rendering engine that estimates the solar exposure of a virtual manikin depicted as a triangle mesh surface. The amount of solar energy received by each triangle was calculated, taking into account direct, diffuse, and reflected radiation as well as shading by other body parts. Dosimetric measurements (n = 54) were conducted in field conditions using a foam manikin as a surrogate for an exposed individual. Dosimetric results were compared with the model predictions. The model predicted exposure to solar UV adequately: the symmetric mean absolute percentage error was 13%, and half of the predictions were within 17% of the measurements. This model provides a tool to assess outdoor occupational and recreational UV exposures without time-consuming individual dosimetry, with numerous potential uses in skin cancer prevention and research.
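As a rough illustration of the per-triangle bookkeeping described above, the sketch below sums direct, diffuse, and ground-reflected components with cosine and view-factor weights; the self-shading test is reduced to a precomputed boolean flag per triangle. The weighting model (isotropic sky, Lambertian ground reflection) and all names are simplifying assumptions, not the paper's rendering engine.

import numpy as np

def triangle_normal_and_area(p0, p1, p2):
    """Unit normal and area of a triangle given its three vertices."""
    n = np.cross(p1 - p0, p2 - p0)
    area = 0.5 * np.linalg.norm(n)
    return n / (2.0 * area + 1e-12), area

def uv_dose_per_triangle(vertices, triangles, sun_dir, shadowed,
                         e_direct, e_diffuse, ground_albedo):
    """Return the UV energy (irradiance x area) received by each triangle."""
    up = np.array([0.0, 0.0, 1.0])
    doses = np.zeros(len(triangles))
    for k, (i, j, m) in enumerate(triangles):
        n, area = triangle_normal_and_area(vertices[i], vertices[j], vertices[m])
        # Direct beam: cosine-weighted, zeroed if the triangle is shaded by the body.
        cos_sun = max(0.0, float(n @ sun_dir))
        direct = 0.0 if shadowed[k] else e_direct * cos_sun
        # Diffuse sky: isotropic, scaled by the fraction of sky the triangle faces.
        sky_view = 0.5 * (1.0 + float(n @ up))
        diffuse = e_diffuse * sky_view
        # Ground-reflected: Lambertian, scaled by the fraction of ground seen,
        # with (e_direct + e_diffuse) standing in for global horizontal irradiance.
        ground_view = 0.5 * (1.0 - float(n @ up))
        reflected = ground_albedo * (e_direct + e_diffuse) * ground_view
        doses[k] = (direct + diffuse + reflected) * area
    return doses

Summing the returned values over the triangles of a body region gives the regional dose that the dosimeters measure in the field comparison.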
Recent innovations in interactive digital television [1] and multimedia products have enhanced viewers' ability to interact with programs and therefore to individualize their viewing experience. Designers of such applications need systems that can immerse real-time simulated humans in games, multimedia titles, and film animations. The ability to place the viewer in a dramatic situation created by the behavior of other, simulated digital actors will add a new dimension to existing simulation-based products for education and entertainment on interactive TV. In the games market, convincing simulated humans rejuvenate existing games and enable the production of new kinds of games. Finally, in virtual reality (VR), representing participants by a virtual actor (a self-representation in the virtual world) is an important factor for a sense of presence. This becomes even more important in multiuser environments, where effective interaction among participants contributes to the sense of presence. Even with limited sensor information, you can construct a virtual human frame in the virtual world that reflects the real body's activities. Slater and Usoh [2] indicated that such a body, even if crude, heightens the sense of presence.
We have been working on simulating virtual humans for several years. Until recently, these constructs could not act in real time. Today, however, many applications need to simulate in real time virtual humans that look realistic. We have invested considerable effort in developing and integrating several modules into a system capable of animating humans in real-time situations. This includes interactive modules for building realistic individuals and a texture-fitting method suitable for all parts of the head and body. Animating the body, including the hands and their deformations, is the key aspect of our system; to our knowledge, no competing system integrates all these functions. We also included facial animation, as demonstrated below with virtual tennis players. Of course, real-time simulation has a price, demanding compromises. Table 1 compares the methods used for the two types of actors, frame-by-frame and real-time.
Real-time virtual-human simulation environments must achieve a close relationship between modeling and animation. In other words, virtual human modeling must include the structure needed for virtual human animation. We can separate the complete process broadly into three units: modeling, deformation, and motion control (sketched schematically after this passage). We have developed a single system containing all the modules needed for simulating real-time virtual humans in distant virtual environments (VEs). Our system lets us rapidly clone any individual and animate the clone in various contexts. People cannot mistake our virtual humans for real ones, but we think them recognizable and realistic, as shown in the two case studies described later. We must also distinguish our approach from others: we simulate existing people. Compare this to Perlin's scripted virtual actors [3] or to virtual characters in games...
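The article names three units (modeling, deformation, motion control) without detailing their interfaces; the schematic sketch below shows one way such units could be chained in a per-frame loop. All class and method names are hypothetical placeholders, not the authors' system.

from dataclasses import dataclass, field

@dataclass
class VirtualHuman:                      # "modeling" unit: the structure built up front
    joints: list
    pose: dict = field(default_factory=dict)     # joint name -> angle (radians)
    skin: list = field(default_factory=list)     # deformed surface data

class MotionController:                  # "motion control" unit
    def update(self, human, sensor_angles):
        # Map sensor data (or scripted/keyframed motion) to joint angles for this frame.
        human.pose = {j: sensor_angles.get(j, 0.0) for j in human.joints}

class SkinDeformer:                      # "deformation" unit
    def update(self, human):
        # Re-skin the surface from the current pose (placeholder computation).
        human.skin = [(j, angle) for j, angle in human.pose.items()]

def frame_loop(human, controller, deformer, sensor_stream):
    for sensor_angles in sensor_stream:  # one entry per frame
        controller.update(human, sensor_angles)
        deformer.update(human)
        # rendering of human.skin would happen here

The point of the split is the one the article makes: the model built in the modeling unit must already carry the skeleton and skin structure that the deformation and motion-control units consume at runtime.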