This paper introduces a framework for real-time simulation and rendering of crowds navigating a virtual environment. The solution first consists of a dedicated environment preprocessing technique that produces navigation graphs, which are then used by the navigation and simulation tasks. Second, navigation planning interactively provides multiple solutions to user queries, allowing a crowd to be spread by individualizing trajectories. A scalable simulation model enables the management of large crowds while saving computation time for rendering tasks. Pedestrian graphical models are divided into three rendering fidelities, ranging from billboards to dynamic meshes, allowing close-up views of detailed digital actors with a large variety of locomotion animations. Examples illustrate our method in several environments with crowds of up to 35,000 pedestrians at real-time performance.
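As a minimal sketch of the idea of switching between rendering fidelities, the snippet below picks one of three levels per pedestrian from its distance to the camera. The distance thresholds, the intermediate "rigid mesh" level, and all names are illustrative assumptions; the abstract only states that fidelities range from billboards to dynamic meshes and does not specify the selection criterion.

```python
# Sketch: distance-based selection of a crowd member's rendering fidelity.
# Thresholds and the intermediate level are assumptions for illustration only.
from dataclasses import dataclass
from enum import Enum
import math

class Fidelity(Enum):
    DYNAMIC_MESH = 0   # fully animated deformable mesh, for close-up views
    RIGID_MESH = 1     # assumed intermediate level of detail
    BILLBOARD = 2      # camera-facing textured quad, for distant pedestrians

@dataclass
class Pedestrian:
    x: float
    y: float
    z: float

def select_fidelity(ped: Pedestrian, camera_pos, near=15.0, far=60.0) -> Fidelity:
    """Choose a rendering fidelity from the camera distance (thresholds are illustrative)."""
    d = math.dist((ped.x, ped.y, ped.z), camera_pos)
    if d < near:
        return Fidelity.DYNAMIC_MESH
    if d < far:
        return Fidelity.RIGID_MESH
    return Fidelity.BILLBOARD
```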
Background: Producing a rich, personalized Web-based consultation tool for plastic surgeons and patients is challenging.
Objective: (1) To develop a computer tool that allows individual reconstruction and simulation of 3-dimensional (3D) soft tissue from ordinary digital photos of breasts, (2) to implement a Web-based, worldwide-accessible preoperative surgical planning platform for plastic surgeons, and (3) to validate this tool through a quality control analysis by comparing 3D laser scans of the patients with the 3D reconstructions produced by this tool from original 2-dimensional (2D) pictures of the same patients.
Methods: The proposed system uses well-established 2D digital photos for reconstruction into a 3D torso, which is then available to the user for interactive planning. The simulation is performed on dedicated servers accessible via the Internet. It allows the surgeon, together with the patient, to previsualize the impact of the proposed breast augmentation directly during the consultation, before a surgery is decided upon. We retrospectively conducted a quality control assessment of available anonymized pre- and postoperative 2D digital photographs of patients undergoing breast augmentation procedures. The method presented above was used to reconstruct 3D pictures from 2D digital pictures. We used a laser scanner capable of generating a highly accurate surface model of the patient's anatomy to acquire ground truth data. The quality of the computed 3D reconstructions was compared with the ground truth data to perform both qualitative and quantitative evaluations.
Results: We evaluated the system on 11 clinical cases for surface reconstructions and 4 clinical cases of postoperative simulations, using laser surface scans, showing a mean reconstruction error between 2 and 4 mm and a maximum outlier error of 16 mm. Qualitative and quantitative analyses from plastic surgeons demonstrate the potential of these new emerging technologies.
Conclusions: We tested our tool for 3D, Web-based, patient-specific consultation in the clinical scenario of breast augmentation. This example shows that the current state of development allows for the creation of responsive and effective Web-based, 3D medical tools, even with highly complex and time-consuming computation, by off-loading it to a dedicated high-performance data center. The efficient combination of advanced technologies, based on analysis and understanding of human anatomy and physiology, will allow the development of further Web-based reconstruction and predictive interfaces at different scales of the human body. The consultation tool presented herein exemplifies the potential of combining advancements in the core areas of computer science and biomedical engineering with the evolving areas of Web technologies. We are confident that future developments based on a multidisciplinary approach will further pave the way toward personalized Web-enabled medicine.
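One common way to quantify surface reconstruction error against a laser-scan ground truth is the nearest-point distance from each reconstructed vertex to the scanned surface; the sketch below shows this, assuming both surfaces are available as point arrays in millimetres. The abstract does not specify the exact metric used, so this is an illustrative assumption rather than the study's actual evaluation code.

```python
# Sketch: mean and maximum nearest-point distance between a reconstructed
# surface and a laser-scan ground truth (units follow the input, e.g. mm).
import numpy as np
from scipy.spatial import cKDTree

def reconstruction_error(reconstructed_pts: np.ndarray, ground_truth_pts: np.ndarray):
    """Return (mean, max) distance from each reconstructed point to its closest scan point."""
    tree = cKDTree(ground_truth_pts)          # spatial index over the scanned surface
    dists, _ = tree.query(reconstructed_pts)  # closest scan point per reconstructed vertex
    return float(dists.mean()), float(dists.max())

# Usage with hypothetical vertex arrays of the two surfaces:
# mean_err, max_err = reconstruction_error(np.load("recon.npy"), np.load("scan.npy"))
```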
Recent advances in computer graphics techniques and the increasing power of graphics hardware have made it possible to display and animate large crowds in real time. Most research efforts have been directed towards improving rendering or behavior control; the question of how to author crowd scenes efficiently is usually not addressed. We introduce a novel approach to creating complex scenes involving thousands of animated individuals in a simple and intuitive way. By employing a brush metaphor, analogous to the tools used in image manipulation programs, we can distribute, modify, and control crowd members in real time with immediate visual feedback. We define the concepts of operators and instance properties, which allow variety in populations of virtual humans to be created and managed. An efficient technique for rendering up to several thousand fully three-dimensional polygonal characters with keyframed animations at interactive frame rates is presented. The potential of our approach is demonstrated by authoring a scenario of a virtual audience in a theater and a scenario of a pedestrian crowd in a city.
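The sketch below illustrates the brush idea in its simplest form: an operator is applied to every crowd instance inside a circular brush footprint, modifying its properties. The specific property names and operators are assumptions for illustration; the abstract names the concepts of operators and instance properties but does not define them concretely.

```python
# Sketch: apply an operator to all crowd instances inside a circular brush.
# Property names ("color") and the example operator are illustrative assumptions.
from dataclasses import dataclass, field
import random

@dataclass
class Instance:
    x: float
    y: float
    properties: dict = field(default_factory=dict)

def brush_apply(instances, cx, cy, radius, operator):
    """Run `operator` on each instance whose position lies inside the brush footprint."""
    r2 = radius * radius
    for inst in instances:
        if (inst.x - cx) ** 2 + (inst.y - cy) ** 2 <= r2:
            operator(inst)

# Example operator: randomize an appearance property to introduce variety.
def randomize_color(inst: Instance):
    inst.properties["color"] = random.choice(["red", "green", "blue", "yellow"])

crowd = [Instance(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(1000)]
brush_apply(crowd, cx=50.0, cy=50.0, radius=10.0, operator=randomize_color)
```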