The authors present a method to interconnect the Visualisation Toolkit (VTK) and Unity. This integration enables them to combine the visualisation capabilities of VTK with Unity's widespread support for virtual, augmented, and mixed reality displays and for interaction and manipulation devices, in order to develop medical image applications for virtual environments. The proposed method uses OpenGL context sharing between Unity and VTK to render VTK objects into the Unity scene via a Unity native plugin. It is demonstrated in a simple Unity application that performs VTK volume rendering to display thoracic computed tomography and cardiac magnetic resonance images. Quantitative measurements of the achieved frame rates show that this approach delivers over 90 fps on standard hardware, which is suitable for current augmented reality/virtual reality display devices.
Objective: Advances in artificial intelligence (AI) have demonstrated potential to improve medical diagnosis. We piloted the end-to-end automation of the midtrimester screening ultrasound scan using AI-enabled tools. Methods: A prospective method comparison study was conducted. Participants had both standard and AI-assisted ultrasound scans performed. The AI tools automated image acquisition, biometric measurement, and report production. A feedback survey captured the sonographers' perceptions of scanning. Results: Twenty-three subjects were studied. The average time saving per scan was 7.62 min (34.7%) with the AI-assisted method (p < 0.0001). There was no difference in reporting time. There were no clinically significant differences in biometric measurements between the two methods. The AI tools saved a satisfactory view in 93% of cases for the four core views, and in 73% for the full 13 views, compared with 98% for both using the manual scan. Survey responses suggest that the AI tools helped sonographers concentrate on image interpretation by removing disruptive tasks. Conclusion: Separating freehand scanning from image capture and measurement resulted in a faster scan and an altered workflow. Removing repetitive tasks may allow more attention to be directed toward identifying fetal malformation. Further work is required to improve the image plane detection algorithm for use in real time.
Objectives: To investigate how virtual reality (VR) imaging impacts decision-making in atrioventricular valve surgery. Methods: This was a single-center retrospective study involving 15 children and adolescents, median age 6 years (range, 0.33-16), requiring surgical repair of the atrioventricular valves between the years 2016 and 2019. The patients' preoperative 3-dimensional (3D) echocardiographic data were used to create 3D visualizations in a VR application. Five pediatric cardiothoracic surgeons completed a questionnaire formulated to compare their surgical decisions regarding the cases after reviewing conventionally presented 2-dimensional and 3D echocardiographic images, and again after visualization of 3D echocardiograms using the VR platform. Finally, intraoperative findings were shared with surgeons to confirm assessment of the pathology. Results: In 67% of cases presented with VR, surgeons reported having "more" or "much more" confidence in their understanding of each patient's pathology and their surgical approach. In all but one case, surgeons were at least as confident after reviewing the VR as after reviewing standard imaging. The case in which surgeons reported being least confident with VR had the worst technical quality of the underlying data. After viewing patient cases in VR, surgeons reported that they would have made minor modifications to the surgical approach in 53% of cases and major modifications in 7%. Conclusions: The main impact of viewing imaging in VR is the improved clarity of the anatomical structures. Surgeons reported that this would have impacted the surgical approach in the majority of cases. Poor-quality 3D echocardiographic data were associated with a negative impact of VR visualization; thus, quality assessment of imaging is necessary before projecting in a VR format. (JTCVS Techniques 2021;7:269-77)
CENTRAL MESSAGE: Virtual reality dynamic 3-dimensional echocardiographic imaging improves surgical insight for atrioventricular valve repair planning in congenital heart disease. PERSPECTIVE: This study demonstrates the potential clinical benefits and value of virtual reality in surgical planning for congenital heart disease and other structural heart defects. The observed benefits are improved user interaction and visualization of the valve apparatus in a beating heart compared with image visualization using standard techniques. See Commentary on page 278.
We present a novel divergence-free mixture model for multiphase flows and the related fluid-solid coupling. The new mixture model is built upon a volume-weighted mixture velocity so that the divergence-free condition is satisfied for both miscible and immiscible multiphase fluids. The proposed mixture velocity can be solved efficiently by adapted single-phase incompressible solvers, allowing for larger time steps and smaller volume deviations. In addition, the drift velocity formulation is corrected to ensure mass conservation during the simulation. The new approach increases the accuracy of multiphase fluid simulation by several orders of magnitude. The capability of the new divergence-free mixture model is demonstrated by simulating different multiphase flow phenomena, including the mixing and unmixing of multiple fluids and fluid-solid coupling involving deformable solids and granular materials.
The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games, followed by a brief discussion of why the cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review innovative applications that have combined eye tracking and haptic devices to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding of multimodal serious game production and to explore possible areas for new applications.