Abstract. While many visualization tools offer sophisticated functions for charting complex data, they still expect users to possess a high degree of expertise in wielding the tools to create an effective visualization. This paper presents Articulate, a semi-automated visual analytic model guided by a conversational user interface that allows users to verbally describe, and then manipulate, what they want to see. We use natural language processing and machine learning methods to translate these imprecise sentences into explicit expressions, and then apply a heuristic graph generation algorithm to create a suitable visualization. The goal is to relieve users of the burden of learning a complex user interface in order to craft a visualization.
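To make the described pipeline concrete, here is a minimal sketch (hypothetical, not the authors' code; the function names and the keyword heuristics are illustrative assumptions) of how an imprecise verbal request might be translated into an explicit chart specification and then matched to a chart type heuristically:

```python
# A minimal sketch (hypothetical, not the Articulate implementation):
# translating an imprecise verbal request into an explicit chart
# specification, then choosing a chart type heuristically.

STOPWORDS = {"show", "me", "the", "of", "a", "by",
             "compare", "versus", "vs", "over", "time", "trend"}

def parse_request(sentence: str) -> dict:
    """Translate a verbal request into an explicit query expression.
    The real system uses NLP and machine learning; keyword matching
    here is a stand-in for illustration only."""
    tokens = sentence.lower().split()
    if any(w in tokens for w in ("compare", "versus", "vs")):
        task = "comparison"
    elif any(w in tokens for w in ("over", "trend", "time")):
        task = "trend"
    else:
        task = "distribution"
    fields = [w for w in tokens if w.isalpha() and w not in STOPWORDS]
    return {"task": task, "fields": fields}

def choose_chart(spec: dict) -> str:
    """Heuristic graph generation: map the query's task to a chart type."""
    return {"comparison": "bar", "trend": "line",
            "distribution": "histogram"}[spec["task"]]

if __name__ == "__main__":
    spec = parse_request("Compare sales versus region")
    print(spec, "->", choose_chart(spec))
    # {'task': 'comparison', 'fields': ['sales', 'region']} -> bar
```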
Objective: To test the hypothesis that the HANDS "big picture summary" can be implemented uniformly across diverse settings and produce positive RN and plan of care (POC) data outcomes over time.
Design: In a longitudinal, multi-site, full test design, a representative convenience sample of 8 medical-surgical units from 4 hospitals (1 university, 2 large community, and 1 small community) in one Midwestern state implemented the HANDS intervention for 24 months (4 units) or 12 months (4 units).
Measurements: 1) RN outcomes: percentage completing training, satisfaction with standardized terminologies, perception of HANDS usefulness, and POC submission compliance rate. 2) POC data outcomes: validity (rate of optional changes per episode), reliability of terms and ratings, and volume of standardized data generated.
Results: 100% of the RNs who worked on the 8 study units successfully completed the required standardized training; all participating units remained in the study for the full 12- or 24-month period; compliance rates for POC entry at every patient handoff ranged from 78% to 92%; reliability coefficients for use of the standardized terms and ratings were moderately strong; the rate of optional POC changes per episode declined but remained reasonable over time; and the nurses generated a database of 40,747 episodes of care.
Limitations: Only RNs and medical-surgical units participated.
Conclusion: It is possible to effectively standardize the capture and visualization of useful "big picture" healthcare information across diverse settings. These findings offer a viable alternative to the current practice of introducing new health information layers that ultimately increase the complexity and inconsistency of information for front-line users.
Background: Although the literature suggests that the palatal rugae could be used for human identification, most studies have used a two-dimensional (2D) approach.
Aim: The aims of this study were to evaluate palatal ruga patterns using three-dimensional (3D) digital models; compare the most clinically relevant digital model conversion techniques for identification of the palatal rugae; develop a protocol for overlay registration; determine changes in individual palatal ruga patterns over time; and investigate the efficiency and accuracy of 3D matching between different individuals' patterns.
Material and Methods: Five cross sections in the anteroposterior dimension and four in the transverse dimension were computed, generating 18 2D variables. In addition, 13 3D variables were defined: the posterior point of the incisive papilla (IP) and the most medial and lateral end points of the palatal rugae (R1MR, R1ML, R1LR, R1LL, R2MR, R2ML, R2LR, R2LL, R3MR, R3ML, R3LR, and R3LL). The deviation magnitude for each variable was statistically analyzed. Five data sets with the same 31 landmarks were evaluated.
Results: 2D images and linear measurements in the anteroposterior and transverse dimensions were not sufficient for comparing different digital model conversion techniques using the palatal rugae, whereas 3D digital models proved highly effective in evaluating palatal ruga patterns. The 3D landmarks showed no statistically significant mean differences over time or as a result of orthodontic treatment. No statistically significant mean differences were found between different digital model conversion techniques, that is, between OrthoCAD™ and Ortho Insight 3D™, and between Ortho Insight 3D™ and the iTero® scans, when 12 3D palatal rugae landmarks were used for comparison.
Conclusion: Twelve palatal 3D landmarks could be used for human identification; certain landmarks were especially important in the matching process and were ranked by strength and importance. Proposed values for the 3D palatal landmarks were introduced that could be useful in biometrics and forensic odontology for the verification of human identity.
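For illustration, a minimal sketch (hypothetical, not the study's analysis code; the coordinates and noise model are synthetic assumptions) of how per-landmark deviation magnitudes between two registered 3D digital models might be computed:

```python
# A minimal sketch (hypothetical, not the study's analysis code):
# computing the deviation magnitude for each 3D palatal rugae landmark
# between two digital models of the same subject, assuming the models
# are already registered in a common coordinate frame.
import numpy as np

LANDMARKS = ["IP", "R1MR", "R1ML", "R1LR", "R1LL", "R2MR", "R2ML",
             "R2LR", "R2LL", "R3MR", "R3ML", "R3LR", "R3LL"]

def deviation_magnitudes(model_a: np.ndarray, model_b: np.ndarray) -> dict:
    """Euclidean distance between corresponding landmarks (N x 3 arrays, mm)."""
    return dict(zip(LANDMARKS, np.linalg.norm(model_a - model_b, axis=1)))

# Synthetic coordinates for demonstration (values are illustrative only):
rng = np.random.default_rng(0)
scan1 = rng.uniform(0.0, 30.0, size=(13, 3))         # first scan, mm
scan2 = scan1 + rng.normal(0.0, 0.1, size=(13, 3))   # small scan-to-scan noise
for name, d in deviation_magnitudes(scan1, scan2).items():
    print(f"{name}: {d:.3f} mm")
```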
The CAVE, a walk-in virtual reality environment typically consisting of four to six rear-projected screens, each 3 m by 3 m, forming the sides of a room, was first conceived and built in 1991. In the nearly two decades since its conception, the supporting technology has improved so that current CAVEs are much brighter, run at much higher resolution, and have dramatically improved graphics performance. However, rear-projection-based CAVEs typically must be housed in a 10 m-by-10 m-by-10 m room (allowing space behind the screen walls for the projectors), which limits their deployment to large spaces. The CAVE of the future will be made of tessellated panel displays, eliminating the projection distance, but implementing such displays is challenging. Early multi-tile, panel-based virtual-reality displays have been designed, prototyped, and built for the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia by researchers at the University of California, San Diego, and the University of Illinois at Chicago. New means of image generation and control are considered key contributions to the future viability of the CAVE as a virtual-reality device.