OBJECTIVE The aim of this study was to assess the precision and feasibility of 3D-printed marker–based augmented reality (AR) neurosurgical navigation and its intraoperative use compared with optical tracking neuronavigation systems (OTNSs).

METHODS Three-dimensional–printed markers for CT, MRI, and intraoperative use were applied with mobile devices using an AR light detection and ranging (LIDAR) camera. Three-dimensional segmentations of intracranial tumors were created from CT and MR images, and preoperative registration of the marker and the pathology was performed. A patient-specific, surgeon-facilitated mobile application was developed, and the mobile device camera was used for neuronavigation with high accuracy, ease, and cost-effectiveness. After accuracy values were preliminarily assessed, the technique was used intraoperatively in 8 patients.

RESULTS The mobile device LIDAR camera successfully overlaid virtual tumor segmentations according to the position of the 3D-printed marker. The measured targeting error ranged from 0.5 to 3.5 mm (mean 1.70 ± 1.02 mm, median 1.58 mm). The mean preoperative preparation time was 35.7 ± 5.56 minutes, longer than for routine OTNSs, but the time required for preoperative registration and placement of the intraoperative marker was very brief compared with other neurosurgical navigation systems (mean 1.02 ± 0.3 minutes).

CONCLUSIONS The 3D-printed marker–based AR neuronavigation system was a clinically feasible, highly precise, low-cost, and easy-to-use navigation technique. Three-dimensional segmentations of intracranial tumors were overlaid on the brain and clearly visualized from the skin incision to the end of surgery.
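The abstract does not include the registration arithmetic, but the core of marker-based AR overlay is a chain of rigid transforms: the tracker reports the marker's pose in the camera/world frame, and the preoperative registration fixes the tumor segmentation's pose relative to the marker. The minimal Python/numpy sketch below illustrates that chain and the targeting-error summary reported above; all names (make_transform, T_world_marker, T_marker_tumor) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def overlay_points(T_world_marker, T_marker_tumor, tumor_points):
    """Map N x 3 tumor-segmentation points (defined relative to the marker during
    preoperative registration) into the world frame reported by the LIDAR tracker."""
    homogeneous = np.hstack([tumor_points, np.ones((len(tumor_points), 1))])
    world = (T_world_marker @ T_marker_tumor @ homogeneous.T).T
    return world[:, :3]

def targeting_error_mm(projected, measured):
    """Per-target Euclidean distance (mm) between overlaid virtual targets and
    physically measured target positions, summarized as in the abstract."""
    d = np.linalg.norm(projected - measured, axis=1)
    return {"mean": d.mean(), "sd": d.std(ddof=1), "median": np.median(d)}

# Sanity check: an identity marker pose leaves the segmentation unchanged.
pts = np.array([[10.0, 20.0, 30.0]])
assert np.allclose(overlay_points(np.eye(4), np.eye(4), pts), pts)
```

In a live system, T_world_marker would be re-estimated every frame as the camera moves, so the overlay follows the marker, while the registration transform T_marker_tumor is computed once preoperatively and held fixed.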
This study assessed the effect of an easily perceived real-time visual feedback method on touchscreen typing accuracy. Thirty subjects were asked to hold a smartphone with a capacitive touchscreen in one hand and enter text with the thumb of the same hand via a custom-designed virtual keyboard. There were two types of text entry session: with and without visual feedback. The visual feedback consisted of a full-screen crosshair representing the exact touch coordinate in real time. In each session, the touch-down time on the virtual keyboard and the touch coordinates were recorded for every touch action. Two types of typing error were defined: 1) centering error (CE), calculated as the distance in millimeters between the touch coordinate and the center of the intended key, and 2) incorrect entry (IE), the number of missed keys. Student's t-tests and Wilcoxon tests were used for mean and mean-rank comparisons of CE and IE, respectively. The results showed that visual feedback significantly decreased CE (mean ± SD) from 1.34 ± 0.38 mm to 0.85 ± 0.24 mm (P < 0.0005) and significantly decreased IE, with the median (range) number of incorrect entries falling from 5.50 (32.00) to 1.00 (7.00) (P < 0.005). In conclusion, accurate, easily perceived, two-dimensional real-time feedback markedly decreases touch-typing error rates and can therefore be of practical importance for increasing the productivity of smartphone users.
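Both error metrics and the reported tests take only a few lines of code. The sketch below, assuming a paired within-subject design as implied by the two-session setup, shows how CE could be computed from logged touch coordinates and how the comparisons could be run with scipy; the per-subject arrays are placeholder data matching the abstract's summary statistics, not the study's measurements.

```python
import numpy as np
from scipy import stats

def centering_error_mm(touch_xy, key_center_xy):
    """CE: per-touch Euclidean distance (mm) between each N x 2 touch
    coordinate and the center of its intended key."""
    return np.linalg.norm(touch_xy - key_center_xy, axis=1)

# Placeholder per-subject summaries (30 subjects) matching the reported statistics.
rng = np.random.default_rng(0)
ce_without_fb = rng.normal(1.34, 0.38, 30)   # mean CE per subject, no feedback
ce_with_fb = rng.normal(0.85, 0.24, 30)      # mean CE per subject, with feedback
ie_without_fb = rng.poisson(5.5, 30)         # IE count per subject, no feedback
ie_with_fb = rng.poisson(1.0, 30)            # IE count per subject, with feedback

# Paired comparisons, assuming each subject completed both session types.
t_stat, p_ce = stats.ttest_rel(ce_without_fb, ce_with_fb)   # Student's t-test on CE
w_stat, p_ie = stats.wilcoxon(ie_without_fb, ie_with_fb)    # Wilcoxon signed-rank on IE
print(f"CE: t = {t_stat:.2f}, p = {p_ce:.4g}; IE: W = {w_stat:.1f}, p = {p_ie:.4g}")
```

The Wilcoxon signed-rank test is the paired counterpart of the rank comparison named in the abstract; if the sessions were in fact between-subject, stats.ttest_ind and stats.mannwhitneyu would be the appropriate calls instead.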