Background: Diagnosis of benign paroxysmal positional vertigo (BPPV) depends on the accurate interpretation of nystagmus induced by positional tests. However, difficulties in interpreting eye movements often arise in primary care practice or the emergency room. We hypothesized that machine learning would be helpful for this interpretation. Methods: From our clinical data warehouse, we obtained 91,778 nystagmus videos from 3467 patients with dizziness, in which the three-dimensional movement of nystagmus was annotated by four otologic experts. From each labeled video, 30 features were converted into 255 grid images and fed into the input layer of the neural network as the training dataset. For model validation, a video dataset of 3566 horizontal, 2068 vertical, and 720 torsional movements from 1005 patients with BPPV was collected. Results: The model had a sensitivity and specificity of 0.910 ± 0.036 and 0.919 ± 0.032 for horizontal nystagmus; 0.879 ± 0.029 and 0.894 ± 0.025 for vertical nystagmus; and 0.783 ± 0.040 and 0.799 ± 0.038 for torsional nystagmus, respectively. The affected canal was predicted with a sensitivity of 0.806 ± 0.010 and a specificity of 0.971 ± 0.003. Conclusions: Because our deep-learning model had high sensitivity and specificity for the classification of nystagmus and localization of the affected canal in patients with BPPV, it may have wide clinical applicability.
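The abstract describes packing per-video eye-movement features into fixed-size grid images for a neural network's input layer. As a minimal hedged sketch only (the function name, grid size, and normalization scheme below are assumptions, not the authors' actual pipeline), one way to turn a variable-length feature sequence into a fixed-size grid is:

```python
import numpy as np

def features_to_grid(feature_seq, grid_rows=16):
    """Hypothetical sketch: pack a per-frame feature sequence
    (shape: n_frames x n_features, e.g. 30 eye-movement features)
    into a fixed-size 2D grid suitable for a CNN input layer.
    Not the paper's actual method."""
    feature_seq = np.asarray(feature_seq, dtype=float)
    # Normalize each feature column to [0, 1] across the video
    mins = feature_seq.min(axis=0)
    rng = feature_seq.max(axis=0) - mins + 1e-8
    norm = (feature_seq - mins) / rng
    # Resample the time axis to a fixed number of rows
    idx = np.linspace(0, len(norm) - 1, grid_rows).astype(int)
    return norm[idx]  # shape (grid_rows, n_features)

demo = np.random.rand(120, 30)   # 120 frames, 30 features
print(features_to_grid(demo).shape)  # (16, 30)
```

Fixing the grid shape this way lets videos of different lengths share one input layer, at the cost of discarding fine temporal detail.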
Objectives: Even though vestibular rehabilitation therapy (VRT) using a head-mounted display (HMD) has recently been highlighted as a popular virtual reality platform, the HMD itself does not provide an interactive environment for VRT. This study aimed to test the feasibility of interactive components using an eye-tracking-assisted strategy through neurophysiologic evidence. Methods: An HMD implemented with an infrared-based eye tracker was used to generate a virtual environment for VRT. Eighteen healthy subjects participated in our experiment, in which they performed a saccadic eye exercise (SEE) under two conditions: feedback-on (F-on, visualization of eye position) and feedback-off (F-off, non-visualization of eye position). Eye position was continuously monitored in real time under both conditions, but this information was not provided to the participants. Electroencephalogram recordings were used to estimate neural dynamics and attention during SEE, and only valid trials (correct responses) were included in the electroencephalogram analysis. Results: SEE accuracy was higher in the F-on than in the F-off condition (P=0.039). The power spectral density of the beta band was higher in the F-on condition over the frontal (P=0.047), central (P=0.042), and occipital areas (P=0.045). Beta event-related desynchronization was significantly more pronounced in the F-on condition (–0.19 on frontal and –0.22 on central clusters) than in the F-off condition (0.23 on frontal and 0.05 on central) during the preparatory phase (P=0.005 for frontal and P=0.024 for central). In addition, more abundant functional connectivity was revealed under the F-on condition. Conclusion: Considering that substantial gain may come from goal-directed attention and activation of brain networks while performing VRT, our preclinical SEE study suggests that eye-tracking algorithms may work efficiently in vestibular rehabilitation using an HMD.
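The beta event-related desynchronization (ERD) values above are relative power changes, where negative values indicate desynchronization. A minimal sketch, under assumed parameters (periodogram-based band power, a 13–30 Hz beta band, and a one-second baseline window; none of this is the study's actual analysis code), of how such a quantity can be computed from a single-channel EEG trial:

```python
import numpy as np

def band_power(signal, fs, lo=13.0, hi=30.0):
    """Mean periodogram power in the beta band (simplified sketch)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def erd(trial, fs, baseline_s=1.0):
    """ERD = (task power - baseline power) / baseline power.
    Negative values mean the band power dropped, i.e. desynchronization."""
    n_base = int(baseline_s * fs)
    p_base = band_power(trial[:n_base], fs)
    p_task = band_power(trial[n_base:], fs)
    return (p_task - p_base) / p_base
```

For example, a trial whose beta-band amplitude halves after the baseline window yields a negative ERD, matching the sign convention of the frontal and central values reported above.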
This result pattern suggests that a moderate amount of auditory training using a mobile device, with cost-effective and minimal supervision, is useful for improving the speech understanding of older adults with hearing loss. Geriatr Gerontol Int 2017; 17: 61-68.