Objective
Tracking seizures is crucial for epilepsy monitoring and treatment evaluation. Current epilepsy care relies on caretaker seizure diaries, but these may miss seizures. Wearable devices may be better tolerated and more suitable for long-term ambulatory monitoring. This study evaluates the seizure detection performance of custom-developed machine learning (ML) algorithms across a broad spectrum of epileptic seizures using wrist- and ankle-worn multisignal biosensors.
Methods
We enrolled patients admitted to the epilepsy monitoring unit and asked them to wear a multisignal biosensor on either a wrist or an ankle. The sensor recorded body temperature, electrodermal activity, accelerometry (ACC), and photoplethysmography, which provides blood volume pulse (BVP). Electroencephalographic seizure onset and offset, as determined by a board-certified epileptologist, served as the reference standard. We trained and validated two ML approaches: Algorithm 1, seizure type-specific detection models for nine individual seizure types; and Algorithm 2, a general seizure type-agnostic detector that lumps all seizure types together.
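The following is a minimal sketch, not the authors' implementation, illustrating the two training settings on windowed multimodal sensor data. The window length, summary-statistic features, and random forest model are assumptions for illustration only; the abstract does not specify these choices.

```python
# Sketch of Algorithm 1 (type-specific detectors) vs. Algorithm 2
# (type-agnostic detector). All data here are toy stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

RNG = np.random.default_rng(0)
FS = 32            # assumed sensor sampling rate (Hz)
WINDOW_S = 10      # assumed analysis window length (s)

def extract_features(window):
    """Per-channel summary statistics for one multimodal window.

    `window` has shape (n_samples, n_channels); channels would be
    e.g. temperature, electrodermal activity, 3-axis ACC, and BVP.
    """
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).mean(axis=0)])

# Toy windows, each labeled with a seizure type (or "none").
n_windows, n_channels = 200, 6
X = np.stack([extract_features(RNG.normal(size=(FS * WINDOW_S, n_channels)))
              for _ in range(n_windows)])
seizure_types = RNG.choice(["none", "tonic-clonic", "focal_motor"],
                           size=n_windows)  # hypothetical type labels

# Algorithm 1 analogue: one binary detector per seizure type
# (that type's windows vs. non-seizure windows).
type_specific = {}
for t in ["tonic-clonic", "focal_motor"]:
    mask = (seizure_types == t) | (seizure_types == "none")
    y = (seizure_types[mask] == t).astype(int)
    type_specific[t] = RandomForestClassifier(
        n_estimators=50, random_state=0).fit(X[mask], y)

# Algorithm 2 analogue: one detector, all seizure types lumped together.
y_any = (seizure_types != "none").astype(int)
type_agnostic = RandomForestClassifier(
    n_estimators=50, random_state=0).fit(X, y_any)
```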
Results
We included 94 patients (57.4% female, median age = 9.9 years) and 548 of 930 recorded epileptic seizures (11 066 h of sensor data), spanning nine seizure types. Algorithm 1 detected eight of nine seizure types better than chance (area under the receiver operating characteristic curve [AUC-ROC] = .648–.976). Algorithm 2 detected all nine seizure types better than chance (AUC-ROC = .642–.995); a fusion of ACC and BVP modalities achieved the best AUC-ROC (.752) when all seizure types were combined.
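The sketch below shows how AUC-ROC per detector and an ACC+BVP fusion could be evaluated. The fusion strategy shown (concatenating per-modality feature vectors before classification) and the logistic regression model are assumptions for illustration; the abstract does not specify how modalities were combined.

```python
# Sketch of the reported evaluation: AUC-ROC per modality and for a
# simple feature-level ACC+BVP fusion. All data here are toy stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
acc_feats = rng.normal(size=(n, 9))   # hypothetical accelerometry features
bvp_feats = rng.normal(size=(n, 5))   # hypothetical blood-volume-pulse features
y = rng.integers(0, 2, size=n)        # 1 = seizure window, 0 = non-seizure

def auc_for(features):
    """Train on a held-out split and return AUC-ROC on the test portion."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, y, test_size=0.3, random_state=0, stratify=y)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

print("ACC only      :", auc_for(acc_feats))
print("BVP only      :", auc_for(bvp_feats))
print("ACC+BVP fusion:", auc_for(np.hstack([acc_feats, bvp_feats])))
```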
Significance
Automatic seizure detection using ML on multimodal wearable sensor data is feasible across a broad spectrum of epileptic seizures. Preliminary results show better-than-chance seizure detection. Next steps include validating these results in larger datasets, evaluating the detection tool's utility for additional clinical seizure types, and integrating additional clinical information.