Epilepsy is a common neurological disorder that substantially compromises patients' safety and quality of life. Electroencephalography (EEG) has been the gold-standard technique for diagnosing this brain disorder and has played an essential role in epilepsy monitoring and disease management. It is extremely laborious and challenging, if not impractical, for physicians and human experts to annotate all recorded signals, particularly in long-term monitoring. The annotation process often involves identifying signal segments with suspected epileptic seizure features or other abnormalities and/or known healthy features. Automated epilepsy detection is therefore a key clinical need: it can greatly improve the efficiency of clinical practice and free up expert time for other important tasks. Current automated seizure detection algorithms generally face two challenges: (1) patient-specific models fail to generalize to other patients and to real-world conditions; (2) models trained on large EEG datasets have low sensitivity and/or high false-positive rates, often with an area under the receiver operating characteristic curve (AUROC) too low for clinical applicability. This paper proposes Transformers for Seizure Detection, referred to as TSD in this manuscript. A Transformer is a deep learning architecture based on an encoder-decoder structure and attention mechanisms, which we apply to recorded brain signals. Our proposed model achieves an AUROC of 92.1% on the publicly available Temple University Hospital (TUH) EEG seizure corpus. Additionally, we highlight the impact of the input domain on model performance: TSD identifies epileptic seizures best when the input is a time-frequency representation.
Finally, our proposed model, run in inference-only mode on EEG recordings, shows strong performance in classifying seizure types and provides superior model initialization.
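The time-frequency input domain highlighted above can be illustrated with a short-time Fourier transform (STFT) spectrogram, the kind of representation typically fed to such a model. The sketch below is a minimal NumPy illustration of that preprocessing step, not the authors' implementation; the function name, window length, and hop size are assumptions for demonstration.

```python
import numpy as np

def stft_spectrogram(signal, fs, win_len=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))      # shape: (frames, freq_bins)
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)    # bin centre frequencies (Hz)
    return spec, freqs

# Synthetic single-channel "EEG": a 10 Hz alpha-band tone plus noise, 256 Hz sampling.
fs = 256
t = np.arange(0, 8, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))

spec, freqs = stft_spectrogram(eeg, fs)
peak_freq = freqs[spec.mean(axis=0).argmax()]
print(spec.shape, peak_freq)   # spectral peak should sit near 10 Hz
```

The resulting 2-D time-frequency array (frames by frequency bins) is the natural input for an attention-based model, since each frame can be treated as one token in the sequence.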
Recent advances in Large Language Models (LLMs) have shown great potential in various domains, particularly in processing text-based data. However, their applicability to biomedical time-series signals (e.g., electrograms) remains largely unexplored due to the lack of a signal-to-text (sequence) engine that can harness the power of LLMs. The use of biosignals has been growing thanks to improvements in the reliability, noise performance, and accuracy of front-end sensing and back-end signal processing, even as the number of sensing components (e.g., electrodes) needed for effective, long-term use (e.g., in wearable or implantable devices) decreases. One of the most reliable practices in clinical settings is to produce a technical/clinical report on the quality and features of the collected data and to use it alongside auxiliary or complementary data (e.g., imaging, blood tests, medical records). This work addresses the missing piece in implementing conversational artificial intelligence (AI): a reliable, technically and clinically relevant signal-to-text (Sig2Txt) engine. While medical foundation models can be expected, large-scale reports from a Sig2Txt engine can be utilised in the years to come to develop foundation models for a unified purpose. In this work, we propose a system (SignalGPT, or BioSignal Copilot) that reduces medical signals to a freestyle or formatted technical report, close to a brief clinical report, capturing the key features and characterisation of the input signal. In its ideal form, this system provides the tool necessary to produce the technical input sequence LLMs require, a step toward using AI in the medical and clinical domains as an assistant to clinicians and patients. To the best of our knowledge, this is the first system for bioSig2Txt generation, and the idea can be applied in other domains to produce technical reports that harness the power of LLMs.
This method also improves the interpretability and tracking (history) of information flowing into and out of AI models, which we implemented through a buffer in our system. As a preliminary step, we verify the feasibility of BioSignal Copilot (SignalGPT) on a clinical ECG dataset to demonstrate the advantages of the proposed system. In this feasibility study, we used prompting and fine-tuning to prevent fluctuations in responses. The combination of biosignal processing and natural language processing offers a promising solution that improves the interpretability of AI outputs while leveraging the rapid growth of LLMs.
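The signal-to-text idea described above can be sketched as a small pipeline that extracts simple features from an ECG trace and renders them as a brief textual report suitable as LLM input. This is a toy illustration under stated assumptions, not the SignalGPT/BioSignal Copilot implementation: the naive thresholded R-peak detector, function names, and report wording are all hypothetical.

```python
import numpy as np

def detect_r_peaks(ecg, fs, threshold=0.6):
    """Naive R-peak detection: local maxima above a fraction of the signal maximum."""
    thr = threshold * ecg.max()
    return np.array([i for i in range(1, len(ecg) - 1)
                     if ecg[i] > thr and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]])

def sig2txt_report(ecg, fs):
    """Reduce an ECG trace to a brief technical report string (toy Sig2Txt step)."""
    peaks = detect_r_peaks(ecg, fs)
    rr = np.diff(peaks) / fs                      # R-R intervals in seconds
    hr = 60.0 / rr.mean()                         # mean heart rate in bpm
    return (f"ECG report: {len(peaks)} R-peaks over {len(ecg) / fs:.1f} s; "
            f"mean heart rate {hr:.0f} bpm; "
            f"R-R variability (SD) {rr.std() * 1000:.0f} ms.")

# Synthetic ECG: narrow Gaussian "R waves" every 0.8 s (75 bpm), 250 Hz sampling.
fs = 250
t = np.arange(0, 10, 1 / fs)
ecg = sum(np.exp(-0.5 * ((t - c) / 0.01) ** 2) for c in np.arange(0.4, 10, 0.8))

print(sig2txt_report(ecg, fs))
```

A report string like this is what would be handed to the LLM as its technical input sequence; in a real system the feature extraction would be clinically validated rather than a threshold heuristic.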