In recent years, deep learning has shown highly competitive performance in seizure detection. However, most current methods either convert electroencephalogram (EEG) signals into spectral images and apply 2D-CNNs, or split the one-dimensional (1D) EEG features into many segments and apply 1D-CNNs. These approaches are further limited by ignoring the temporal links between time-series segments or spectrogram images. We therefore propose a Dual-Modal Information Bottleneck (Dual-modal IB) network for EEG seizure detection. The network extracts EEG features in both the time-series and spectrogram dimensions and passes the information from each modality through the Dual-modal IB, which requires the model to gather and condense the most pertinent information in each modality and to share only what is necessary. Specifically, we make full use of the information shared between the two modality representations to obtain key information for seizure detection and to remove features that are irrelevant across the two modalities. In addition, to capture intrinsic temporal dependencies, we introduce a bidirectional long short-term memory (BiLSTM) into the Dual-modal IB model, which models the temporal relationships among the features extracted from each modality by a convolutional neural network (CNN). On the CHB-MIT dataset, the proposed framework achieves an average segment-based sensitivity of 97.42%, specificity of 99.32%, and accuracy of 98.29%, as well as an average event-based sensitivity of 96.02% with a false detection rate (FDR) of 0.70/h. We release our code at https://github.com/LLLL1021/Dual-modal-IB.
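The abstract does not give implementation details, but a minimal sketch of the overall idea might look like the following PyTorch-style model: one CNN branch per modality (raw time series and spectrogram), a BiLSTM over the per-segment features of each branch, and a variational bottleneck (mean/log-variance heads with reparameterization) on each modality before fusion. All layer sizes, names, and the variational formulation below are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative sketch only: the variational-IB formulation and all layer sizes
# are assumptions; they are not the authors' published architecture.
import torch
import torch.nn as nn

class ModalityBranch(nn.Module):
    """CNN encoder + BiLSTM + information-bottleneck heads for one modality."""
    def __init__(self, cnn: nn.Module, feat_dim: int, hidden: int = 64, z_dim: int = 32):
        super().__init__()
        self.cnn = cnn                                   # per-segment feature extractor (1D or 2D CNN)
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.mu = nn.Linear(2 * hidden, z_dim)           # bottleneck mean
        self.logvar = nn.Linear(2 * hidden, z_dim)       # bottleneck log-variance

    def forward(self, segments):                         # segments: (B, T, ...) per-segment inputs
        B, T = segments.shape[:2]
        feats = self.cnn(segments.flatten(0, 1)).view(B, T, -1)  # assumes cnn output flattens to feat_dim
        h, _ = self.bilstm(feats)                        # temporal context across segments
        mu, logvar = self.mu(h[:, -1]), self.logvar(h[:, -1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
        return z, kl                                     # kl acts as the compression penalty

class DualModalIB(nn.Module):
    """Fuse the two compressed modality codes and classify seizure vs. non-seizure."""
    def __init__(self, time_branch: ModalityBranch, spec_branch: ModalityBranch,
                 z_dim: int = 32, n_classes: int = 2):
        super().__init__()
        self.time_branch, self.spec_branch = time_branch, spec_branch
        self.classifier = nn.Linear(2 * z_dim, n_classes)

    def forward(self, x_time, x_spec):
        z_t, kl_t = self.time_branch(x_time)
        z_s, kl_s = self.spec_branch(x_spec)
        logits = self.classifier(torch.cat([z_t, z_s], dim=1))
        return logits, kl_t + kl_s
```

At training time the classification loss would be combined with the summed KL terms weighted by a coefficient β, which is the standard variational-IB trade-off; how the paper isolates the information shared between the two modalities is not specified in the abstract.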
Background: Recently, deep convolutional neural networks (CNNs) have been widely adopted for ultrasound sequence tracking and have been shown to perform satisfactorily. However, existing trackers ignore the rich temporal context that exists between consecutive frames, making it difficult for them to perceive the motion of the target.
Purpose: In this paper, we propose a method that fully utilizes temporal context for ultrasound sequence tracking with an information bottleneck. The method exploits the temporal context between consecutive frames for both feature extraction and similarity graph refinement, and the information bottleneck is integrated into the feature refinement process.
Methods: The proposed tracker combines three models. First, an online temporal adaptive convolutional neural network (TAdaCNN) is proposed to focus on feature extraction and enhance spatial features using temporal information. Second, an information bottleneck (IB) is incorporated to achieve more accurate target tracking by maximally limiting the amount of information in the network and discarding irrelevant information. Finally, we propose a temporal adaptive transformer (TA-Trans) that efficiently encodes temporal knowledge and decodes it for similarity graph refinement. The tracker was trained on the 2015 MICCAI Challenge on Liver Ultrasound Tracking (CLUST) dataset, and performance was evaluated by computing the tracking error (TE) between the predicted and ground-truth landmarks in each frame. The experimental results are compared with 13 state-of-the-art methods, and ablation studies are conducted.
Results: On the CLUST 2015 dataset, the proposed model achieves a mean TE of 0.81 ± 0.74 mm and a maximum TE of 1.93 mm for 85 point landmarks across 39 2D ultrasound sequences. Tracking speed ranged from 41 to 63 frames per second (fps).
Conclusions: This study demonstrates a new integrated workflow for motion tracking in ultrasound sequences. The results show that the model has excellent accuracy and robustness. It provides reliable and accurate motion estimation for applications requiring real-time motion estimation in ultrasound-guided radiation therapy.
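The abstract describes the IB as a compression stage inside feature refinement; a minimal sketch of how such a penalty could be attached to a tracker's feature map is shown below, again assuming a standard variational-IB formulation with Gaussian codes. The module name, shapes, and the way the KL term enters the loss are illustrative assumptions; the paper's exact TAdaCNN and TA-Trans designs are not reproduced here.

```python
# Illustrative sketch only: a variational information-bottleneck penalty applied
# to a tracker's backbone features. Names and shapes are assumptions.
import torch
import torch.nn as nn

class IBFeatureRefiner(nn.Module):
    """Compress a backbone feature map into a stochastic code before similarity matching."""
    def __init__(self, in_ch: int = 256, z_ch: int = 128):
        super().__init__()
        self.mu = nn.Conv2d(in_ch, z_ch, kernel_size=1)       # per-location mean
        self.logvar = nn.Conv2d(in_ch, z_ch, kernel_size=1)   # per-location log-variance

    def forward(self, feat):                                   # feat: (B, C, H, W)
        mu, logvar = self.mu(feat), self.logvar(feat)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # sampled compressed feature
        # KL(q(z|x) || N(0, I)) upper-bounds I(X; Z): minimizing it discards
        # frame-specific details that are irrelevant to locating the target.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        return z, kl

# Hypothetical usage: total_loss = tracking_loss + beta * kl, where beta trades off
# localization accuracy against how much information the refined features retain.
```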