Epilepsy is a common brain disease that affects about 1% of the world’s population. Seizure prediction can improve the lives of patients with epilepsy and has attracted increasing attention in recent years. In this paper, we propose a novel hybrid deep learning model that combines a Dense Convolutional Network (DenseNet) and Long Short-Term Memory (LSTM) for epileptic seizure prediction using EEG data. The proposed method first converts the EEG data into the time-frequency domain through the Discrete Wavelet Transform (DWT) to form the input of the model. The transformed time-frequency images are then fed into a hybrid model that combines DenseNet and LSTM. To evaluate the performance of the proposed method, experiments are conducted on the CHB-MIT scalp EEG dataset for preictal lengths of 5, 10, and 15 min. With a preictal length of 5 min, we obtained a prediction accuracy of 93.28%, a sensitivity of 92.92%, a specificity of 93.65%, a false positive rate of 0.063 per hour, and an F1-score of 0.923. Finally, a comparison with previous studies confirms that the proposed method significantly improves seizure prediction performance.
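A minimal sketch of the described hybrid is given below, assuming PyTorch and torchvision; the module names, layer sizes, and single-LSTM-layer configuration are illustrative assumptions rather than the authors’ exact architecture. Each DWT-derived time-frequency image in a sequence is passed through a DenseNet feature extractor, and the resulting feature sequence is classified by an LSTM.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

class DenseNetLSTM(nn.Module):
    """Hypothetical DenseNet + LSTM hybrid: a DenseNet backbone extracts
    features from each time-frequency image, and an LSTM models the
    sequence of segments (layer sizes are illustrative)."""
    def __init__(self, hidden_size=128, num_classes=2):
        super().__init__()
        backbone = densenet121(weights=None)
        self.features = backbone.features            # -> (B*T, 1024, H', W')
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(1024, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                            # x: (B, T, 3, H, W)
        b, t = x.shape[:2]
        f = self.features(x.flatten(0, 1))           # fold time into batch
        f = self.pool(f).flatten(1).view(b, t, -1)   # (B, T, 1024)
        out, _ = self.lstm(f)
        return self.classifier(out[:, -1])           # preictal vs. interictal

# Example: a batch of 4 sequences, each with 8 time-frequency images
logits = DenseNetLSTM()(torch.randn(4, 8, 3, 64, 64))
```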
We propose a frame loss concealment technique for decoders compatible with MPEG advanced audio coding (AAC). The spectral information of the lost frame is estimated in the modified discrete cosine transform (MDCT) domain via efficient techniques tailored to the individual source signal components: in noise-like spectral bins, the MDCT coefficients are obtained by shaped-noise insertion, while coefficients in tone-dominant bins are estimated by frame interpolation followed by a refinement procedure that optimizes the fit of the concealed frame with its neighboring frames. Experimental results demonstrate that the proposed technique outperforms techniques adopted in commercial AAC decoders.
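As a rough illustration of the two-path idea, the NumPy sketch below interpolates tone-dominant bins and fills noise-like bins with shaped noise; the inputs (neighboring frames’ MDCT coefficients and a tonal-bin mask) are assumed to be available, and the refinement step that optimizes continuity with neighboring frames is omitted. This is a simplified sketch, not the paper’s exact algorithm.

```python
import numpy as np

def conceal_lost_frame(prev_mdct, next_mdct, tonal_mask, rng=None):
    """Illustrative two-path concealment in the MDCT domain (simplified).
    prev_mdct/next_mdct: coefficients of the frames surrounding the lost one;
    tonal_mask: boolean array marking tone-dominant bins."""
    rng = np.random.default_rng() if rng is None else rng
    est = np.empty_like(prev_mdct)

    # Tone-dominant bins: interpolate the magnitude between the neighbours and
    # start from the previous frame's sign (a refinement pass would then adjust
    # the estimate for best continuity with the neighbouring frames).
    mag = 0.5 * (np.abs(prev_mdct[tonal_mask]) + np.abs(next_mdct[tonal_mask]))
    est[tonal_mask] = np.where(prev_mdct[tonal_mask] < 0, -mag, mag)

    # Noise-like bins: shaped-noise insertion, i.e. random-sign noise whose
    # envelope follows the local energy of the neighbouring frames.
    noise = ~tonal_mask
    env = 0.5 * (np.abs(prev_mdct[noise]) + np.abs(next_mdct[noise]))
    est[noise] = env * rng.choice([-1.0, 1.0], size=env.shape)
    return est
```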
This paper proposes an efficient method for frame loss concealment within the advanced audio coding (AAC) decoder, which effectively mitigates the adverse impact of transmission errors on reconstruction quality. The lost frame information is estimated in the modified discrete cosine transform (MDCT) domain in terms of the magnitude and sign of the coefficients. A computationally efficient approach capable of providing accurate estimates is employed for the magnitudes, and an enhanced sign estimation that exploits extra information from the encoder is developed and implemented. Extensive subjective quality evaluation demonstrates that the proposed method achieves substantial quality improvement with minimal encoder assistance.
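The sketch below illustrates the general idea under simplifying assumptions (NumPy; the `sign_hints` interface stands in for the minimal encoder-side information and is not the paper’s bitstream format): magnitudes of the lost frame are interpolated from the neighboring frames, while signs fall back to the previous frame except where an encoder hint is available.

```python
import numpy as np

def conceal_with_sign_hints(prev_mdct, next_mdct, sign_hints):
    """Minimal sketch of encoder-assisted concealment (assumed interface).
    Magnitudes are interpolated from neighbouring frames; signs come from a
    few transmitted hints for selected bins and otherwise reuse the previous
    frame. sign_hints: dict {bin_index: +1.0 or -1.0}."""
    mag = 0.5 * (np.abs(prev_mdct) + np.abs(next_mdct))   # magnitude estimate
    sign = np.sign(prev_mdct)
    sign[sign == 0] = 1.0                                 # avoid zero signs
    for k, s in sign_hints.items():                       # apply encoder hints
        sign[k] = s
    return sign * mag
```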
A novel approach is proposed for effective high-frequency regeneration in audio coding, based on a sinusoids-plus-noise model. It assumes a standard high-efficiency advanced audio coding (HE-AAC) encoder and modifies the decoder to exploit all available information in estimating the model parameters. The frequency parameters of the high-band sinusoids are estimated from the lower-band reconstruction of the core AAC. Side information about spectral energy and the regenerated high band of standard HE-AAC are employed to estimate the magnitude parameters of the high-band sinusoids as well as the noise model parameters. The gains achieved by the proposed technique over conventional HE-AAC are demonstrated by subjective quality tests carried out on audio signals with significant harmonics in the high band.
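A toy NumPy sketch of the sinusoids-plus-noise idea follows; the fundamental-frequency estimate, harmonic count, noise mix, and energy-scaling interface are all illustrative assumptions, not the HE-AAC or paper parameters.

```python
import numpy as np

def regenerate_high_band(low_band, fs, f_cut, band_energy, max_sines=8):
    """Toy sinusoids-plus-noise high-band regeneration (illustrative only).
    low_band: decoded core-AAC low-band samples; f_cut: crossover frequency;
    band_energy: transmitted high-band energy (side information)."""
    n = len(low_band)
    spec = np.abs(np.fft.rfft(low_band * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    f0 = freqs[np.argmax(spec[1:]) + 1]          # crude fundamental from low band

    # Sinusoids: extend the harmonic series of f0 into the band above f_cut.
    t = np.arange(n) / fs
    harmonics = [k * f0 for k in range(1, int((fs / 2) // f0) + 1) if k * f0 > f_cut]
    sines = sum(np.sin(2 * np.pi * f * t) for f in harmonics[:max_sines])

    # Noise component, then scale to match the transmitted energy side info.
    high = sines + 0.3 * np.random.default_rng(0).standard_normal(n)
    return high * np.sqrt(band_energy / np.mean(high ** 2))
```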