Epilepsy is one of the most common brain disorders and may result in brain dysfunction and cognitive disturbances. Epileptic seizures often begin in childhood without accompanying brain damage and can be controlled with drugs that produce no brain dysfunction. In this study, cognitive function is evaluated in children with mild epileptic seizures controlled with common antiepileptic drugs. To this end, we propose a concise technical framework that combines and validates both linear and nonlinear methods to efficiently evaluate (in terms of synchronization) neurophysiological activity during a visual cognitive task consisting of fractal pattern observation. We investigate six measures for quantifying synchronous oscillatory activity based on different underlying assumptions. These measures include coherence computed with the traditional formula and an alternative evaluation that relies on autoregressive models, an information-theoretic measure known as minimum description length, a robust phase-coupling measure known as the phase-locking value, a reliable way of assessing generalized synchronization in state space, and an unbiased alternative called synchronization likelihood. Assessment is performed in three stages: first, the nonlinear methods are validated on coupled nonlinear oscillators under increasing noise interference; second, surrogate data testing is performed to assess possible nonlinear channel interdependencies of the acquired EEGs by comparing the synchronization indices under the null hypothesis of stationary, linear dynamics; and finally, synchronization on the actual data is measured. The results suggest a significant difference between normal controls and epileptic patients, most apparent in the occipital and parietal lobes during the fractal observation task.
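To make one of these measures concrete, the sketch below shows how the phase-locking value (PLV) between two signals can be computed from instantaneous phases obtained via the Hilbert transform. This is a minimal illustration of the general technique, not the authors' implementation; the function name, the synthetic test signals, and the parameters are assumptions for the example.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two equally long signals.

    PLV = |mean(exp(i * (phi_x - phi_y)))|, where the instantaneous
    phases phi are taken from the analytic signal (Hilbert transform).
    PLV is near 1 for consistent phase coupling and near 0 for
    independent phases.
    """
    phi_x = np.angle(hilbert(x))  # instantaneous phase of x
    phi_y = np.angle(hilbert(y))  # instantaneous phase of y
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

# Example: two 10 Hz oscillations with a fixed phase lag plus noise
fs = 256
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + np.pi / 4) + 0.5 * rng.standard_normal(t.size)
print(f"PLV = {phase_locking_value(x, y):.2f}")  # close to 1 (phase-locked)
```

In practice the phases would come from band-pass-filtered EEG channels rather than synthetic sinusoids, and the measure would be compared against surrogate data as described above.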
Recent speech technology research has seen a growing interest in using WaveNets as statistical vocoders, i.e., generating speech waveforms from acoustic features. These models have been shown to improve generated speech quality over classical vocoders in many tasks, such as text-to-speech synthesis and voice conversion. Furthermore, conditioning WaveNets on acoustic features allows the waveform generator model to be shared across multiple speakers without additional speaker codes. However, multi-speaker WaveNet models require large amounts of training data and computation to cover the entire acoustic space. This paper proposes leveraging the source-filter model of speech production to more effectively train a speaker-independent waveform generator with limited resources. We present a multi-speaker 'GlotNet' vocoder, which uses a WaveNet to generate glottal excitation waveforms that are then used to excite the corresponding vocal tract filter and produce speech. Listening tests show that the proposed model compares favourably to a direct WaveNet vocoder trained with the same model architecture and data.
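To illustrate the source-filter synthesis step the abstract describes, here is a minimal sketch in which an excitation signal is passed through an all-pole vocal tract filter. Everything in it is a placeholder assumption: the impulse-train excitation stands in for the WaveNet-generated glottal waveform, and the filter coefficients are made up rather than derived from LPC analysis of real acoustic features.

```python
import numpy as np
from scipy.signal import lfilter

fs = 16000       # sample rate (Hz)
f0 = 120         # fundamental frequency of the excitation (Hz)
n = fs // 2      # half a second of audio

# Placeholder glottal excitation: a simple impulse train. In GlotNet this
# waveform would instead be generated sample by sample by the WaveNet.
excitation = np.zeros(n)
excitation[:: fs // f0] = 1.0

# Illustrative all-pole vocal tract filter 1 / A(z). In a real vocoder the
# denominator coefficients come from frame-wise LPC analysis; these values
# are invented for the sketch.
a = np.array([1.0, -1.3, 0.8, -0.3, 0.1])  # A(z) coefficients
speech = lfilter([1.0], a, excitation)     # excite the vocal tract filter
```

The design point is that the neural network only has to model the comparatively speaker-independent excitation, while the speaker-dependent spectral envelope is handled by the conventional filter, which is what allows the generator to be trained with limited data.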