In this work we address the disentanglement of style and content in speech signals. We propose a fully convolutional variational autoencoder employing two encoders: a content encoder and a style encoder. To foster disentanglement, we propose adversarial contrastive predictive coding. This new disentanglement method requires neither parallel data nor any supervision. We show that the proposed technique separates speaker and content traits into the two different representations and achieves competitive speaker-content disentanglement performance compared to other unsupervised approaches. We further demonstrate that, when used for phone recognition, the content representation is more robust to a train-test mismatch than spectral features.
In this work we tackle the disentanglement of speaker-related and content-related variations in speech signals. We propose a fully convolutional variational autoencoder employing two encoders: a content encoder and a speaker encoder. To foster disentanglement, we propose adversarial contrastive predictive coding. This new disentanglement method requires neither parallel data nor any supervision, not even speaker labels. With successful disentanglement, the model can perform voice conversion by recombining content and speaker attributes. Because the speaker encoder learns to extract speaker traits from an audio signal, the proposed model not only provides meaningful speaker embeddings but can also perform zero-shot voice conversion, i.e., with previously unseen source and target speakers. Compared to state-of-the-art disentanglement approaches, we show competitive disentanglement and voice conversion performance for speakers seen during training and superior performance for unseen speakers.
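The contrastive predictive coding objective underlying the adversarial disentanglement in the two abstracts above can be sketched with the standard InfoNCE loss. This is a minimal numpy illustration, not the papers' implementation: the function and variable names are invented for this sketch, and the adversarial coupling to the content encoder (e.g., via a gradient-reversal layer) is deliberately omitted.

```python
import numpy as np

def info_nce_loss(contexts, latents, k=3):
    """InfoNCE loss used in contrastive predictive coding (numpy sketch).

    contexts: (T, D) context vectors summarizing frames up to time t
    latents:  (T, D) frame-wise latent codes from the content encoder
    k:        prediction horizon in frames
    The positive pair is (contexts[t], latents[t+k]); latents at other
    time steps serve as negatives within the same utterance.
    """
    scores = contexts[:-k] @ latents[k:].T             # (T-k, T-k) similarities
    scores -= scores.max(axis=1, keepdims=True)        # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    # positives lie on the diagonal: contexts[t] should identify latents[t+k]
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.standard_normal((20, 8))       # toy latent sequence
loss = info_nce_loss(z, z, k=3)        # identity context for brevity
```

In the adversarial setting described above, a CPC discriminator would minimize this loss on the content codes while the content encoder is trained to maximize it, discouraging slowly varying (speaker-like) information from leaking into the content representation.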
Since diarization and source separation of meeting data are closely related tasks, we propose an approach that addresses the two objectives jointly. It builds upon the target-speaker voice activity detection (TS-VAD) diarization approach, which assumes that initial speaker embeddings are available. We replace the final combined speaker activity estimation network of TS-VAD with a network that produces speaker activity estimates at a time-frequency resolution. These estimates act as masks for source extraction, either directly via masking or via beamforming. The technique can be applied to both single-channel and multi-channel input and, in both cases, achieves a new state-of-the-art word error rate (WER) on the LibriCSS meeting data recognition task. We further compute speaker-aware and speaker-agnostic WERs to isolate the contribution of diarization errors to the overall WER performance.
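The beamforming-based extraction step described above can be illustrated with a small numpy sketch of mask-informed MVDR beamforming: time-frequency speaker activity estimates weight the covariance statistics from which the beamformer is derived. All names are illustrative, and the rank-1 steering-vector formulation is a common textbook choice assumed here, not necessarily the paper's exact variant.

```python
import numpy as np

def mvdr_from_masks(stft, target_mask):
    """Mask-informed MVDR beamforming (numpy sketch).

    stft:        (C, T, F) complex multi-channel STFT
    target_mask: (T, F) estimated target-speaker activity in [0, 1] per TF bin
    Returns a single-channel (T, F) estimate of the target speaker.
    """
    C, T, F = stft.shape
    out = np.zeros((T, F), dtype=complex)
    for f in range(F):
        X = stft[:, :, f]                              # (C, T) per frequency
        m = target_mask[:, f]                          # (T,)
        # mask-weighted spatial covariances of target and interference
        phi_t = (m * X) @ X.conj().T / max(m.sum(), 1e-8)
        phi_n = ((1 - m) * X) @ X.conj().T / max((1 - m).sum(), 1e-8)
        phi_n += 1e-6 * np.eye(C)                      # diagonal loading
        # steering vector: principal eigenvector of the target covariance
        steer = np.linalg.eigh(phi_t)[1][:, -1]
        num = np.linalg.solve(phi_n, steer)
        w = num / (steer.conj() @ num)                 # MVDR weights
        out[:, f] = w.conj() @ X                       # beamformer output w^H x
    return out

rng = np.random.default_rng(0)
stft = rng.standard_normal((4, 50, 16)) + 1j * rng.standard_normal((4, 50, 16))
mask = rng.uniform(size=(50, 16))
est = mvdr_from_masks(stft, mask)
```

In the single-channel case, the same time-frequency estimates would instead be applied directly as multiplicative masks on the mixture STFT.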