A method of noise reduction based on adaptive threshold filtering over a trainable wavelet transform is proposed. Restrictions on the parameters of the trainable transform filters were imposed by a set of quadratic regularization terms. As analogs of the "hard" and "soft" threshold functions, we used their smooth, infinitely differentiable versions. The parameters of the trainable wavelet transform and the threshold values were estimated by backpropagation and a gradient-based optimization algorithm with adaptive moment estimation. The results of the proposed method were compared on a model problem with an approach based on a fixed discrete wavelet transform and non-adaptive global and level-dependent thresholding algorithms. The signal-to-noise ratio between the denoised and clean signals was used to numerically estimate the efficiency of noise reduction. The best results were obtained by the proposed trainable method with fine-tuning of Daubechies 4 wavelet filters and adaptive level-dependent thresholding.
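The abstract does not give the exact smooth threshold functions used, but one common way to build infinitely differentiable analogs of the hard and soft thresholds (a sketch under that assumption, not the paper's implementation) is a sigmoid gate and a `tanh`-based shrinkage:

```python
import numpy as np

def soft_threshold(x, t):
    # Classical (non-smooth) soft threshold: shrink |x| toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def smooth_soft_threshold(x, t):
    # Infinitely differentiable analog of the soft threshold:
    # t * tanh(x / t) smoothly approximates clip(x, -t, t), so
    # x - t * tanh(x / t) behaves like soft_threshold(x, t).
    return x - t * np.tanh(x / t)

def smooth_hard_threshold(x, t, k=20.0):
    # Infinitely differentiable analog of the hard threshold:
    # a sigmoid gate switching on near |x| = t; larger k sharpens
    # the transition toward the exact keep/kill behavior.
    return x / (1.0 + np.exp(-k * (np.abs(x) - t)))
```

Because both smooth versions are differentiable everywhere, the threshold `t` can be trained jointly with the transform filters by backpropagation, which is exactly why such relaxations are preferred over the non-smooth originals.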
The development of new technologies for voice communication has created a need to improve speech enhancement methods. Modern users of information systems place high demands on both the intelligibility of the voice signal and its perceptual quality. In this work we propose a new approach to the problem of speech enhancement. For this purpose, a modified pyramidal transformer neural network with an encoder-decoder structure was developed. The encoder compresses the spectrum of the voice signal into a pyramidal series of internal embeddings. The decoder, using self-attention transformations, reconstructs from these embeddings a complex ratio mask between the clean and noisy signals. Two possible loss functions were considered for training the proposed neural network. It was shown that mixing a frequency encoding into the input data improves the performance of the proposed approach. The neural network was trained and tested on the DNS Challenge 2021 dataset and showed high performance compared to modern speech enhancement methods. A qualitative analysis of the training process showed that the network gradually moved from simple noise masking in the early epochs to restoring the missing formant components of the speaker's voice in later epochs, which led to high performance metrics and high subjective quality of the enhanced speech.
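The complex ratio mask mentioned above relates the clean and noisy spectra by element-wise complex multiplication. A minimal sketch of how such a mask is formed and applied (function names are illustrative, not the paper's API; the network would predict the mask's real and imaginary parts instead of computing them from the clean signal):

```python
import numpy as np

def ideal_complex_ratio_mask(clean_stft, noisy_stft, eps=1e-8):
    # Training target: element-wise ratio of clean to noisy spectra,
    # M = S / Y. eps guards against division by near-zero bins.
    return clean_stft / (noisy_stft + eps)

def apply_complex_ratio_mask(noisy_stft, mask_real, mask_imag):
    # Enhancement step: S_hat = M * Y, an element-wise complex
    # multiplication of the predicted mask with the noisy spectrum.
    mask = mask_real + 1j * mask_imag
    return mask * noisy_stft
```

Operating on both real and imaginary parts lets the mask correct phase as well as magnitude, which is what allows the model to restore missing formant structure rather than only attenuate noise.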
A method of learning optimal orthonormal filters for feature extraction from 1-D signals, based on a learnable wavelet-like transform, is proposed. The filters were learned by backpropagation simultaneously with a neural network used as a classifier. Orthonormality of the filters during learning was enforced by several quadratic regularization terms that follow from the orthogonality of the scaling functions. The proposed method was evaluated on an environmental sound classification task, using the trainable wavelet-like transform and wavelet transforms with different bases as feature extraction methods in front of a fixed neural network architecture. The proposed method obtained the best results. The spectral characteristics of the learned wavelet-like filters were compared with those of the reverse biorthogonal wavelet basis rbior1.5, which achieved the closest accuracy.
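The quadratic regularization terms follow from the shift-orthonormality condition on a scaling filter h: the inner product of h with its even shifts must equal the Kronecker delta, sum_n h[n] h[n + 2k] = delta_k. A minimal sketch of such a penalty (the exact set of terms used in the paper may differ):

```python
import numpy as np

def orthonormality_penalty(h):
    # Quadratic penalty enforcing shift-orthonormality of a scaling
    # filter h: sum_n h[n] * h[n + 2k] should equal 1 for k = 0 and
    # 0 for every other even shift k.
    h = np.asarray(h, dtype=float)
    length = len(h)
    penalty = 0.0
    for k in range(length // 2):
        corr = np.sum(h[: length - 2 * k] * h[2 * k:])
        target = 1.0 if k == 0 else 0.0
        penalty += (corr - target) ** 2
    return penalty
```

Added to the classification loss with a weight, this penalty is zero exactly when the learned filters satisfy the orthonormality conditions (e.g. it vanishes for the Haar filter [1/sqrt(2), 1/sqrt(2)]), so gradient descent can trade it off against task accuracy while keeping the transform close to a valid wavelet-like one.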