In this paper, we present a software pipeline for speech recognition that automates the creation of training datasets from chosen unlabeled audio, targeting low-resource languages and domain-specific areas. As speech recognition becomes commoditized, more teams are building domain-specific models as well as models for local languages. At the same time, the lack of training datasets for low- to middle-resource languages significantly reduces the opportunities to exploit the latest achievements and frameworks in the speech recognition area and keeps a wide range of software engineers from working on speech recognition problems. The problem is even more acute for domain-specific datasets. The pipeline was tested by building Ukrainian language recognition and confirmed that the design is adaptable to different data source formats and can be extended to integrate with existing frameworks.
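A minimal sketch of such a dataset-building pipeline, assuming a seed ASR model is available for labeling, is given below. The segment_by_silence helper is simplified to treat each file as one segment, and transcribe is a hypothetical hook for whatever recognizer is plugged in; neither is the paper's actual implementation.

    import csv
    import wave
    from pathlib import Path

    def segment_by_silence(wav_path: Path) -> list[tuple[float, float]]:
        # Stand-in for a real VAD/silence-splitting step: here the whole
        # file becomes a single (start_sec, end_sec) segment.
        with wave.open(str(wav_path)) as w:
            return [(0.0, w.getnframes() / w.getframerate())]

    def transcribe(wav_path: Path, start: float, end: float) -> str:
        # Hypothetical hook: call an existing ASR model here to produce
        # a draft transcript for the segment (empty placeholder for now).
        return ""

    def build_dataset(audio_dir: Path, manifest_path: Path) -> None:
        # Pair every detected speech segment with an automatic transcript
        # and write a CSV manifest that ASR training frameworks can ingest.
        with open(manifest_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["audio", "start", "end", "text"])
            for wav in sorted(audio_dir.glob("*.wav")):
                for start, end in segment_by_silence(wav):
                    writer.writerow([str(wav), f"{start:.2f}", f"{end:.2f}",
                                     transcribe(wav, start, end)])

In practice the two helpers would be replaced by a real voice activity detector and a pretrained recognizer; the manifest format is one common convention, not a requirement of the approach.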
Language models are currently undergoing a dramatic change in their ability to provide state-of-the-art accuracy on a number of Natural Language Processing tasks. These improvements open many possibilities for solving NLP downstream tasks, including machine translation, speech recognition, information retrieval, sentiment analysis, summarization, question answering, multilingual dialogue system development, and more. Language models are one of the most important components in solving each of these tasks. This paper is devoted to the research and analysis of the most widely adopted techniques and designs for building and training language models that show state-of-the-art results. It surveys the techniques and components applied in creating language models and their parts, paying attention to neural networks, embedding mechanisms, bidirectionality, encoder-decoder architectures, attention and self-attention, and parallelization through the Transformer. Results: the most promising techniques involve pretraining and fine-tuning a language model, an attention-based neural network as part of the model design, and a complex ensemble of multidimensional embeddings to build deep context understanding. The latest architectures based on these approaches require substantial computational power to train, and reducing this cost is a direction for further improvement.
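As a worked illustration of the attention mechanism mentioned above, the sketch below computes single-head scaled dot-product self-attention with NumPy; the dimensions and random projection matrices are illustrative only, not taken from any model in the paper.

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        # X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projections.
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])         # pairwise similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax per token
        return weights @ V                              # context-mixed values

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 16))                        # 5 tokens, d_model=16
    Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)          # -> (5, 8)

Because every token attends to every other token in one matrix product, the computation parallelizes well, which is the property the Transformer exploits.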
Current trends in NLP emphasize universal models and learning from pre-trained models. This article explores these trends and advanced pre-trained models. Inputs are converted into token or contextual embeddings that serve as inputs to encoders and decoders. The corpus of the author's publications over the past six years is used as the object of the research. The main research methods are the analysis of the scientific literature, prototyping, and experimental use of systems in the research direction. Speech recognition players divide into those with huge computing resources, for whom training on large unlabeled datasets is a routine procedure, and those who, lacking such resources, focus on training small local speech recognition models on pre-labeled audio data. Approaches and frameworks for working with unlabeled data under limited computing resources are almost absent, and methods based on iterative training remain undeveloped and require scientific effort. The research aims to develop methods of iterative training on unlabeled audio data that yield production-ready speech recognition models with higher accuracy under limited resources. A separate section proposes methods of data preparation for training speech recognition systems and a pipeline for the automatic training of speech recognition systems using pseudo-labeling of audio data. A prototype and the solution of a real business problem of emotion detection demonstrate the capabilities and limitations of speech recognition and emotional state recognition systems. With the proposed pseudo-labeling methods, it is possible to reach recognition accuracy close to the market leaders without significant investment in computing resources, and for languages with little open data the leaders can even be surpassed.
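The iterative training on unlabeled data referred to above can be sketched as a confidence-filtered self-training loop. In the sketch below, train_model and predict_with_confidence are hypothetical stand-ins for a real ASR training run and a decoder that reports its confidence, and the number of rounds and the 0.9 threshold are illustrative choices, not values from the paper.

    from typing import Callable

    def self_training(
        labeled: list[tuple[str, str]],    # (audio_path, transcript) pairs
        unlabeled: list[str],              # audio paths without transcripts
        train_model: Callable,             # data -> trained model
        predict_with_confidence: Callable, # (model, audio) -> (text, conf)
        rounds: int = 3,
        threshold: float = 0.9,
    ):
        model = train_model(labeled)
        for _ in range(rounds):
            # Pseudo-label the unlabeled pool, keeping only confident outputs.
            pseudo = []
            for audio in unlabeled:
                text, confidence = predict_with_confidence(model, audio)
                if confidence >= threshold:
                    pseudo.append((audio, text))
            # Retrain on the gold labels plus the accepted pseudo-labels.
            model = train_model(labeled + pseudo)
        return model

Each round expands the effective training set with the model's own most reliable transcripts, which is how accuracy can improve without additional hand-labeled data.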