Abstract: Error control coding can be used over free-space optical (FSO) links to mitigate turbulence-induced fading. In this paper, we derive error performance bounds for coded FSO communication systems operating over atmospheric turbulence channels, considering the recently introduced gamma-gamma turbulence model. We derive a pairwise error probability (PEP) expression and then apply the transfer function technique in conjunction with the derived PEP to obtain upper bounds on the bit error rate. Simulation results are also presented to confirm the analytical results.

Index Terms: Atmospheric turbulence channel, free-space optical communication, pairwise error probability, error performance analysis.
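For context, the gamma-gamma turbulence model referred to in this abstract describes the received irradiance I with the following probability density function (this is the standard form from the scintillation literature, stated here for reference; the paper's own bound derivation builds on it):

```latex
f_I(I) \;=\; \frac{2(\alpha\beta)^{(\alpha+\beta)/2}}{\Gamma(\alpha)\,\Gamma(\beta)}\,
I^{\frac{\alpha+\beta}{2}-1}\,
K_{\alpha-\beta}\!\left(2\sqrt{\alpha\beta I}\right),
\qquad I > 0,
```

where Γ(·) is the gamma function, K_ν(·) is the modified Bessel function of the second kind of order ν, and α and β parameterize the large- and small-scale irradiance fluctuations, respectively.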
Deep neural networks are gaining increasing popularity for the classic text classification task, due to their strong expressive power and reduced need for feature engineering. Despite such attractiveness, neural text classification models suffer from the lack of training data in many real-world applications. Although many semi-supervised and weakly-supervised text classification models exist, they cannot be easily applied to deep neural models and support only limited types of supervision. In this paper, we propose a weakly-supervised method that addresses the lack of training data in neural text classification. Our method consists of two modules: (1) a pseudo-document generator that leverages seed information to generate pseudo-labeled documents for model pre-training, and (2) a self-training module that bootstraps on real unlabeled data for model refinement. Our method has the flexibility to handle different types of weak supervision and can be easily integrated into existing deep neural models for text classification. We have performed extensive experiments on three real-world datasets from different domains. The results demonstrate that our proposed method achieves strong performance without requiring excessive training data and significantly outperforms baseline methods.
Current text classification methods typically require a substantial number of human-labeled documents as training data, which can be costly and difficult to obtain in real applications. Humans can perform classification without seeing any labeled examples, relying only on a small set of words describing the categories to be classified. In this paper, we explore the potential of using only the label name of each class to train classification models on unlabeled data, without using any labeled documents. We use pre-trained neural language models both as general linguistic knowledge sources for category understanding and as representation learning models for document classification. Our method (1) associates semantically related words with the label names, (2) finds category-indicative words and trains the model to predict their implied categories, and (3) generalizes the model via self-training. We show that our model achieves around 90% accuracy on four benchmark datasets including topic and sentiment classification without using any labeled documents, learning from unlabeled data supervised by at most 3 words (1 in most cases) per class as the label name.
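The seed-word pseudo-labeling and self-training loop described in these two abstracts can be illustrated with a minimal toy sketch. This is a hypothetical illustration, not the authors' actual models: it swaps the neural classifiers for a simple multinomial Naive Bayes, and the seed dictionary, confidence threshold, and round count are illustrative assumptions.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Fit a multinomial Naive Bayes model from (tokens, label) pairs."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return word_counts, class_counts, vocab

def predict(model, tokens):
    """Return (best_label, confidence) with add-one smoothing."""
    word_counts, class_counts, vocab = model
    total_docs = sum(class_counts.values())
    scores = {}
    for label in class_counts:
        total_words = sum(word_counts[label].values())
        score = math.log(class_counts[label] / total_docs)
        for tok in tokens:
            score += math.log((word_counts[label][tok] + 1) /
                              (total_words + len(vocab)))
        scores[label] = score
    # Softmax over log-scores to get a rough confidence estimate.
    m = max(scores.values())
    exps = {lbl: math.exp(s - m) for lbl, s in scores.items()}
    z = sum(exps.values())
    best = max(exps, key=exps.get)
    return best, exps[best] / z

def self_train(seed_words, unlabeled, rounds=3, threshold=0.7):
    """Pseudo-label with seed words, then iteratively self-train."""
    # Step 1: pseudo-label documents matching exactly one class's seeds.
    labeled, pool = [], []
    for tokens in unlabeled:
        hits = [c for c, ws in seed_words.items() if set(ws) & set(tokens)]
        (labeled if len(hits) == 1 else pool).append(
            (tokens, hits[0]) if len(hits) == 1 else tokens)
    # Step 2: repeatedly retrain and absorb confident predictions.
    for _ in range(rounds):
        model = train_nb(labeled)
        remaining = []
        for tokens in pool:
            label, conf = predict(model, tokens)
            if conf >= threshold:
                labeled.append((tokens, label))
            else:
                remaining.append(tokens)
        pool = remaining
    return train_nb(labeled)
```

Documents containing a seed word act as the pseudo-labeled bootstrap set; each round, unlabeled documents the current model classifies confidently are added and the model is refit, which is the same bootstrapping idea both abstracts describe at a much larger scale.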
The first postnatal years are an exceptionally dynamic and critical period of structural, functional, and connectivity development of the human brain. The increasing availability of non-invasive infant brain MR images provides unprecedented opportunities for accurate and reliable charting of dynamic early brain developmental trajectories in understanding normative and aberrant growth. However, infant brain MR images typically exhibit reduced tissue contrast (especially around 6 months of age), large within-tissue intensity variations, and regionally heterogeneous, dynamic changes, in comparison with adult brain MR images. Consequently, existing computational tools, which are typically developed for adult brains, are not suitable for infant brain MR image processing. To address these challenges, many infant-tailored computational methods have been proposed for computational neuroanatomy of infant brains. In this review paper, we provide a comprehensive review of the state-of-the-art computational methods for infant brain MRI processing and analysis, which have advanced our understanding of early postnatal brain development. We also summarize publicly available infant-dedicated resources, including MRI datasets, computational tools, grand challenges, and brain atlases. Finally, we discuss the limitations in current research and suggest potential future research directions.
Audio-visual multi-modal modeling has been demonstrated to be effective in many speech-related tasks, such as speech recognition and speech enhancement. This paper introduces a new time-domain audio-visual architecture for target speaker extraction from monaural mixtures. The architecture generalizes the previous TasNet (time-domain speech separation network) to enable multi-modal learning and, meanwhile, extends classical audio-visual speech separation from the frequency domain to the time domain. The main components of the proposed architecture include an audio encoder, a video encoder that extracts lip embeddings from video streams, a multi-modal separation network, and an audio decoder. Experiments on simulated mixtures based on the recently released LRS2 dataset show that our method brings 3 dB+ and 4 dB+ Si-SNR improvements on two- and three-speaker cases, respectively, compared to audio-only TasNet and frequency-domain audio-visual networks.