Designing hardware accelerators for deep neural networks (DNNs) has attracted great interest. However, most existing accelerators are built for either convolutional neural networks (CNNs) or recurrent neural networks (RNNs). Recently, the Transformer model has been replacing the RNN in natural language processing (NLP). Yet, because of the intensive matrix computations and complicated data flow involved, a hardware design for the Transformer model has not previously been reported. In this paper, we propose the first hardware accelerator for its two key components, i.e., the multi-head attention (MHA) ResBlock and the position-wise feed-forward network (FFN) ResBlock, which are the two most complex layers in the Transformer. First, an efficient method is introduced to partition the large matrices in the Transformer, allowing the two ResBlocks to share most of the hardware resources. Second, the computation flow is carefully designed to ensure high utilization of the systolic array, the largest module in our design. Third, complicated nonlinear functions are highly optimized to further reduce the hardware complexity and the latency of the entire system. Our design is coded in a hardware description language (HDL) and evaluated on a Xilinx FPGA. Compared with a GPU implementation under the same settings, the proposed design achieves speed-ups of 14.6× for the MHA ResBlock and 3.4× for the FFN ResBlock. This work therefore lays a solid foundation for building efficient hardware accelerators for Transformer networks.
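The two ResBlocks named above can be sketched in NumPy to make the matrix computations the accelerator targets concrete. This is a minimal illustration of the standard MHA and FFN residual blocks, not the paper's hardware design; all shapes, names, and the omission of layer normalization are assumptions for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mha_resblock(x, Wq, Wk, Wv, Wo, num_heads):
    """Multi-head attention with a residual ("ResBlock") connection.
    x: (seq_len, d_model); weights are (d_model, d_model) for illustration."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    # Split each projection into heads: (num_heads, seq_len, d_head).
    split = lambda m: m.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    Qh, Kh, Vh = split(Q), split(K), split(V)
    # Scaled dot-product attention per head.
    scores = softmax(Qh @ Kh.transpose(0, 2, 1) / np.sqrt(d_head))
    # Merge heads back to (seq_len, d_model), project, add the residual.
    heads = (scores @ Vh).transpose(1, 0, 2).reshape(seq_len, d_model)
    return x + heads @ Wo

def ffn_resblock(x, W1, b1, W2, b2):
    """Position-wise feed-forward network with a residual connection."""
    return x + (np.maximum(x @ W1 + b1, 0) @ W2 + b2)
```

Note how both blocks reduce to chains of large matrix multiplications; partitioning those matrices so both blocks can reuse one systolic array is the sharing opportunity the abstract describes.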
Neural network-based models for text-to-speech (TTS) synthesis have made significant progress in recent years. In this paper, we present a cross-lingual, multi-speaker neural end-to-end TTS framework that can model speaker characteristics and synthesize speech in different languages. We implement the model by introducing a separately trained neural speaker-embedding network, which can represent the latent structure of different speakers and language pronunciations. We train the speech synthesis network bilingually and demonstrate the feasibility of synthesizing English speech for a Chinese speaker and vice versa. We also explore different methods for adapting to a new speaker using only a few speech samples. Experimental results show that, with only several minutes of audio from a new speaker, the proposed model can synthesize speech bilingually and achieve decent naturalness and similarity in both languages.
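A common way to inject a separately trained speaker embedding into a TTS network is to broadcast the fixed embedding across encoder timesteps and concatenate it, so the decoder sees speaker identity at every step. The sketch below shows only that generic conditioning pattern; the shapes, the concatenation strategy, and the function name are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

def condition_on_speaker(encoder_outputs, speaker_embedding):
    """Concatenate a per-speaker embedding onto every encoder timestep.

    encoder_outputs:   (T, d_enc) text-encoder states
    speaker_embedding: (d_spk,)   fixed vector from the embedding network
    returns:           (T, d_enc + d_spk) speaker-conditioned states
    """
    T = encoder_outputs.shape[0]
    tiled = np.tile(speaker_embedding, (T, 1))            # (T, d_spk)
    return np.concatenate([encoder_outputs, tiled], axis=-1)
```

Because the embedding network is trained separately, adapting to a new speaker can amount to computing a new embedding vector from a few samples and feeding it through this same conditioning path.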
The objective of this study was to explore the neuroprotective effect of moxibustion on rats with Parkinson’s disease (PD) and its mechanism. A PD model was established in rats by two-point stereotactic injection of 6-hydroxydopamine into the right substantia nigra (SN) and ventral tegmental area. The rats received moxibustion at the Baihui (GV20) and Sishencong (EX-HN1) acupoints for 20 minutes, six times a week, for 6 weeks. The right SN tissue was examined histologically and immunohistochemically. Differentially expressed genes (DEGs) were identified through RNA sequencing. In addition, the levels of tyrosine hydroxylase (TH), glutathione peroxidase 4 (GPX4), and ferritin heavy chain 1 (FTH1) in the SN were measured. Compared with the model group, the moxibustion group showed significantly greater TH immunoreactivity and a higher behavioural score. In particular, moxibustion increased the number and morphological stability of SN neural cells. Functional pathway analysis showed that the DEGs were closely related to the ferroptosis pathway. GPX4 and FTH1 were significantly overexpressed in the SN of moxibustion-treated rats with PD. Moxibustion can effectively reduce the death of SN neurons, decrease the occurrence of ferroptosis, and increase TH activity to protect neurons in rats with PD. The protective mechanism may be associated with suppression of ferroptosis.