Abstractive Sentence Summarization (ASSUM) aims to grasp the core idea of a source sentence and present it as a summary. It has been extensively studied with statistical and neural models trained on large-scale monolingual source-summary parallel corpora. However, no cross-lingual parallel corpus, in which the source-sentence language differs from the summary language, is available to directly train a cross-lingual ASSUM system. We propose to solve this zero-shot problem by using a resource-rich monolingual ASSUM system to teach a zero-shot cross-lingual ASSUM system on both summary word generation and attention. This teaching process is accompanied by a back-translation process that simulates source-summary pairs. Experiments on the cross-lingual ASSUM task show that our proposed method significantly outperforms pipeline baselines and previous works, bringing cross-lingual performance much closer to monolingual performance. We release the code and data at https://github.com/KelleyYin/Cross-lingual-Summarization.
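The teaching scheme described in this abstract can be pictured, in a hypothetical minimal form, as a distillation objective: the monolingual teacher's word distributions and attention weights supervise the cross-lingual student. The function name `teach_loss` and the mixing weight `alpha` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def teach_loss(teacher_probs, student_probs, teacher_attn, student_attn, alpha=0.5):
    """Hypothetical teacher-student objective: KL divergence on summary-word
    distributions plus mean-squared error on attention weights."""
    eps = 1e-12
    # Word-generation term: student mimics the teacher's output distribution.
    kl = np.sum(teacher_probs * (np.log(teacher_probs + eps) - np.log(student_probs + eps)))
    # Attention term: student's source-summary attention tracks the teacher's.
    attn_mse = np.mean((teacher_attn - student_attn) ** 2)
    return alpha * kl + (1.0 - alpha) * attn_mse

# A perfectly imitating student incurs zero loss.
p = np.array([0.7, 0.2, 0.1])
a = np.array([[0.6, 0.4], [0.3, 0.7]])
print(teach_loss(p, p, a, a))  # → 0.0
```

In this sketch the two terms jointly transfer both *what* to generate (word distributions) and *where* to look (attention), which is the pairing the abstract describes.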
Efforts to impart responsiveness to environmental stimuli in artificial hydrogel fibers are crucial for intelligent, shape-memory electronics and weavable soft robots. However, owing to their weak mechanical properties, poor processability, and the dearth of scalable assembly protocols, such functional hydrogel fibers are still far from practical use. Herein, we demonstrate an approach to the continuous fabrication of an electro-responsive hydrogel fiber using a self-lubricated spinning (SLS) strategy. The polyelectrolyte inside the hydrogel fiber endows it with a fast electro-response. After solvent exchange with triethylene glycol (TEG), the maximum tensile strength of the hydrogel fiber increases from 114 kPa to 5.6 MPa, far superior to the hydrogel fiber-based actuators reported previously. Consequently, the flexible and mechanically stable hydrogel fiber can be knitted on demand into various complex geometries, such as a crochet flower, a triple knot, a thread tube, a pentagram, and a hollow cage. Additionally, the electrochemically responsive ionic hydrogel fiber can act as an underwater soft robot that mimics biological motions, such as Mobula-like flapping, jellyfish-like grabbing, sea-worm-like multi-degree-of-freedom movements, and human-finger-like smart gesturing. This work not only demonstrates the large-scale production of previously infeasible hydrogel fibers but also provides a solution for the rational design and fabrication of hydrogel-woven intelligent devices.
Metal–organic frameworks (MOFs) with long-term stability and reversibly high water uptake are ideal candidates for water harvesting and indoor humidity control. Now, a mesoporous and highly stable MOF, BIT-66 (V3(O)3(H2O)(BTB)2), is presented that combines indoor humidity control with a photocatalytic bacteriostatic effect. BIT-66 possesses prominent moisture tunability in the 45–60% RH range, with a water uptake and working capacity of 71 and 55 wt%, respectively, and shows good recyclability and excellent performance over water adsorption–desorption cycles. Importantly, this MOF demonstrates unique photocatalytic bacteriostatic behavior under visible light, which can effectively mitigate the bacteria and/or mold breeding problem common in water-adsorbing materials.
Transfer learning between different language pairs has shown its effectiveness for Neural Machine Translation (NMT) in low-resource scenarios. However, existing transfer methods involving a common target language fall short in the extreme scenario of zero-shot translation because of the language-space mismatch between the transferor (the parent model) and the transferee (the child model) on the source side. To address this challenge, we propose an effective transfer learning approach based on cross-lingual pre-training. Our key idea is to make all source languages share the same feature space, thus enabling a smooth transition for zero-shot translation. To this end, we introduce one monolingual pre-training method and two bilingual pre-training methods to obtain a universal encoder for different languages. Once the universal encoder is constructed, the parent model built on it is trained with large-scale annotated data and then applied directly to the zero-shot translation scenario. Experiments on two public datasets show that our approach significantly outperforms a strong pivot-based baseline and various multilingual NMT approaches.
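A minimal sketch of the transfer idea, under illustrative assumptions (toy embeddings and mean pooling stand in for the real encoder; none of this is the paper's architecture): a universal encoder maps every source language into one shared feature space, so a decoder trained on the parent pair can consume the child pair's representations unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 4  # toy feature dimension

# Pre-training (hypothetical): all source languages share one embedding table,
# so "hello" (parent source) and "bonjour" (child source) live in one space.
shared_embed = {tok: rng.normal(size=DIM) for tok in ["hello", "bonjour", "world", "monde"]}

def universal_encode(tokens):
    """Universal-encoder stand-in: mean-pool the shared embeddings."""
    return np.mean([shared_embed[t] for t in tokens], axis=0)

# The parent decoder is trained on DIM-dim vectors; at test time the zero-shot
# child source feeds it vectors from the very same space -- no retraining.
parent_repr = universal_encode(["hello", "world"])
child_repr = universal_encode(["bonjour", "monde"])
print(parent_repr.shape == child_repr.shape)  # → True
```

The point of the sketch is the shape check at the end: because both languages pass through one encoder, the downstream model never sees a source-side mismatch.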
We investigate the task of constraining NMT with pre-specified translations, which has practical significance for a number of research and industrial applications. Existing works impose pre-specified translations as lexical constraints during decoding, which are based on word alignments derived from target-to-source attention weights. However, multiple recent studies have found that word alignment derived from generic attention heads in the Transformer is unreliable. We address this problem by introducing a dedicated head in the multi-head Transformer architecture to capture external supervision signals. Results on five language pairs show that our method is highly effective in constraining NMT with pre-specified translations, consistently outperforming previous methods in translation quality.
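One way to picture the dedicated-head idea (a hypothetical sketch, not the paper's exact training code): a single attention head receives an explicit alignment loss pushing its weights toward gold source positions, while the remaining generic heads stay unsupervised and free to serve translation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def alignment_loss(dedicated_scores, gold_src_positions):
    """Cross-entropy that supervises ONE head's target-to-source attention
    with external alignment links; other heads receive no such signal."""
    attn = softmax(dedicated_scores, axis=-1)  # shape: (tgt_len, src_len)
    eps = 1e-12
    picked = attn[np.arange(len(gold_src_positions)), gold_src_positions]
    return -np.mean(np.log(picked + eps))

# Scores sharply peaked at the gold links yield a near-zero loss.
scores = np.full((2, 3), -10.0)
scores[0, 1] = 10.0  # target word 0 aligns to source word 1
scores[1, 2] = 10.0  # target word 1 aligns to source word 2
print(alignment_loss(scores, np.array([1, 2])) < 1e-6)  # → True
```

At decoding time, the argmax of this supervised head's weights gives word alignments reliable enough to place pre-specified translations, which is what the generic heads fail to provide.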