Neural Machine Translation (NMT) based on the encoder-decoder architecture has recently become a new paradigm. Researchers have shown that target-side monolingual data can greatly enhance the decoder of NMT. However, source-side monolingual data has not been fully explored, although it should be useful for strengthening the encoder, especially when the parallel corpus is far from sufficient. In this paper, we propose two approaches to make full use of source-side monolingual data in NMT. The first employs a self-learning algorithm to generate large-scale synthetic parallel data for NMT training. The second applies a multi-task learning framework in which two NMT models simultaneously predict the translation and the reordered source-side monolingual sentences. Extensive experiments demonstrate that the proposed methods obtain significant improvements over a strong attention-based NMT baseline.
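As a rough illustration of the first approach, the sketch below shows a self-learning loop: a baseline model trained on the real parallel corpus translates source-side monolingual sentences into pseudo-targets, and the model is then retrained on the union of real and synthetic pairs. This is a minimal sketch under assumptions; the injected train_fn / translate_fn callables are hypothetical placeholders, not the paper's actual implementation.

```python
# Minimal sketch of self-learning with source-side monolingual data.
# train_fn and translate_fn are injected placeholders so the sketch stays
# agnostic about the underlying NMT toolkit.

from typing import Callable, List, Tuple

Pair = Tuple[str, str]  # (source sentence, target sentence)

def self_learning(
    parallel: List[Pair],
    source_mono: List[str],
    train_fn: Callable[[List[Pair]], object],    # trains and returns an NMT model
    translate_fn: Callable[[object, str], str],  # (model, source) -> target hypothesis
    rounds: int = 1,
):
    model = train_fn(parallel)                   # baseline model on real parallel data
    for _ in range(rounds):
        # Label monolingual source sentences with pseudo-targets.
        synthetic = [(s, translate_fn(model, s)) for s in source_mono]
        # Retrain on real + synthetic pairs.
        model = train_fn(parallel + synthetic)
    return model
```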
Hydrogels are still susceptible to fatigue fracture during multiple-cycle mechanical loads, exhibiting fatigue thresholds (i.e., the minimal fracture energy required for crack propagation under cyclic loads) below 100 J m⁻². [5][6][7] This limited long-term reliability has substantially hampered the practical utility of hydrogels and hydrogel-based devices, and remains a key challenge in these fields. In contrast, biological tissues, such as skeletal muscles, tendons, and cartilage, are well known not only for their superior strength, modulus, and toughness, but also for their long-term robustness. [8][9][10] For example, skeletal muscles can sustain a high stress (i.e., 1 MPa) over millions of cycles per year without fracture, exhibiting fatigue thresholds over 1000 J m⁻², despite their high water content (≈80%). [8,11] Such unrivalled fatigue resistance originates from their hierarchically arranged collagen fibrillar micro/nanostructures. [10] Although bioinspired construction of structural materials has been promising for the design of fatigue-resistant hydrogels, [12][13][14][15][16][17] how to produce hydrogel materials with unprecedented fatigue resistance in a universal and viable manner remains an open issue. More recently, fatigue-resistant hydrogels have been fabricated by engineering crystalline domains, [12][13][14] fibril structures, [15,16] or mesoscale phase separation. [17] Ice-templated freeze-casting has been utilized as a powerful technology to impart …

Nature builds biological materials from a limited set of ingredients, yet achieves mechanical performance unmatched by artificial materials by harnessing inherent structures across multiple length scales. In contrast, synthetic material design overwhelmingly focuses on developing new compounds and fails to reproduce the mechanical properties of natural counterparts, such as fatigue resistance. Here, a simple yet general strategy to engineer conventional hydrogels with a more than 100-fold increase in fatigue thresholds is reported. This strategy is shown to be universally applicable to various hydrogel materials, including polysaccharides (e.g., alginate, cellulose), proteins (e.g., gelatin), synthetic polymers (e.g., poly(vinyl alcohol)), as well as the corresponding polymer composites. These fatigue-resistant hydrogels exhibit a record-high fatigue threshold compared with most synthetic soft materials, making them low-cost, high-performance, and durable alternatives to the soft materials used in applications such as robotics and artificial muscles.
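For clarity, the fatigue threshold quoted above is commonly characterized from cyclic crack-growth measurements; the notation below (crack length c, cycle number N, energy release rate G, threshold Γ₀) follows the standard fatigue-fracture literature rather than this abstract, so treat it as an assumed convention.

```latex
% Fatigue threshold \Gamma_0: the applied energy release rate G below which a
% pre-existing crack does not extend under cyclic loading.
\[
  \frac{\mathrm{d}c}{\mathrm{d}N} = 0 \quad \text{for } G \le \Gamma_0,
  \qquad
  \frac{\mathrm{d}c}{\mathrm{d}N} > 0 \quad \text{for } G > \Gamma_0 .
\]
```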
Cross-lingual summarization (CLS) is the task of producing a summary in one language for a source document in a different language. Existing methods simply divide this task into two steps, summarization and translation, which leads to error propagation. To address this, we present an end-to-end CLS framework, which we refer to as Neural Cross-Lingual Summarization (NCLS), for the first time. Moreover, we propose to further improve NCLS by incorporating two related tasks, monolingual summarization and machine translation, into the training process of CLS under multi-task learning. Due to the lack of supervised CLS data, we propose a round-trip translation strategy to acquire two high-quality large-scale CLS datasets based on existing monolingual summarization datasets. Experimental results show that NCLS achieves remarkable improvements over traditional pipeline methods on both English-to-Chinese and Chinese-to-English human-corrected CLS test sets. In addition, NCLS with multi-task learning can further significantly improve the quality of the generated summaries. We make our dataset and code publicly available here:

[Figure: an example illustrating monolingual summarization (MS) and round-trip translation (RTT) — an English article ("Rod Gray, 94, had been taken to hospital by ambulance after he cut his head in a fall at his home …"), its English reference summary ("Rod Gray was taken to Ipswich Hospital after falling over at home."), the corresponding Chinese reference, and a German translation of the summary.]
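As a rough illustration of how a round-trip translation strategy can turn a monolingual summarization corpus into CLS training data, the sketch below translates each summary into the target language, back-translates it, and keeps only pairs whose back-translation stays close to the original summary. The translate_fn and rouge_fn callables and the 0.45 threshold are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of a round-trip translation (RTT) filter for building CLS data.
# translate_fn and rouge_fn are injected placeholders (an MT system and a
# summary-similarity scorer such as ROUGE); the threshold is an assumption.

from typing import Callable, Iterable, List, Tuple

def build_cls_pairs(
    mono_pairs: Iterable[Tuple[str, str]],         # (article, summary) in language A
    translate_fn: Callable[[str, str, str], str],  # (text, src_lang, tgt_lang) -> text
    rouge_fn: Callable[[str, str], float],         # similarity between two summaries
    threshold: float = 0.45,
) -> List[Tuple[str, str]]:
    cls_pairs = []
    for article, summary in mono_pairs:
        summary_b = translate_fn(summary, "A", "B")    # forward translation
        summary_a = translate_fn(summary_b, "B", "A")  # back-translation
        if rouge_fn(summary, summary_a) >= threshold:  # keep reliable translations only
            cls_pairs.append((article, summary_b))     # cross-lingual (article_A, summary_B)
    return cls_pairs
```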