Stick-slip vibration is one of the main problems degrading drilling performance, limiting tool life and productivity. This type of vibration can be suppressed through many approaches, such as varying drilling parameters and using control tools. Although tremendous improvements have been made in overcoming this dysfunction, stick-slip vibration suppression remains a major challenge in the drilling industry. This paper provides an up-to-date review of stick-slip vibration behavior in drillstrings. First, the phenomena and the modeling methods of stick-slip vibration are reviewed. Then an overview of the approaches for stick-slip suppression in oilwell drillstrings is presented, grouping the references under the categories of passive vibration control and active vibration control. Literature related to passive control is grouped into optimization of bottom hole assembly (BHA) configurations, bit selection and bit redesign, and use of downhole tools. Contributions related to active control approaches for stick-slip mitigation are grouped into optimization of drilling parameters based on real-time measurements and use of active control systems. Finally, discussion and recommendations are presented.
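As context for the modeling methods surveyed, the classical way to reproduce stick-slip is a one-degree-of-freedom torsional model: the bit inertia is driven through a torsional spring and damper by a constant-speed top drive and resisted by a velocity-weakening friction torque at the bit-rock interface. The Python sketch below illustrates this generic mechanism; the parameter values and the smoothed friction law are illustrative assumptions, not taken from any specific paper in the review.

import numpy as np

# One-degree-of-freedom torsional model (illustrative values): the bit, with
# inertia J, is driven by a top drive rotating at constant speed OMEGA through
# a torsional spring K and damper C, and resisted by a velocity-weakening
# friction torque at the bit-rock interface.
J, K, C = 374.0, 473.0, 50.0            # kg m^2, N m/rad, N m s/rad
OMEGA = 10.0                            # imposed top-drive speed, rad/s
T_STATIC, T_COULOMB, V_DECAY = 8000.0, 5000.0, 1.0

def friction_torque(omega_bit):
    # Smoothly regularized Stribeck-type law: high torque near zero speed,
    # decaying toward the Coulomb level as the bit speeds up.
    magnitude = T_COULOMB + (T_STATIC - T_COULOMB) * np.exp(-abs(omega_bit) / V_DECAY)
    return magnitude * np.tanh(omega_bit / 0.05)

# Explicit Euler integration of  J*domega/dt = K*(OMEGA*t - phi) + C*(OMEGA - omega) - T_f(omega)
dt, t_end = 1e-4, 40.0
phi, omega, speeds = 0.0, 0.0, []
for step in range(int(t_end / dt)):
    t = step * dt
    torque = K * (OMEGA * t - phi) + C * (OMEGA - omega) - friction_torque(omega)
    omega += dt * torque / J
    phi += dt * omega
    speeds.append(omega)

print("bit speed range [rad/s]:", min(speeds), max(speeds))  # near-zero minima indicate sticking phases

In this kind of model the stick-slip cycle emerges because the friction torque drops as the bit accelerates: the spring winds up while the bit is nearly stationary, then releases in a burst of rotation, which is the behavior the reviewed suppression approaches aim to damp out.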
While recent neural machine translation approaches have delivered state-of-the-art performance for resource-rich language pairs, they suffer from data scarcity for resource-scarce language pairs. Although this problem can be alleviated by exploiting a pivot language to bridge the source and target languages, the source-to-pivot and pivot-to-target translation models are usually trained independently. In this work, we introduce a joint training algorithm for pivot-based neural machine translation. We propose three methods to connect the two models and enable them to interact with each other during training. Experiments on the Europarl and WMT corpora show that joint training of source-to-pivot and pivot-to-target models leads to significant improvements over independent training across various language pairs.
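To illustrate what joint training changes relative to independent training, the sketch below sums the two translation losses and adds a connection term that ties the pivot-side embedding tables of the two models together. The seq2seq interfaces (nll(), pvt_embedding) are hypothetical placeholders, and the embedding-agreement connection is only one plausible choice, not necessarily one of the three methods proposed in the paper.

import torch

def joint_step(src2pvt, pvt2tgt, src_pvt_batch, pvt_tgt_batch, optimizer, lam=0.1):
    # One joint update over a source-pivot batch and a pivot-target batch.
    # src2pvt / pvt2tgt are hypothetical seq2seq models exposing nll(inputs, refs)
    # (sentence-level negative log-likelihood) and a pivot-side embedding table.
    src, pvt_ref = src_pvt_batch
    pvt, tgt_ref = pvt_tgt_batch

    loss_sp = src2pvt.nll(src, pvt_ref)      # -log p(pivot | source)
    loss_pt = pvt2tgt.nll(pvt, tgt_ref)      # -log p(target | pivot)

    # Illustrative connection term: keep the two models' pivot-side embeddings
    # close, so the shared pivot language is represented consistently.
    connect = torch.sum((src2pvt.pvt_embedding.weight - pvt2tgt.pvt_embedding.weight) ** 2)

    loss = loss_sp + loss_pt + lam * connect
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The key point is that both models receive gradients from a single objective, so errors on either side of the pivot influence the other model, which independent training cannot do.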
Generating high-quality paraphrases is a fundamental yet challenging natural language processing task. Despite the effectiveness of previous work based on generative models, problems remain: recurrent neural networks suffer from exposure bias, and the generated sentences are often unrealistic. To overcome these challenges, we propose the first end-to-end conditional generative architecture that produces paraphrases via adversarial training and does not depend on extra linguistic information. Extensive experiments on four public datasets demonstrate that the proposed method achieves state-of-the-art results, outperforming previous generative architectures on both automatic metrics (BLEU, METEOR, and TER) and human evaluations.
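To make the adversarial setup concrete, the sketch below alternates discriminator and generator updates for a conditional paraphrase generator. The generator is assumed to emit a differentiable (e.g., Gumbel-softmax) representation of the paraphrase so gradients can flow through the discrete outputs; the generator/discriminator interfaces are hypothetical, and the loss is standard GAN binary cross-entropy rather than the paper's exact objective.

import torch
import torch.nn.functional as F

def adversarial_step(generator, discriminator, g_opt, d_opt, src, real_para):
    # generator(src) is assumed to return a differentiable representation of a
    # paraphrase conditioned on the source sentence; discriminator(src, para)
    # returns one logit per example. Both interfaces are hypothetical.

    # Discriminator update: real (source, human paraphrase) pairs vs. generated ones.
    fake_para = generator(src).detach()
    real_logit = discriminator(src, real_para)
    fake_logit = discriminator(src, fake_para)
    d_loss = F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit)) \
           + F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: generate fresh paraphrases and try to fool the discriminator.
    gen_logit = discriminator(src, generator(src))
    g_loss = F.binary_cross_entropy_with_logits(gen_logit, torch.ones_like(gen_logit))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

Because the generator is trained against a learned judge of realism rather than only token-level likelihood, this kind of objective sidesteps some of the exposure-bias issues of teacher-forced recurrent training.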