Circulating tumor-derived extracellular vesicles (EVs) have emerged as a promising source for identifying cancer biomarkers for early cancer detection. However, the clinical utility of EVs has thus far been limited by the fact that most EV isolation methods are tedious, nonstandardized, and require bulky instrumentation such as ultracentrifugation (UC). Here, we report a size-based EV isolation tool called ExoTIC (exosome total isolation chip), which is simple, easy-to-use, modular, and facilitates high-yield and high-purity EV isolation from biofluids. ExoTIC achieves an EV yield ~4–1000-fold higher than that with UC, and EV-derived protein and microRNA levels are well-correlated between the two methods. Moreover, we demonstrate that ExoTIC is a modular platform that can sort a heterogeneous population of cancer cell line EVs based on size. Further, we utilize ExoTIC to isolate EVs from cancer patient clinical samples, including plasma, urine, and lavage, demonstrating the device’s broad applicability to cancers and other diseases. Finally, the ability of ExoTIC to efficiently isolate EVs from small sample volumes opens up avenues for preclinical studies in small animal tumor models and for point-of-care EV-based clinical testing from fingerprick quantities (10–100 μL) of blood.
Recently, neural network models for natural language processing tasks have attracted increasing attention for their ability to alleviate the burden of manual feature engineering. In this paper, we propose a novel neural network model for Chinese word segmentation called Max-Margin Tensor Neural Network (MMTNN). By exploiting tag embeddings and tensor-based transformation, MMTNN has the ability to model complicated interactions between tags and context characters. Furthermore, a new tensor factorization approach is proposed to speed up the model and avoid overfitting. Experiments on the benchmark dataset show that our model outperforms previous neural network models and achieves competitive performance with minimal feature engineering. Although developed for Chinese word segmentation, MMTNN can be easily generalized and applied to other sequence labeling tasks.
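As a rough illustration of the tensor-based transformation and its low-rank factorization described above, the NumPy sketch below computes one hidden layer of the form tanh(aᵀ(PᵢQᵢ)a + Wa + b). The dimensions, rank, and variable names are illustrative assumptions rather than the paper's exact architecture or hyperparameters.

```python
# Minimal NumPy sketch of a factorized tensor layer in the spirit of MMTNN.
# Shapes and names (input_dim, hidden_dim, rank) are illustrative assumptions,
# not the paper's exact settings.
import numpy as np

rng = np.random.default_rng(0)

input_dim, hidden_dim, rank = 20, 8, 4   # rank << input_dim gives the speed-up

# A full tensor layer would need a (hidden_dim, input_dim, input_dim) tensor.
# The factorization replaces each slice T_i with P_i @ Q_i, where
# P_i is (input_dim, rank) and Q_i is (rank, input_dim).
P = rng.normal(scale=0.1, size=(hidden_dim, input_dim, rank))
Q = rng.normal(scale=0.1, size=(hidden_dim, rank, input_dim))
W = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
b = np.zeros(hidden_dim)

def tensor_layer(a):
    """a: concatenation of context-character and tag embeddings, shape (input_dim,)."""
    # Tensor term: z_i = a^T (P_i Q_i) a, computed without forming P_i Q_i explicitly.
    tensor_term = np.einsum('j,ijr,irk,k->i', a, P, Q, a)
    return np.tanh(tensor_term + W @ a + b)

a = rng.normal(size=input_dim)            # stand-in for the concatenated embeddings
h = tensor_layer(a)
print(h.shape)                            # (hidden_dim,)
```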
Health care systems primarily focus on patients after they present with disease, not before. The emerging field of precision health encourages disease prevention and earlier detection by monitoring health and disease based on an individual’s risk. Active participation in health care can be encouraged with continuous health-monitoring devices, providing a higher-resolution picture of human health and disease. However, the development of monitoring technologies must prioritize the collection of actionable data and long-term user engagement.
Most neural sequence-to-sequence (seq2seq) models for grammatical error correction (GEC) have two limitations: (1) a seq2seq model may not generalize well when trained on only limited error-corrected data; (2) a seq2seq model may fail to completely correct a sentence with multiple errors through normal seq2seq inference. We attempt to address these limitations by proposing a fluency boost learning and inference mechanism. Fluency boost learning generates fluency-boost sentence pairs during training, enabling the error correction model to learn how to improve a sentence's fluency from more instances, while fluency boost inference allows the model to correct a sentence incrementally through multi-round seq2seq inference until the sentence's fluency stops increasing. Experiments show that our approach improves the performance of seq2seq models for GEC, achieving state-of-the-art results on both the CoNLL-2014 and JFLEG benchmark datasets.
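The multi-round inference idea can be sketched as a simple loop. In the sketch below, `correct_once` and `fluency` are hypothetical stand-ins for a trained seq2seq GEC model and a language-model-based fluency score; they are assumptions for illustration, not the paper's actual implementation.

```python
# Hedged sketch of a multi-round (fluency boost) inference loop: keep applying
# the correction model until the sentence's fluency stops increasing.
from typing import Callable

def fluency_boost_inference(sentence: str,
                            correct_once: Callable[[str], str],
                            fluency: Callable[[str], float],
                            max_rounds: int = 5) -> str:
    """Apply the correction model repeatedly until fluency stops increasing."""
    current = sentence
    current_score = fluency(current)
    for _ in range(max_rounds):
        candidate = correct_once(current)       # one seq2seq decoding pass
        candidate_score = fluency(candidate)
        if candidate_score <= current_score:    # fluency stopped increasing
            break
        current, current_score = candidate, candidate_score
    return current
```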
Previous studies on lexical substitution tend to obtain substitute candidates by finding the target word's synonyms in lexical resources (e.g., WordNet) and then rank the candidates based on their contexts. These approaches have two limitations: (1) they are likely to overlook good substitute candidates that are not listed as synonyms of the target word in the lexical resources; (2) they fail to take into account the substitution's influence on the global context of the sentence. To address these issues, we propose an end-to-end BERT-based lexical substitution approach that can propose and validate substitute candidates without using any annotated data or manually curated resources. Our approach first applies dropout to the target word's embedding to partially mask the word, allowing BERT to take balanced consideration of the target word's semantics and context when proposing substitute candidates, and then validates the candidates based on their substitution's influence on the global contextualized representation of the sentence. Experiments show that our approach performs well in both proposing and ranking substitute candidates, achieving state-of-the-art results on both the LS07 and LS14 benchmarks.
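A minimal sketch of the candidate-proposal step using Hugging Face Transformers is shown below. The model name, dropout rate, example sentence, and top-k value are illustrative assumptions, and the validation step based on the sentence's global contextualized representation is omitted.

```python
# Sketch: propose substitute candidates by applying dropout to the target
# word's embedding (instead of replacing it with [MASK]) and reading BERT's
# predictions at that position. Settings here are assumptions, not the
# paper's exact configuration.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

sentence = "The movie was absolutely wonderful ."
target = "wonderful"

inputs = tokenizer(sentence, return_tensors="pt")
target_idx = inputs["input_ids"][0].tolist().index(
    tokenizer.convert_tokens_to_ids(target))

with torch.no_grad():
    # Look up word embeddings, then partially mask the target word by applying
    # dropout to its embedding only.
    embeds = model.get_input_embeddings()(inputs["input_ids"])
    embeds[0, target_idx] = torch.nn.functional.dropout(
        embeds[0, target_idx], p=0.3, training=True)
    logits = model(inputs_embeds=embeds,
                   attention_mask=inputs["attention_mask"]).logits

# Top-k token predictions at the target position serve as substitute candidates.
top_ids = logits[0, target_idx].topk(10).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```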