Forkhead box P3 (FOXP3) is a specific marker of regulatory T cells (Tregs) that is also expressed in tumour cells. Previous studies have revealed that FOXP3 can promote metastasis in several types of cancer, including non-small cell lung cancer (NSCLC); however, the underlying mechanism remains unclear. The aim of the present study was to investigate the effect of FOXP3 on vascular endothelial growth factor (VEGF), epithelial-to-mesenchymal transition (EMT) and the Notch1/Hes1 pathway in NSCLC. After FOXP3 small interfering RNAs (siRNAs) were transfected into A549 cells, the expression of FOXP3 mRNA and protein was determined by reverse transcription-quantitative PCR and western blotting. Cell migration and invasion were analyzed by Transwell assays. The concentrations of matrix metalloproteinase (MMP)-2, MMP-9 and VEGF in the cell supernatant were evaluated by ELISA. The expression of the relevant proteins involved in EMT and the Notch1/Hes1 pathway was assessed via western blotting. Additionally, the expression of FOXP3, CD31 and E-cadherin was detected by immunohistochemical staining of 55 human NSCLC tissue samples. The results demonstrated that FOXP3 knockdown significantly inhibited cell migratory and invasive abilities, decreased the concentrations of MMP-2, MMP-9 and VEGF, downregulated the protein expression of vimentin, N-cadherin, Notch1 and Hes family BHLH transcription factor 1 (Hes1), and upregulated the protein expression of E-cadherin. Furthermore, FOXP3 expression was positively associated with CD31+ vascular endothelial cells and negatively correlated with E-cadherin in NSCLC tissues. In addition, the Notch1/Hes1 pathway inhibitor DAPT significantly downregulated the expression of FOXP3 in a dose-dependent manner. Taken together, these findings demonstrate that FOXP3 may facilitate the invasive and migratory abilities of NSCLC cells by regulating the angiogenic factor VEGF, EMT and the Notch1/Hes1 pathway.
Named entity recognition (NER) is an important task in natural language processing that requires determining entity boundaries and classifying entities into predefined categories. For low-resource languages, most state-of-the-art systems require tens of thousands of annotated sentences to achieve high performance; however, minimal annotated data are available for Uyghur and Hungarian (UH languages) NER tasks. Each task also has its own specificities, and differences in vocabulary and word order across languages make the problem challenging. In this paper, we present an effective solution that provides a meaningful and easy-to-use feature extractor for named entity recognition: fine-tuning a pre-trained language model. Specifically, we propose a fine-tuning method for low-resource languages that constructs a fine-tuning dataset through data augmentation, adds the dataset of a high-resource language, and finally fine-tunes a cross-lingual pre-trained model on the combined dataset. In addition, we propose an attention-based fine-tuning strategy that uses symmetry to better select relevant semantic and syntactic information from pre-trained language models and applies these symmetry features to named entity recognition. We evaluated our approach on Uyghur and Hungarian datasets, where it performed strongly compared with several competitive baselines. We close with an overview of the available resources for named entity recognition and some open research questions.
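To make the pipeline described in this abstract concrete, below is a minimal sketch of fine-tuning a cross-lingual pre-trained model for token-level NER on a pooled low-/high-resource batch. The abstract does not name the pre-trained model, the augmentation method, or the label set, so XLM-RoBERTa, a simple punctuation-insertion augmented copy, and a toy BIO label scheme are stand-in assumptions here, not the authors' exact configuration.

```python
# Hedged sketch: cross-lingual NER fine-tuning with Hugging Face transformers.
# Model name, labels, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

LABELS = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(LABELS)
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Toy mixed batch: an original sentence plus an augmented copy, mimicking
# how augmented low-resource data is pooled with other training data.
sentences = [
    "Budapest is the capital of Hungary .".split(),
    "Budapest , is the capital of Hungary .".split(),  # augmented copy
]
word_labels = [
    ["B-LOC", "O", "O", "O", "O", "B-LOC", "O"],
    ["B-LOC", "O", "O", "O", "O", "O", "B-LOC", "O"],
]

enc = tokenizer(sentences, is_split_into_words=True,
                return_tensors="pt", padding=True)

# Align word-level labels to subword tokens; special tokens, padding, and
# continuation pieces get -100 so the loss ignores them.
aligned = []
for i, labels in enumerate(word_labels):
    ids, prev = [], None
    for word_id in enc.word_ids(batch_index=i):
        if word_id is None or word_id == prev:
            ids.append(-100)
        else:
            ids.append(LABELS.index(labels[word_id]))
        prev = word_id
    aligned.append(ids)

# One fine-tuning step on the batch.
loss = model(**enc, labels=torch.tensor(aligned)).loss
loss.backward()
optimizer.step()
```

In a full run this step would loop over the combined augmented low-resource and high-resource corpora; the -100 alignment trick is the standard way to keep subword tokenization from corrupting word-level NER labels.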
In recent years, text sentiment analysis has attracted increasing attention and has gradually become a research hotspot in information extraction, data mining, Natural Language Processing (NLP) and other fields. With the gradual popularization of the Internet, sentiment analysis of Uyghur texts has great research and application value for online public opinion. For low-resource languages, most state-of-the-art systems require tens of thousands of annotated sentences to achieve high performance; however, minimal annotated data are available for Uyghur sentiment analysis. Each task also has its own specificities, and differences in vocabulary and word order across languages make the problem challenging. In this paper, we present an effective solution that provides a meaningful and easy-to-use feature extractor for sentiment analysis: a pre-trained language model with a BiLSTM layer. First, data augmentation is carried out with AEDA (An Easier Data Augmentation), and the augmented dataset is constructed to improve the performance of text classification. Then, the pre-trained LaBSE model is used to encode the input data, and a BiLSTM layer is used to learn additional context information. Finally, the validity of the model is verified on a two-class sentiment analysis dataset and a five-class emotion analysis dataset, where it performed strongly compared with several competitive baselines. In summary, we propose a model that combines deep learning with cross-lingual pre-training for two low-resource tasks, and we close with an overview of the resources for sentiment analysis and some open research questions.
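Below is a minimal sketch of the two technical pieces this abstract names: AEDA augmentation (which randomly inserts punctuation marks into a sentence) and a LaBSE encoder followed by a BiLSTM classifier. The insertion ratio, hidden size, label count, and classification head are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of AEDA augmentation plus a LaBSE + BiLSTM classifier,
# assuming PyTorch and the Hugging Face transformers library.
import random
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

PUNCTUATIONS = [".", ";", "?", ":", "!", ","]

def aeda(sentence: str, ratio: float = 0.3) -> str:
    """AEDA: insert 1..ratio*len(words) random punctuation marks at random positions."""
    words = sentence.split()
    n_insertions = random.randint(1, max(1, int(ratio * len(words))))
    for _ in range(n_insertions):
        words.insert(random.randint(0, len(words)), random.choice(PUNCTUATIONS))
    return " ".join(words)

class LabseBiLSTMClassifier(nn.Module):
    """LaBSE contextual embeddings -> BiLSTM -> linear classification head."""
    def __init__(self, num_labels: int = 2, hidden_size: int = 256):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("sentence-transformers/LaBSE")
        self.lstm = nn.LSTM(
            input_size=self.encoder.config.hidden_size,
            hidden_size=hidden_size,
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        # Token-level embeddings from the multilingual LaBSE encoder.
        hidden = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        out, _ = self.lstm(hidden)            # BiLSTM over the token sequence
        return self.classifier(out[:, 0, :])  # classify from the first position

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/LaBSE")
model = LabseBiLSTMClassifier(num_labels=2)
batch = tokenizer([aeda("this film was a pleasant surprise")],
                  return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
```

Because LaBSE is trained on over a hundred languages, the same encoder accepts Uyghur input unchanged; only the augmented training data and the classification head are task-specific.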