Dysregulation of a genomically imprinted gene can contribute to carcinogenesis. Here, delta-like 1 homolog (Drosophila) (DLK1), a paternally expressed gene, was found by reverse transcription-polymerase chain reaction to be significantly up-regulated in 60 (73.2%) of a total of 82 hepatocellular carcinoma (HCC) specimens. In addition, immunohistochemical staining was performed on another 88 HCC specimens, of which 50 (56.8%) cancerous tissues were scored as positive. DLK1 expression was markedly induced in the HCC cell lines Bel-7402 and MHCC-H by the demethylating agent 5-aza-2'-deoxycytidine. Furthermore, both demethylation of the DLK1 promoter (-565 to -362) and hypermethylation of the imprinting control domain in the region upstream of maternally expressed gene 3 were identified in a few HCC specimens. This implies that the up-regulation of DLK1 in HCC could be attributed to deregulated genomic DNA methylation of the imprinted domain, although the undoubtedly complex mechanisms underlying this epigenetic event remain to be further investigated in HCC. Surprisingly, the expression of DLK1 in HCC was confirmed to be monoallelic, not biallelic, in three HCC specimens carrying a single nucleotide polymorphism at T852C (rs2295660). Importantly, exogenous DLK1 significantly promoted the proliferation of SMMC-7721 cells, an HCC cell line, whereas suppression of endogenous DLK1 through RNA interference markedly inhibited cell growth, colony formation and tumorigenicity of HepG2, Hep3B and HuH-7 cells. These data suggest that DLK1, as an imprinted gene, can be significantly up-regulated in HCC through certain epigenetic events and thereby contribute to the oncogenesis of this tumor.
Multi-modal pre-training models have been intensively explored to bridge vision and language in recent years. However, most of them explicitly model the cross-modal interaction between image-text pairs, by assuming that there exists a strong semantic correlation between the text and image modalities. Since this strong assumption is often invalid in real-world scenarios, we choose to implicitly model the cross-modal correlation for large-scale multi-modal pre-training, which is the focus of the Chinese project 'WenLan' led by our team. Specifically, with the weak correlation assumption over image-text pairs, we propose a two-tower pre-training model called BriVL within the cross-modal contrastive learning framework. Unlike OpenAI CLIP, which adopts a simple contrastive learning method, we devise a more advanced algorithm by adapting the latest method MoCo to the cross-modal scenario. By building a large queue-based dictionary, our BriVL can incorporate more negative samples with limited GPU resources. We further construct a large Chinese multi-source image-text dataset called RUC-CAS-WenLan for pre-training our BriVL model. Extensive experiments demonstrate that the pre-trained BriVL model outperforms both UNITER and OpenAI CLIP on various downstream tasks.
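The queue-based cross-modal contrastive idea described above can be sketched roughly as follows. This is a minimal illustrative sketch in plain NumPy, not the authors' implementation: the function names, the temperature value, and the use of an InfoNCE-style objective with a FIFO queue of negatives are assumptions based on the MoCo framework the abstract cites.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit hypersphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def queue_infonce_loss(img_emb, txt_emb, queue, temperature=0.07):
    """Cross-modal InfoNCE loss with a MoCo-style negative queue.

    img_emb: (B, D) outputs of the image tower
    txt_emb: (B, D) outputs of the text tower (row-aligned positives)
    queue:   (K, D) text embeddings from earlier batches, used as negatives
    """
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    neg = l2_normalize(queue)
    # Positive logit: similarity of each image with its paired text.
    l_pos = np.sum(img * txt, axis=1, keepdims=True)   # (B, 1)
    # Negative logits: similarity with every queued text embedding.
    l_neg = img @ neg.T                                # (B, K)
    logits = np.concatenate([l_pos, l_neg], axis=1) / temperature
    # InfoNCE: the positive sits at column 0 of each row.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[:, 0].mean()

def update_queue(queue, new_txt_emb, max_size):
    """FIFO enqueue of the current batch; oldest entries fall off."""
    queue = np.concatenate([queue, l2_normalize(new_txt_emb)], axis=0)
    return queue[-max_size:]

# Toy usage with random data (dimensions are illustrative).
rng = np.random.default_rng(0)
B, D, K = 4, 16, 64
img = rng.standard_normal((B, D))
txt = img + 0.1 * rng.standard_normal((B, D))  # loosely correlated pairs
queue = rng.standard_normal((K, D))
loss = queue_infonce_loss(img, txt, queue)
queue = update_queue(queue, txt, max_size=K)
```

Decoupling the negative pool from the batch size is the key design point: the queue lets the number of negatives K grow far beyond what fits in one GPU batch, which is what the abstract means by incorporating more negative samples with limited GPU resources.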