Inactivation of von Hippel–Lindau (VHL) is critical in clear cell renal cell carcinoma (ccRCC) and VHL syndrome. VHL loss stabilizes hypoxia-inducible factor α (HIFα) and other substrate proteins, which together drive various tumor-promoting pathways. However, the molecular consequences of VHL restoration in VHL-defective ccRCC cells are inadequately characterized, and the identities of HIF-independent VHL substrates remain elusive. We restored VHL expression in 786-O cells and performed transcriptome, proteome, and ubiquitome profiling to assess the molecular impact. Transcriptome and proteome analyses revealed that VHL restoration downregulated hypoxia signaling, glycolysis, E2F targets, and mTORC1 signaling, and upregulated fatty acid metabolism. Co-analysis of the proteome and ubiquitome, together with the ccRCC CPTAC data, identified 57 proteins that were ubiquitinated and downregulated upon VHL restoration and upregulated in human ccRCC. Among them, we confirmed the reduction of TGFBI (ubiquitinated at K676) and NFKB2 (ubiquitinated at K72 and K741) upon VHL re-expression in 786-O cells. An immunoprecipitation assay demonstrated a physical interaction between VHL and NFKB2, and K72 of NFKB2 affected NFKB2 stability in a VHL-dependent manner. Taken together, our study generates a comprehensive molecular catalog of a VHL-restored 786-O model and provides a list of putative VHL-dependent ubiquitination substrates, including TGFBI and NFKB2, for future investigation.
The success of deep neural networks (DNNs) in machine perception applications such as image classification and speech recognition comes at the cost of high computation and storage complexity. Inference with uncompressed large-scale DNN models can only run in the cloud, with extra communication latency back and forth between the cloud and end devices, while compressed DNN models achieve real-time inference on end devices at the price of lower predictive accuracy. To get the best of both worlds (latency and accuracy), we propose CacheNet, a model caching framework. CacheNet caches low-complexity models on end devices and high-complexity (or full) models on edge or cloud servers. By exploiting temporal locality in streaming data, it achieves a high cache hit rate, and consequently shorter latency, with no or only a marginal decrease in prediction accuracy. Experiments on CIFAR-10 and FVG show that CacheNet is 58–217% faster than baseline approaches that run inference tasks on end devices or edge servers alone.
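The caching idea above can be illustrated with a minimal sketch. The confidence-threshold fallback policy, the function names, and the toy models below are all illustrative assumptions, not the actual CacheNet algorithm: a low-complexity model answers on the end device when it is confident (a cache hit), and otherwise the request falls back to the full model on the edge or cloud server (a cache miss).

```python
# Hypothetical sketch of CacheNet-style cached inference.
# The confidence-based fallback rule and all names here are
# illustrative assumptions, not the paper's actual method.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Prediction:
    label: str
    confidence: float

def cached_inference(
    frames: List[str],
    device_model: Callable[[str], Prediction],   # low-complexity cached model
    server_model: Callable[[str], Prediction],   # full model on edge/cloud
    threshold: float = 0.8,
) -> Tuple[List[str], int]:
    """Try the cached on-device model first; fall back to the server
    model when its confidence is below the threshold (a cache miss)."""
    labels, hits = [], 0
    for frame in frames:
        pred = device_model(frame)
        if pred.confidence >= threshold:
            hits += 1                             # cache hit: answered on device
            labels.append(pred.label)
        else:
            labels.append(server_model(frame).label)  # cache miss: remote call
    return labels, hits

# Toy stand-ins for real DNNs (purely illustrative).
def tiny_model(frame: str) -> Prediction:
    # Confident only on recently-seen content, a crude proxy
    # for the temporal locality that makes caching pay off.
    if frame.startswith("cat"):
        return Prediction("cat", 0.95)
    return Prediction("?", 0.3)

def full_model(frame: str) -> Prediction:
    return Prediction(frame.split("_")[0], 0.99)

labels, hits = cached_inference(["cat_1", "cat_2", "dog_1"], tiny_model, full_model)
# Two frames are served on-device (hits); one falls back to the server.
```

Because consecutive frames in a video stream tend to contain the same classes, most requests are served by the small cached model, which is where the latency savings come from.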