The p38 mitogen-activated protein kinase (MAPK) pathway has been implicated in both the suppression and the promotion of tumorigenesis. How these two opposing functions of p38 operate in vivo to influence cancer development remains unclear. We previously reported that a p38 downstream kinase, p38-regulated/activated kinase (PRAK), suppresses tumor initiation and promotion by mediating oncogene-induced senescence in a murine skin carcinogenesis model. Here, using the same model, we show that once tumors have formed, PRAK promotes their growth and progression. Further studies identify PRAK as a novel host factor essential for tumor angiogenesis. In response to tumor-secreted proangiogenic factors, PRAK is activated by p38 through a vascular endothelial growth factor receptor 2 (VEGFR2)-dependent mechanism in host endothelial cells, where it mediates cell migration toward tumors and the incorporation of these cells into the tumor vasculature, at least in part by regulating the phosphorylation and activation of focal adhesion kinase (FAK) and cytoskeletal reorganization. These findings uncover a novel signaling circuit essential for endothelial cell motility and tumor angiogenesis. Moreover, we demonstrate that the tumor-suppressing and tumor-promoting functions of the p38-PRAK pathway are temporally and spatially separated during cancer development in vivo, depending on the stimulus, the tissue type, and the stage of cancer development in which the pathway is activated.
Many companies are deploying services based largely on machine-learning algorithms for the sophisticated processing of large amounts of data, for both consumers and industry. The most popular state-of-the-art machine-learning algorithms are Convolutional and Deep Neural Networks (CNNs and DNNs), which are known to be computationally and memory intensive. A number of neural network accelerators have recently been proposed that offer a high ratio of computational capacity to area but remain hampered by memory accesses. However, unlike the memory wall faced by processors on general-purpose workloads, the memory footprint of CNNs and DNNs, while large, is not beyond the capability of the on-chip storage of a multi-chip system. This property, combined with the algorithmic characteristics of CNNs and DNNs, can lead to high internal bandwidth and low external communication, which in turn enables a high degree of parallelism at reasonable area cost. In this article, we introduce a custom multi-chip machine-learning architecture along those lines and evaluate its performance with electrical and optical inter-chip interconnects separately. We show that, on a subset of the largest known neural network layers, a 64-chip system can achieve a speedup of 656.63x over a GPU and reduce energy by 184.05x on average. We implement the node down to place and route at 28 nm; it contains a combination of custom storage and computational units, with electrical inter-chip interconnects.
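To make the storage argument concrete, here is a minimal back-of-envelope sketch in Python. All capacities and layer dimensions below are illustrative assumptions (the 36 MB per-chip eDRAM figure, the 16-bit weight width, and the 25,088 x 4,096 fully connected layer are hypothetical, not values taken from the article); the point is only that a large layer's weights can be held entirely in the aggregate on-chip storage of a 64-chip system, so inter-chip traffic reduces to activations.

```python
# Back-of-envelope check of the abstract's central claim: a large DNN/CNN
# layer's weight footprint is big, but not beyond the aggregate on-chip
# storage of a multi-chip system. All figures are illustrative assumptions.

BYTES_PER_WEIGHT = 2          # assume 16-bit fixed-point weights
ON_CHIP_MB_PER_NODE = 36      # assumed per-chip eDRAM capacity (illustrative)
NUM_NODES = 64                # the 64-chip system evaluated in the article

def layer_weight_mb(num_inputs: int, num_outputs: int) -> float:
    """Weight footprint in MB of a fully connected layer."""
    return num_inputs * num_outputs * BYTES_PER_WEIGHT / 2**20

def fits_on_chip(weight_mb: float, num_nodes: int = NUM_NODES) -> bool:
    """True if the weights fit in aggregate on-chip storage, so only
    activations (not weights) cross chip boundaries at run time."""
    return weight_mb <= ON_CHIP_MB_PER_NODE * num_nodes

# Hypothetical large fully connected layer: 25,088 inputs x 4,096 outputs.
mb = layer_weight_mb(25_088, 4_096)
print(f"layer weights: {mb:.1f} MB, "
      f"fit across {NUM_NODES} chips: {fits_on_chip(mb)}")
# Under these assumptions: ~196 MB of weights vs. 2,304 MB of aggregate
# eDRAM -> the weights stay resident on-chip, and external communication
# is limited to neuron activations (high internal, low external bandwidth).
```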