Virion infectivity factor (Vif) is an accessory protein encoded by HIV-1 and is critical for viral infection of the host CD4+ T cell population. Vif induces ubiquitination and subsequent degradation of Apo3G, a cytosolic cytidine deaminase that otherwise targets the retroviral genome. Interaction of Vif with the cellular Cullin5-based E3 ubiquitin ligase requires a conserved BC box and upstream residues that are part of the conserved H-(Xaa)5-C-(Xaa)17-18-C-(Xaa)3-5-H (HCCH) motif. The HCCH motif is involved in stabilizing the Vif-Cullin5 interaction, but the exact role of the conserved His and Cys residues remains elusive. In this report, we find that full-length HIV-1 Vif, as well as an HCCH peptide, is capable of binding zinc with high specificity. Zinc binding induces a conformational change that leads to the formation of large protein aggregates. EDTA reversed aggregation and regenerated the apoprotein conformation. Cysteine modification studies with the HCCH peptide suggest that C114 is critical for stabilizing the fold of the apopeptide, and that C133 is located in a solvent-exposed region with no definite secondary structure. Selective alkylation of C133 reduced the metal-binding specificity of the HCCH peptide, allowing cobalt to bind at rates comparable to those of zinc. This study demonstrates that the HCCH motif of HIV-1 Vif is a unique metal-binding domain capable of mediating protein-protein interactions in the presence of zinc, and adds to a growing list of examples in which metal ion binding induces protein misfolding and/or aggregation.

Keywords: aggregation | cullin ubiquitin ligase | metal-binding protein
have demanded increasingly powerful computer systems. High-performance computing (HPC) has steadily pushed supercomputers to greater computational capabilities, with petascale computing (10^15 floating-point operations per second [flops]) first achieved in 2008.[1] The current fastest supercomputer is capable of 33.86 petaflops (see www.top500.org). These HPC systems perform massive computations to drive important scientific experiments leading to impactful discoveries, from designing more efficient fuels and engines to engineering safer bridges and buildings, from modeling global climate phenomena to exploring the origins of the universe.

The next HPC milestone is developing an exascale supercomputer that can achieve more than an exaflop (10^18, or one billion billion, flops) on critical scientific computing applications. Such exascale supercomputers are envisioned to comprise at least 100,000 interconnected servers or nodes, implying that each node must deliver an individual computing capability of more than 10 teraflops (Tflops) on real applications. Note that a modern high-end discrete GPU today achieves a peak of only about three double-precision teraflops.

The challenges associated with exascale computing, however, extend far beyond merely achieving a certain number of floating-point calculations per second. To feed such high levels of computation, both the system's memory bandwidth and internode communication bandwidth must increase dramatically beyond current levels. The energy efficiency of the supercomputer must improve by orders of magnitude to enable operation within practical datacenter power-delivery capabilities of a few tens of megawatts. With more than 100,000 nodes, significant advances in resilience and reliability are also required to keep the overall machine up and running. In this article, we describe AMD Research's vision for exascale computing, and in particular, how we see heterogeneous computing as the path forward.
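The per-node budget implied above follows from simple arithmetic; a minimal sketch, using the article's own figures (1 exaflop target, at least 100,000 nodes, ~3 double-precision Tflops for a high-end GPU):

```python
# Back-of-the-envelope check of the exascale per-node compute budget.
EXAFLOP = 1e18        # target: more than one exaflop on real applications
NUM_NODES = 100_000   # envisioned minimum node count (from the article)

per_node_tflops = EXAFLOP / NUM_NODES / 1e12  # convert flops -> teraflops
print(per_node_tflops)  # -> 10.0 Tflops per node

# A modern high-end discrete GPU peaks at roughly 3 double-precision
# Tflops, so each node needs well over 3x that -- and peak numbers are
# rarely sustained on real workloads.
GPU_PEAK_TFLOPS = 3.0
print(per_node_tflops / GPU_PEAK_TFLOPS)  # gap factor vs. one such GPU
```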
AMD's vision for exascale computing

An exascale system consists of the hardware implementing the computational resources, as well as the software needed to efficiently write, tune, and execute applications on the hardware.
This paper examines energy management in a heterogeneous processor consisting of an integrated CPU-GPU for high-performance computing (HPC) applications. Energy management for HPC applications is challenged by their uncompromising performance requirements and complicated by the need for coordinating energy management across distinct core types: a new and less understood problem.

We examine the intra-node CPU-GPU frequency sensitivity of HPC applications on tightly coupled CPU-GPU architectures as the first step in understanding power and performance optimization for a heterogeneous multi-node HPC system. The insights from this analysis form the basis of a coordinated energy management scheme, called DynaCo, for integrated CPU-GPU architectures. We implement DynaCo on a modern heterogeneous processor and compare its performance to a state-of-the-art power- and performance-management algorithm. DynaCo improves the measured average energy-delay-squared (ED^2) product by up to 30% with less than 2% average performance loss across several exascale and other HPC workloads.
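The ED^2 metric reported above penalizes slowdowns quadratically, so a scheme can only win by saving energy without sacrificing much performance. A minimal sketch of how such an improvement is computed; the energy and runtime numbers below are illustrative assumptions, not measurements from the paper:

```python
def ed2(energy_joules: float, runtime_s: float) -> float:
    """Energy-delay-squared product: lower is better.

    Runtime enters squared, so a small performance loss must be
    outweighed by a proportionally larger energy saving.
    """
    return energy_joules * runtime_s ** 2

# Hypothetical comparison: baseline power manager vs. a coordinated scheme
# that saves substantial energy at a slight (2%) performance cost.
baseline = ed2(energy_joules=500.0, runtime_s=10.0)
coordinated = ed2(energy_joules=360.0, runtime_s=10.2)

improvement = 1.0 - coordinated / baseline
print(f"ED^2 improvement: {improvement:.1%}")  # -> about 25.1% here
```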