SUMMARY
The ability of a bacterial pathogen to monitor available carbon sources in host tissues provides a clear fitness advantage. In the group A streptococcus (GAS), the virulence regulator Mga contains homology to phosphotransferase system (PTS) regulatory domains (PRDs) found in sugar operon regulators. Here we show that Mga was phosphorylated in vitro by the PTS components EI/HPr at conserved PRD histidines. A ∆ptsI (EI-deficient) GAS mutant exhibited decreased Mga activity. However, PTS-mediated phosphorylation inhibited Mga-dependent transcription of emm in vitro. Using alanine (unphosphorylated) and aspartate (phosphomimetic) mutations of PRD histidines, we establish that a doubly phosphorylated PRD1 phosphomimetic (D/DMga4) is completely inactive in vivo, shutting down expression of the Mga regulon. Although D/DMga4 is still able to bind DNA in vitro, homo-multimerization of Mga is disrupted and the protein is unable to activate transcription. PTS-mediated regulation of Mga activity appears to be important for pathogenesis, as bacteria expressing either nonphosphorylated (A/A) or phosphomimetic (D/D) PRD1 Mga mutants were attenuated in a model of GAS invasive skin disease. Thus, PTS-mediated phosphorylation of Mga may allow the bacteria to modulate virulence gene expression in response to carbohydrate status. Furthermore, PRD-containing virulence regulators (PCVRs) appear to be widespread in Gram-positive pathogens.
Binarized Neural Networks (BNNs) remove bitwidth redundancy in classical CNNs by using a single bit (-1/+1) for network parameters and intermediate representations, which greatly reduces off-chip data transfer and storage overhead. However, a large amount of computational redundancy still exists in BNN inference. By analyzing local properties of images and the learned BNN kernel weights, we observe an average of ∼78% input similarity and ∼59% weight similarity among weight kernels, measured by our proposed metric in common network architectures. Thus there does exist redundancy that can be exploited to further reduce the amount of on-chip computation. Motivated by this observation, in this paper we propose two types of fast and energy-efficient architectures for BNN inference. We also provide analysis and insights for choosing the better of the two strategies for different datasets and network models. By reusing the results from previous computation, many cycles for data buffer access and computation can be skipped. Through experiments, we demonstrate that 80% of the computation and 40% of the buffer accesses can be skipped by exploiting BNN similarity. Thus, our design achieves a 17% reduction in total power consumption, a 54% reduction in on-chip power consumption, and a 2.4× maximum speedup, compared to a baseline without our reuse technique. Our design is also 1.9× more area-efficient than the state-of-the-art BNN inference design. We believe our deployment of BNNs on FPGAs points toward a promising future of running deep learning models on mobile devices.
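The abstract's exact similarity metric is not given; a minimal sketch of one natural choice, assuming similarity is measured as the fraction of matching bit positions between {-1,+1} kernels (the function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def bit_similarity(a, b):
    """Fraction of positions where two {-1,+1} tensors agree.

    Positions that agree need no recomputation when reusing a
    previous result, which is the intuition behind skipping work.
    """
    return float(np.mean(a == b))

# Illustrative example: random binarized 3x3 kernels
rng = np.random.default_rng(0)
kernels = rng.choice([-1, 1], size=(8, 3, 3))

# Average pairwise weight similarity among the 8 kernels
pairs = [(i, j) for i in range(8) for j in range(i + 1, 8)]
avg_sim = np.mean([bit_similarity(kernels[i], kernels[j]) for i, j in pairs])
print(f"average pairwise similarity: {avg_sim:.3f}")
```

With real trained BNN weights (rather than random bits, as here), the paper reports this kind of average reaching ∼59% for weights and ∼78% for successive inputs, which is what makes result reuse profitable.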