Purpose: To investigate the effects of Helicobacter pylori (H. pylori) CagA and the urease metabolite NH4Cl on mucin expression in AGS cells.

Materials and Methods: AGS cells were transfected with CagA and/or treated with different concentrations of NH4Cl. Mucin gene and protein expression was assessed by qPCR and immunofluorescence assays, respectively.

Results: CagA significantly upregulated MUC5AC, MUC2, and MUC5B expression in AGS cells, but did not affect E-cadherin or MUC6 expression. MUC5AC, MUC6, and MUC2 expression in AGS cells increased with increasing NH4Cl concentration, peaking at 15 mM. MUC5B mRNA expression at an NH4Cl concentration of 15 mM was significantly higher than that at 0, 5, and 10 mM. No changes in E-cadherin expression were noted in NH4Cl-treated AGS cells, except at 20 mM. In CagA-transfected AGS cells, MUC5AC, MUC2, and MUC6 mRNA expression at an NH4Cl concentration of 15 mM was significantly higher than that at 0 mM, and decreased at higher concentrations. MUC5B mRNA expression increased with increasing NH4Cl concentration and was significantly higher than that in untreated cells. No significant change in E-cadherin mRNA expression was observed in CagA-transfected AGS cells. Immunofluorescence assays confirmed the observed changes.

Conclusion: H. pylori may affect the expression of MUC5AC, MUC2, MUC5B, and MUC6 in AGS cells via CagA and/or NH4Cl, but not that of E-cadherin.
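The abstract reports mRNA levels assessed by qPCR. As a minimal sketch of how such relative-expression comparisons are commonly computed, the snippet below implements the standard 2^(-ΔΔCt) method; the abstract does not state which reference gene or normalization the authors used, so the gene roles and Ct values here are hypothetical illustrations only.

```python
def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression of a target gene (e.g. MUC5AC) in treated
    vs. control cells by the 2^(-ddCt) method, normalized to a
    reference gene (hypothetical here; not specified in the abstract)."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # normalize treated sample
    d_ct_control = ct_target_control - ct_ref_control    # normalize control sample
    dd_ct = d_ct_treated - d_ct_control                  # treated relative to control
    return 2 ** (-dd_ct)

# Illustrative values: target crosses threshold 2 cycles earlier after
# treatment while the reference gene is unchanged -> ~4-fold upregulation.
print(fold_change(22.0, 18.0, 24.0, 18.0))  # 4.0
```

A fold change above 1 indicates upregulation relative to control, matching the direction of the MUC5AC/MUC2/MUC5B changes reported above.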
Market opportunities for machine learning have attracted a wide range of both established companies and startups to develop and tape out their own accelerators. Many of these new products have unprecedented scale, with multiple reticle-sized chips in leading process nodes and custom interconnect forming enormous computing systems; even accelerators for embedded devices are performing trillions of operations per second. This talk will discuss some of the main challenges facing large general-purpose accelerators, including multi-die scaling, model storage options, exploiting sparsity, achieving strong scaling, improving utilization, and choosing benchmarks. Furthermore, research into solving these problems is extremely challenging, as most research projects are still confined to tiny multiproject-wafer (MPW) test chips in older technology nodes due to practical constraints such as cost and complexity. This talk will conclude by discussing strategies for small teams to continue to conduct relevant research as the complexity gap between research and product continues to widen.

Brian Zimmer is a Senior Research Scientist with the Circuits Research Group at NVIDIA in Santa Clara, CA. He received the M.S. and Ph.D. degrees in electrical engineering and computer sciences from the University of California at Berkeley, in 2012 and 2015, respectively. His research interests include soft-error resilience, energy-efficient digital design, low-voltage static random-access memory (SRAM) design, machine learning accelerators, productive design methodologies, and variation tolerance.

The forum provides a comprehensive full-stack (hardware and software) view of ML acceleration from cloud to edge. The first talk focuses on the main design and benchmarking challenges facing large general-purpose accelerators, including multi-die scaling, and describes strategies for conducting relevant research as the complexity gap between research prototype and product continues to widen.
The second talk looks at how to leverage and specialize the open-source RISC-V ISA for edge ML, exploring the trade-offs between different forms of acceleration such as lightweight ISA extensions and tightly-coupled memory accelerators. The third talk details an approach based on a practical unified architecture for ML that can be easily "tailored" to fit different scenarios, ranging from smart watches, smartphones, and autonomous cars to the intelligent cloud. The fourth talk explores the co-design of hardware and DNN models to achieve state-of-the-art performance for real-time, extremely energy/throughput-constrained inference applications. The fifth talk deals with ML on reconfigurable logic, discussing many examples of forms of specialization implemented on FPGAs and their impact on potential applications, flexibility, performance, and efficiency. The sixth talk describes the software complexities of enabling ML APIs for various types of specialized hardware accelerators (GPUs and TPUs, including EdgeTPU). The seventh talk looks into how to optimize the train...