Lactic acid‐fermented garlic extract (LAFGE) has been shown to play a hepatoprotective role in liver diseases. This study evaluated the efficacy of a new LAFGE‐based hepatoprotective functional food product (named D‐18‐007) formulated with additional components, including l‐arginine, l‐ornithine, and leaf extracts of licorice and artichoke. In a rat model of d‐galactosamine (GalN)/LPS‐induced liver injury, survival was significantly higher in animals treated with D‐18‐007 than in animals treated with LAFGE. Hepatic injury was alleviated by either LAFGE or D‐18‐007, but the overall effect was greater with D‐18‐007, as shown by the necrosis, histological, and serum analyses. The decrease in GalN/LPS‐induced lipid peroxidation in liver tissue was also greater with D‐18‐007 than with LAFGE, whereas the decrease in hepatic IL‐6 protein was similar between the two treatments. Moreover, a comparison of bile output in normal animals showed that D‐18‐007 has stronger choleretic activity than LAFGE. Taken together, our results from this acute liver injury model suggest that D‐18‐007 has an enhanced hepatoprotective effect compared with LAFGE alone.
Multi-agent reinforcement learning (MARL) is a powerful technology for constructing interactive artificial intelligence systems in applications such as multi-robot control and self-driving cars. Unlike supervised learning or single-agent reinforcement learning, where network pruning is actively exploited, it remains unclear how pruning will behave in multi-agent reinforcement learning, with its cooperative and interactive characteristics. In this paper, we present a real-time sparse training acceleration system named LearningGroup, which, for the first time, applies network pruning to the training of MARL through an algorithm/architecture co-design approach. We create sparsity using a weight grouping algorithm and propose an on-chip sparse data encoding loop (OSEL) that enables fast encoding with an efficient implementation. Based on the OSEL's encoding format, LearningGroup performs efficient weight compression and allocates the computation workload to multiple cores, where each core handles multiple sparse rows of the weight matrix simultaneously with vector processing units. As a result, the LearningGroup system reduces the cycle time and memory footprint for sparse data generation by up to 5.72× and 6.81×, respectively. Its FPGA accelerator achieves 257.40–3629.48 GFLOPS throughput and 7.10–100.12 GFLOPS/W energy efficiency under various MARL conditions, which is up to 7.13× higher throughput and 12.43× better energy efficiency than an Nvidia Titan RTX GPU, thanks to fully on-chip training and the highly optimized dataflow/data format enabled by the FPGA. Most importantly, the accelerator achieves a speedup of up to 12.52× for processing sparse data over the dense case, the highest among state-of-the-art sparse training accelerators.
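To make the pruning idea concrete, below is a minimal Python sketch of group-wise magnitude pruning followed by a per-row sparse encoding. It assumes the paper's "weight grouping" resembles structured pruning (partitioning each weight row into fixed-size groups and zeroing low-magnitude groups); the group size, sparsity target, L1 scoring, the function names `group_prune`/`encode_sparse_rows`, and the CSR-like encoding are all illustrative assumptions, not the paper's actual algorithm or the OSEL format.

```python
# Sketch of group-wise magnitude pruning + per-row sparse encoding.
# All algorithmic details here are assumptions; the paper's weight
# grouping algorithm and OSEL encoding are not specified in the abstract.
import numpy as np

def group_prune(w: np.ndarray, group_size: int = 4, sparsity: float = 0.5) -> np.ndarray:
    """Zero the lowest-magnitude weight groups within each row."""
    w = w.copy()
    rows, cols = w.shape
    assert cols % group_size == 0, "row length must be divisible by group size"
    groups = w.reshape(rows, cols // group_size, group_size)
    # Score each group by its L1 norm (an assumed scoring criterion).
    scores = np.abs(groups).sum(axis=-1)
    # Keep the top (1 - sparsity) fraction of groups per row.
    k = int(round((1.0 - sparsity) * scores.shape[1]))
    keep = np.argsort(-scores, axis=1)[:, :k]
    mask = np.zeros_like(scores, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=1)
    groups *= mask[..., None]  # zero whole groups in place
    return groups.reshape(rows, cols)

def encode_sparse_rows(w: np.ndarray):
    """Toy CSR-like encoding: (column indices, nonzero values) per row,
    a stand-in for a hardware-friendly format such as OSEL's."""
    out = []
    for row in w:
        idx = np.flatnonzero(row)
        out.append((idx, row[idx]))
    return out

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16)).astype(np.float32)
sparse_w = group_prune(w, group_size=4, sparsity=0.5)
encoded = encode_sparse_rows(sparse_w)
print(f"density: {np.count_nonzero(sparse_w) / sparse_w.size:.2f}")
```

Grouping zeros at a fixed granularity is what makes the compressed rows regular enough for multiple cores to process them in parallel with vector units, as the abstract describes.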