The carbon footprint of a cold chain logistics system refers to the greenhouse gas emissions caused, directly or indirectly, by each link in cold chain logistics activities. Because cold chain logistics is one of the main carbon emitters in the logistics sector, research on how to reduce its carbon emissions plays an important role in energy conservation and emission reduction. Building on an in-depth analysis of the carbon footprint of cold chain logistics, this paper introduces a distance coefficient and freshness parameters into the optimization model and uses the life cycle assessment method and the input-output method to delimit the carbon footprint accounting boundary for fresh products at each link of the cold chain. The model calculates the carbon emissions generated by the production and operation activities of each place of origin, distribution center, retailer, and waste disposal site during the circulation of fresh products. A carbon footprint optimization model is then established to examine how to balance carbon constraints against cost minimization. Based on an analysis of the simulation results, countermeasures are proposed from the perspectives of both government and enterprises to achieve energy conservation and emission reduction more effectively and to guide the cold chain logistics industry toward sustainable development.
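As a minimal sketch of the cost-carbon trade-off at the heart of such a model, the toy selection below picks the cheapest shipping route whose emissions stay under a carbon cap. All route names, costs, and emission figures here are hypothetical; the paper's actual optimization model is far richer (multi-echelon, with distance coefficients and freshness parameters).

```python
# Hypothetical per-tonne (cost, kg CO2e) figures for illustration only.
routes = {
    "origin_A->DC1": (120.0, 35.0),
    "origin_B->DC1": (95.0, 55.0),
    "origin_A->DC2": (140.0, 30.0),
    "origin_B->DC2": (110.0, 48.0),
}

def cheapest_route_under_cap(routes, carbon_cap):
    """Minimize cost subject to an emissions cap (the carbon constraint)."""
    feasible = {name: (c, e) for name, (c, e) in routes.items() if e <= carbon_cap}
    if not feasible:
        return None  # cap too tight: no feasible plan
    return min(feasible.items(), key=lambda kv: kv[1][0])

best = cheapest_route_under_cap(routes, carbon_cap=50.0)
# The unconstrained optimum (origin_B->DC1, cost 95) violates the cap,
# so the constrained optimum shifts to a costlier but cleaner route.
```

Tightening the cap in this sketch raises the minimum achievable cost, which is exactly the trade-off the paper's model is designed to quantify.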
Chest X-ray has become one of the most common examinations in diagnostic radiology, helping expert radiologists identify patients at potential risk of cardiac and pulmonary diseases. However, it remains a challenge for expert radiologists to assess thousands of cases in a short period, so deep learning methods have been introduced to tackle this problem. Because the diseases are correlated with one another and exhibit hierarchical features, traditional classification schemes do not achieve good performance. To capture the correlations among diseases, GCN-based models have been introduced that combine label-graph features with features extracted from the images to make predictions. This scheme works well only with high-quality image features, so a backbone with high computational cost plays a vital role. However, fast prediction is also needed in diagnostic radiology, especially in emergencies or in regions with limited computing facilities. We therefore propose an efficient convolutional neural network with a GCN, named SGGCN, to meet the need for efficient computation with considerable accuracy. SGGCN uses SGNet-101 as its backbone, built from ShuffleGhost blocks (Huang et al., 2021), to extract features at low computational cost. To make full use of the information in the GCN, a new GCN architecture is designed whose GCNM module combines information from different layers, allowing us to exploit various hierarchical features while making the GCN scheme faster. Experiments on the CheXpert dataset show that SGGCN achieves considerable performance.
Compared with a GCN with a ResNet-101 (He et al., 2015) backbone (test AUC 0.8080, 4.7M parameters, 16.0B FLOPs), SGGCN achieves a test AUC of 0.7831 (−3.08%) with 1.2M parameters (−73.73%) and 3.1B FLOPs (−80.82%), while a GCN with a MobileNet (Sandler and Howard, 2018) backbone achieves a test AUC of 0.7531 (−6.79%) with 0.5M parameters (−88.46%) and 0.66B FLOPs (−95.88%).
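For readers unfamiliar with the GCN propagation rule underlying such label-correlation models, a minimal NumPy sketch of one layer is shown below. It computes ReLU(D^(-1/2)(A+I)D^(-1/2) H W) over a disease-label graph; the adjacency matrix, embedding sizes, and weights are invented for illustration and do not reflect SGGCN's actual graph, dimensions, or the GCNM module's layer-combination design.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step with symmetric normalization and ReLU."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(0)
# Hypothetical 3-node disease co-occurrence graph (e.g. node 0 co-occurs with 1 and 2).
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
H = rng.standard_normal((3, 4))   # initial label embeddings
W = rng.standard_normal((4, 2))   # learned layer weights
out = gcn_layer(A, H, W)          # refined label representations, shape (3, 2)
```

In GCN-based classifiers of this family, the refined label representations are typically dotted against image features from the CNN backbone to produce per-disease scores.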
Deep learning algorithms face limitations in virtual reality applications because of memory cost, computational cost, and real-time constraints. Models with strong performance often carry enormous numbers of parameters and large-scale structures, making them hard to port to embedded devices. In this paper, inspired by GhostNet, we propose an efficient structure, ShuffleGhost, that exploits the redundancy in feature maps to reduce computation while addressing several drawbacks of GhostNet. GhostNet suffers from the high computational cost of convolution in the Ghost module and the shortcut, and its restriction on downsampling makes it difficult to apply the Ghost module and Ghost bottleneck to other backbones. This paper proposes three new ShuffleGhost structures to tackle these drawbacks. The ShuffleGhost module and ShuffleGhost bottlenecks adopt the shuffle layer and group convolution from ShuffleNet; they are designed to redistribute the feature maps concatenated from the Ghost feature maps and the primary feature maps, eliminating the gap between them while extracting features. An SENet layer is then adopted to reduce the computational cost of group convolution and to evaluate the importance of the concatenated feature maps, assigning them proper weights. Experiments show that ShuffleGhostV3 has fewer trainable parameters and FLOPs while preserving accuracy, and with proper design it can be more efficient on both GPU and CPU.
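A minimal NumPy sketch of the two ingredients described above: ghost-feature generation (primary maps plus maps from a cheap operation) followed by a ShuffleNet-style channel shuffle that interleaves, i.e. redistributes, the primary and ghost channels. The cheap operation here is a trivial scaling purely for illustration; the real Ghost module uses learned convolutions, and ShuffleGhost adds group convolution and an SE layer on top.

```python
import numpy as np

def channel_shuffle(x, groups):
    """ShuffleNet-style channel shuffle on an (N, C, H, W) tensor."""
    n, c, h, w = x.shape
    assert c % groups == 0
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))

def ghost_features(x, cheap_op=lambda t: 0.5 * t):
    """Concatenate primary maps with 'ghost' maps from a cheap op, then shuffle.

    The shuffle interleaves primary and ghost channels (p0,g0,p1,g1,...),
    so a following group convolution sees a mix of both kinds of map.
    """
    ghost = cheap_op(x)
    return channel_shuffle(np.concatenate([x, ghost], axis=1), groups=2)

x = np.arange(2 * 3 * 2 * 2, dtype=float).reshape(2, 3, 2, 2)
y = ghost_features(x)  # channels double: (2, 6, 2, 2)
```

Without the shuffle, a group convolution with two groups would process primary and ghost maps in isolation; interleaving them is what lets the groups exchange information cheaply.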