Summary
Publicly Available Specification 2050-2011 (PAS 2050), the Greenhouse Gas Protocol Product Standard, and the forthcoming guideline 14067 from the International Organization for Standardization (ISO) have helped to propel carbon footprinting from a subdiscipline of life cycle assessment (LCA) to the mainstream. However, applying carbon footprinting to large portfolios of many distinct products and services is immensely resource-intensive. Even where it is achieved, it often fails to inform company-wide carbon reduction strategies because footprint data are disjointed or do not cover the whole portfolio. We introduce a novel approach to generate standard-compliant product carbon footprints (CFs) for companies with large portfolios at a fraction of the previously required time and expertise. The approach was developed and validated on an LCA dataset covering 1,137 individual products from a global packaged consumer goods company. Three novel techniques work in concert in a single approach that enables practitioners to calculate thousands of footprints virtually simultaneously: (i) a uniform data structure enables footprinting all products and services by looping the same algorithm; (ii) concurrent uncertainty analysis guides practitioners to gradually improve the accuracy of only those data that materially impact the results; and (iii) a predictive model generates estimated emission factors (EFs) for materials, thereby eliminating the manual mapping of a product or service's inventory to EF databases. These autogenerated EFs enable non-LCA experts to calculate approximate CFs and alleviate resource constraints for companies embarking on large-scale product carbon footprinting. We discuss implementation roadmaps for companies, the further road-testing required to evaluate the approach's effectiveness for other product portfolios, its limitations, and future improvements to the fast-footprinting methodology.
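To make the three techniques concrete, the minimal sketch below shows how a uniform data structure lets one algorithm loop over an entire portfolio (i), with a stand-in predicted EF used wherever no database EF has been mapped (iii), and the share of the footprint resting on predicted EFs reported as a crude uncertainty proxy (ii). All names, structures, and numbers are hypothetical illustrations, not the paper's actual model or data.

```python
# Hypothetical sketch: portfolio-wide footprinting via one looped algorithm.
from dataclasses import dataclass

@dataclass
class LineItem:
    material: str       # material or service name from the bill of materials
    quantity_kg: float  # amount per functional unit

# Hand-mapped EFs from an LCA database (kg CO2e per kg); deliberately sparse.
MAPPED_EFS = {"PET": 2.7, "cardboard": 0.8}

def predicted_ef(material: str) -> float:
    """Stand-in for the predictive EF model: here just a flat portfolio-average
    value. The real model would estimate an EF from material attributes,
    removing the manual mapping step."""
    return 2.0  # hypothetical average, kg CO2e per kg

def footprint(items: list[LineItem]) -> tuple[float, float]:
    """Return (CF in kg CO2e, share of CF based on predicted EFs).
    The share is a crude uncertainty proxy: items dominating it are the
    ones worth mapping to real database EFs first."""
    total = predicted_share = 0.0
    for it in items:
        ef = MAPPED_EFS.get(it.material)
        contribution = it.quantity_kg * (ef if ef is not None else predicted_ef(it.material))
        total += contribution
        if ef is None:
            predicted_share += contribution
    return total, (predicted_share / total if total else 0.0)

# Because every product shares one data structure, the whole portfolio is
# footprinted by looping the same function.
portfolio = {
    "bottle": [LineItem("PET", 0.03), LineItem("label_paper", 0.002)],
    "carton": [LineItem("cardboard", 0.05)],
}
for name, items in portfolio.items():
    cf, share = footprint(items)
    print(f"{name}: {cf:.3f} kg CO2e ({share:.0%} from predicted EFs)")
```

In this reading, the uncertainty share directs the gradual refinement the abstract describes: practitioners replace predicted EFs with mapped ones only for the items that materially move the result.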