We present a tool for modeling the performance of methane leak detection and repair programs that can be used to evaluate the effectiveness of detection technologies and proposed mitigation policies. The tool uses a two-state Markov model to simulate the evolution of methane leakage from an artificial natural gas field. Leaks are created stochastically, drawing from the current understanding of the frequency and size distributions at production facilities. Various leak detection and repair programs can be simulated to determine the rate at which each would identify and repair leaks. Integrating the methane leakage over time enables a meaningful comparison between technologies, using both economic and environmental metrics. We simulate four existing or proposed detection technologies: flame ionization detection, manual infrared camera, automated infrared drone, and distributed detectors. Comparing these four technologies, we find that over 80% of simulated leakage could be mitigated with a positive net present value, although the maximum benefit is realized by selectively targeting larger leaks. Our results show that low-cost leak detection programs can rely on high-cost technology, as long as it is applied in a way that allows for rapid detection of large leaks. Any strategy to reduce leakage therefore requires careful consideration of the differences between low-cost technologies and low-cost programs.
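The two-state (leaking vs. not leaking) Markov structure is simple enough to sketch in code. The fragment below is a minimal illustration of the simulation loop, not the tool's implementation; every parameter value (component count, leak-initiation probability, size distribution, survey interval, detection threshold) is a placeholder assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative parameters only; not the tool's calibrated values ---
N_COMPONENTS = 10_000      # potential leak sites in the simulated field
P_NEW_LEAK = 1e-4          # daily probability a non-leaking component starts leaking
SURVEY_INTERVAL = 180      # days between LDAR surveys
DETECTION_THRESHOLD = 1.0  # g/s; leaks below this are missed by the survey
SIM_DAYS = 5 * 365

def draw_leak_size(n):
    """Draw leak sizes (g/s) from a heavy-tailed lognormal distribution."""
    return rng.lognormal(mean=-2.0, sigma=2.0, size=n)

leak_rate = np.zeros(N_COMPONENTS)  # 0 => "no leak" state; >0 => "leaking" state
total_emitted = 0.0                 # grams of methane, integrated over time

for day in range(SIM_DAYS):
    # Transition: no-leak -> leak, with a stochastic size draw
    new = (leak_rate == 0) & (rng.random(N_COMPONENTS) < P_NEW_LEAK)
    leak_rate[new] = draw_leak_size(new.sum())

    # Transition: leak -> no-leak via an LDAR survey that repairs detected leaks
    if day % SURVEY_INTERVAL == 0:
        leak_rate[leak_rate >= DETECTION_THRESHOLD] = 0.0

    total_emitted += leak_rate.sum() * 86_400  # g/s * s/day

print(f"Total simulated leakage: {total_emitted / 1e6:.1f} tonnes CH4")
```

Comparing technologies then amounts to varying the detection threshold, survey interval, and per-survey cost, and integrating the leakage avoided relative to a no-LDAR baseline.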
Reducing methane emissions from oil and gas systems is a central component of US and international climate policy. Leak detection and repair (LDAR) programs using optical gas imaging (OGI)-based surveys are routinely used to mitigate fugitive emissions or leaks. New technologies and platforms such as planes, drones, and satellites promise more cost-effective mitigation than existing approaches, but to be approved for use in LDAR programs, they must demonstrate emissions mitigation equivalent to existing approaches. In this work, we use the FEAST modeling tool to (a) identify cost vs. mitigation trade-offs that arise from using new technologies and (b) provide a framework for the effective design of alternative LDAR programs. We identify several critical insights. First, LDAR programs can trade sensitivity for speed without sacrificing mitigation outcomes. Second, low-sensitivity or high-detection-threshold technologies have an effective upper bound on achievable mitigation that is independent of survey frequency. Third, the cost-effectiveness of tiered LDAR programs using site-level detection technologies depends on their ability to distinguish leaks from routine venting. Finally, "technology equivalence" based on mitigation outcomes differs across basins and should be evaluated independently in each. The FEAST model will enable operators and regulators to systematically evaluate new technologies in next-generation LDAR programs.
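The second insight can be made concrete with a short calculation: a technology blind to leaks below its detection threshold can never mitigate the emissions those leaks carry, so its achievable mitigation is capped by the emission share of detectable leaks, no matter how frequently it surveys. A minimal numerical sketch, assuming an illustrative lognormal leak-size distribution (not calibrated to any basin):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative heavy-tailed leak-size distribution (g/s); not basin-calibrated.
leaks = rng.lognormal(mean=-2.0, sigma=2.0, size=1_000_000)

for threshold in [0.01, 0.1, 1.0, 10.0]:
    detectable_share = leaks[leaks >= threshold].sum() / leaks.sum()
    print(f"threshold {threshold:6.2f} g/s -> "
          f"max mitigable share of emissions: {detectable_share:.1%}")
```

Because the bound depends only on the size distribution, increasing survey frequency raises costs without raising the ceiling; the distribution itself varies by basin, which is one reason "technology equivalence" must be evaluated per basin.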
Synthetic otolith marks are used at hundreds of hatcheries throughout the Pacific Rim to record the release location of salmon. Each year, human readers examine tens of thousands of otolith samples to identify the marks in salmon that are caught. The data inform dynamic management practices that maximize allowable catch while preserving populations, and guide hatchery investments. However, the method is limited by the time required to process otoliths, the inability to distinguish between wild and unmarked hatchery fish, and, in some cases, the subjective decisions of human readers. Automated otolith reading using computer vision has the potential to improve on all three of these limitations. Our work advances the field of automated otolith reading through a novel otolith classification algorithm that uses two neural networks trained with an adversarial algorithm to achieve 93% classification accuracy across four hatchery marks and unmarked otoliths. The algorithm relies exclusively on hemisection images of the otolith: no additional biological data are needed. Our work demonstrates a novel technique with modest training requirements that achieves unprecedented accuracy. The method can be easily adopted in existing otolith labs, scaled to accommodate additional marks, and does not require tracking additional information about the fish from which each otolith was retrieved.
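The abstract does not detail the adversarial scheme, so the sketch below shows only one common two-network pattern consistent with the description: the mark classifier (a shared encoder plus a linear head) is one network, and an auxiliary adversary that tries to recover a nuisance variable from the shared features is the other; the encoder is trained to defeat the adversary. The architecture, the binary nuisance label, and all hyperparameters are hypothetical placeholders, not the paper's method.

```python
import torch
import torch.nn as nn

N_CLASSES = 5  # four hatchery marks + unmarked

encoder = nn.Sequential(                      # shared image feature extractor
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
classifier = nn.Linear(32, N_CLASSES)         # predicts the otolith mark
adversary = nn.Sequential(                    # tries to predict a nuisance label
    nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2),
)

opt_main = torch.optim.Adam([*encoder.parameters(), *classifier.parameters()], lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(images, mark_labels, nuisance_labels, adv_weight=0.1):
    # 1) Update the adversary to predict the nuisance variable from frozen features.
    with torch.no_grad():
        feats = encoder(images)
    adv_loss = ce(adversary(feats), nuisance_labels)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) Update encoder + classifier: classify marks well while fooling the adversary.
    feats = encoder(images)
    main_loss = (ce(classifier(feats), mark_labels)
                 - adv_weight * ce(adversary(feats), nuisance_labels))
    opt_main.zero_grad(); main_loss.backward(); opt_main.step()

# Smoke test on random stand-ins for grayscale 64x64 hemisection images.
imgs = torch.randn(8, 1, 64, 64)
train_step(imgs, torch.randint(0, N_CLASSES, (8,)), torch.randint(0, 2, (8,)))
```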