The emergence of low-cost, user-friendly and very compact air pollution platforms enables observations at high spatial resolution in near-real-time and provides new opportunities to simultaneously enhance existing monitoring systems and engage citizens in active environmental monitoring. This offers a whole new set of capabilities for assessing human exposure to air pollution. However, the data generated by these platforms are often of questionable quality. We have conducted an exhaustive evaluation of 24 identical units of a commercial low-cost sensor platform against CEN (European Committee for Standardization) reference analyzers, evaluating their measurement capability over time and across a range of environmental conditions. Our results show that performance varies spatially and temporally, as it depends on the atmospheric composition and the meteorological conditions, and that it also varies from unit to unit, which makes it necessary to examine the data quality of each node before its use. In general, guidance is lacking on how to test such sensor nodes and ensure adequate performance prior to marketing these platforms. We have implemented and tested diverse metrics to assess whether a sensor can be employed for applications that require high accuracy (e.g., meeting the Data Quality Objectives defined in air quality legislation, or epidemiological studies) or lower accuracy (e.g., representing the pollution level on a coarse scale for purposes such as awareness raising). Data quality is a pertinent concern, especially in citizen science applications where citizens collect and interpret the data. In general, while low-cost platforms offer too low an accuracy for regulatory or health purposes, they can provide relative and aggregated information about the observed air quality.
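To make the kind of sensor-versus-reference comparison described above concrete, the following minimal Python sketch computes mean bias, RMSE and R² for one node colocated with a reference analyzer. It is a hypothetical illustration with synthetic data: the function name evaluate_node, the noise model and the example concentrations are assumptions for demonstration, not the procedure or data used in the study.

import numpy as np

def evaluate_node(sensor, reference):
    """Return basic agreement metrics for paired (e.g., hourly) observations."""
    mask = ~np.isnan(sensor) & ~np.isnan(reference)   # keep complete pairs only
    s, r = sensor[mask], reference[mask]
    bias = float(np.mean(s - r))                      # systematic offset
    rmse = float(np.sqrt(np.mean((s - r) ** 2)))      # overall error magnitude
    r2 = float(np.corrcoef(s, r)[0, 1] ** 2)          # squared Pearson correlation
    return {"n": int(mask.sum()), "bias": bias, "rmse": rmse, "r2": r2}

# Synthetic example standing in for one of the 24 units (values are made up).
rng = np.random.default_rng(0)
ref = rng.uniform(5, 80, size=500)                    # reference concentrations, ug/m3
sen = 0.8 * ref + 5 + rng.normal(0, 8, size=500)      # biased, noisy sensor response
print(evaluate_node(sen, ref))

Metrics of this kind can then be checked against whatever accuracy threshold the intended application demands, from regulatory Data Quality Objectives down to coarse awareness-raising use.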
Recent developments in sensor and communication technologies have made portable air-quality (AQ) micro-sensing units (MSUs) feasible. These MSUs allow AQ measurements in many new applications, such as ambulatory exposure analyses and citizen science. Typically, the performance of these devices is assessed using the mean error or correlation coefficients with respect to laboratory equipment. However, these criteria do not represent how such sensors perform outside laboratory conditions in large-scale field applications, and they do not cover all aspects of possible differences in performance between sensor-based and standardized equipment, or changes in performance over time. This paper presents a comprehensive Sensor Evaluation Toolbox (SET) for evaluating AQ MSUs by a range of criteria, to better assess their performance in varied applications and environments. The SET includes four new schemes for evaluating sensors' capability to locate pollution sources, represent the pollution level on a coarse scale, and capture the high temporal variability of the observed pollutant, as well as their reliability. Each of the evaluation criteria assesses sensor performance in a different way; together they constitute a holistic evaluation of the suitability and usability of the sensors in a wide range of applications. Applying the SET to measurements acquired by 25 MSUs deployed in eight cities across Europe showed that the suggested schemes facilitate a comprehensive cross-platform analysis that can be used to determine and compare sensor performance. The SET was implemented in R and the code is available on the first author's website.
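One of the SET criteria listed above, representing the pollution level on a coarse scale, can be illustrated with a short Python sketch: concentrations are binned into a few broad classes and the fraction of paired observations assigned to the same class is reported. The class edges and the synthetic data below are illustrative assumptions only, not the thresholds or implementation used in the SET (which is written in R).

import numpy as np

CLASS_EDGES = [0, 20, 40, 60, np.inf]   # e.g., low / moderate / high / very high (hypothetical)

def coarse_class(values):
    """Map concentrations to coarse class indices 0..len(CLASS_EDGES)-2."""
    return np.digitize(values, CLASS_EDGES[1:-1])

def class_match_rate(sensor, reference):
    """Fraction of paired observations assigned to the same coarse class."""
    return float(np.mean(coarse_class(sensor) == coarse_class(reference)))

# Synthetic example: a noisy sensor can still agree on the coarse level most of the time.
rng = np.random.default_rng(1)
ref = rng.uniform(0, 80, size=1000)
sen = ref + rng.normal(0, 6, size=1000)
print(f"coarse-scale agreement: {class_match_rate(sen, ref):.2f}")

A criterion like this deliberately rewards sensors that track the general pollution level even when their point-by-point accuracy is modest, which is the distinction both abstracts draw between regulatory-grade and awareness-raising use.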