2019
DOI: 10.1111/csp2.54
Quantifying data quality in a citizen science monitoring program: False negatives, false positives and occupancy trends

Abstract: Data collected by volunteers are an important source of information used in species management decisions, yet concerns are often raised over the quality of such data. Two major forms of error exist in occupancy datasets: failing to observe a species when present (imperfect detection, also known as false negatives), and falsely reporting a species as present (false-positive errors). Estimating these rates allows us to quantify volunteer data quality, and may prevent the inference of erroneous trends. We use a new…
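The abstract's warning about "erroneous trends" has a simple mechanical explanation: a naive occupancy estimate (the share of sites with at least one report) confounds true occupancy with both error rates. The sketch below is an illustration assembled for this summary, not code or data from the paper; all parameter values are illustrative assumptions. It holds true occupancy fixed while per-visit detection improves between periods, and the naive estimate shows a spurious upward trend.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_visits = 1000, 3
psi, p_false = 0.30, 0.03                 # assumed true occupancy / false-positive rate
z = rng.random(n_sites) < psi             # true occupancy states (constant over time)

# True occupancy never changes, but per-visit detection improves between periods
# (say, volunteers gain experience); the naive estimate drifts upward anyway.
for period, p_detect in [("early", 0.25), ("late", 0.50)]:
    p_visit = np.where(z, p_detect, p_false)       # per-visit report probability
    y = rng.random((n_visits, n_sites)) < p_visit  # simulated volunteer records
    naive = y.any(axis=0).mean()                   # >= 1 report => "occupied"
    print(f"{period}: naive occupancy {naive:.2f} (truth stays {psi:.2f})")
```

With these assumed rates the naive estimate rises from roughly 0.23 to roughly 0.32 while the truth is constant at 0.30, which is exactly the kind of artifact that estimating false-negative and false-positive rates is meant to prevent.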

Cited by 24 publications (27 citation statements)
References 64 publications
“…Also the risk of misidentification varies depending on both rarity and observer skills, with more false‐positive reports of rare species by skilled observers but more false‐positive reports of common species by less experienced observers (Farmer, Leonard, & Horn, 2012). The impact of misidentifications on model performance is difficult to assess and varies between studies (Cruickshank, Bühler, & Schmidt, 2019; Ruiz‐Gutierrez, Hooten, & Campbell Grant, 2016). We assume that the risk of misidentification of rare species is quite low in our dataset due to the high self‐validation control carried out by the bird-watching community.…”
Section: Discussion
Mentioning confidence: 99%
“…Carefully considering sources of variation in both detection and occupancy probabilities, as well as including random effects, can help ensure that all sources of heterogeneity in the data are accounted for. The issue of false identifications has been well documented in the citizen science literature (Ruiz-Gutierrez et al. 2016, Cruickshank et al. 2019), and can be addressed in the modeling process (Miller et al. 2011). Finally, adapting unstructured data for use in an occupancy-detection framework requires careful consideration of how to define spatial units and surveys.…”
Section: Discussion
Mentioning confidence: 99%
“…However, the situation is not really different from field observations of species, an approach to which practitioners are well accustomed. Species can also be misidentified or overlooked in the field, leading to false positives and false negatives in observations as well (Cruickshank et al. 2019). In the workflow for the detection of amphibian species with eDNA from water samples by metabarcoding, we therefore decided to actively communicate uncertainties in species identification by introducing a sample-specific threshold on the number of sequencing reads required for a certain species detection (presence), as opposed to an uncertain species detection (uncertain detection) (see above).…”
Section: Challenges Met During the Development of the Workflows
Mentioning confidence: 99%