2022
DOI: 10.1111/1556-4029.15031

Blind testing in firearms: Preliminary results from a blind quality control program

Abstract: Open proficiency tests meet accreditation requirements and measure examiner competence but may not represent actual casework. In December 2015, the Houston Forensic Science Center began a blind quality control program in firearms examination. Mock cases are created to mimic routine casework so that examiners are unaware they are being tested. Once the blind case is assigned to an examiner, the evidence undergoes microscopic examination and comparison to determine whether the fired evidence submitted was fired …

Cited by 14 publications (14 citation statements)
References 15 publications
“…The tendency for the experts in these studies to disproportionately apply inconclusive decisions to different-source evidence, although not universal (it does not seem to characterize DNA evidence, for instance; Butler et al, 2018), is not an anomaly. Since publication of the PCAST report, other forensic validation studies that also met all or most of the PCAST’s criteria for scientific rigor have replicated the pattern for cartridge cases (Guyll et al, 2023; Monson et al, 2023; Neuman et al, 2022) and shown it to also characterize bullets, footwear, handwriting, and palm print evidence (Eldridge et al, 2021; Hicklin, Eisenhart, et al, 2022; Hicklin, McVicker, et al, 2022; Monson et al, 2023; Neuman et al, 2022). Moreover, an analytic review of the polygraph showed an analogous pattern whereby polygraphers applied inconclusive decisions to truthful suspects nearly three times more often than to deceptive suspects (Honts & Schweinle, 2009).…”
Section: Forensic Validation Studies (mentioning)
confidence: 88%
“…Verification drastically mitigates the chance of misidentification, offering collateral opportunities to motivate corrective action and to impose remedial training for examiners whose conclusions must be reversed after verification. Blind proficiency testing of individual examiners is challenging to implement but has been successfully demonstrated [5,14,23-26]. Analogously, double reading, especially when blinded, is a validated method to reduce error and increase specificity and selectivity in mammography [27].…”
Section: Reply: Authors' Response to Gutierrez et al. Commentary on … (mentioning)
confidence: 99%
“…But the data discussed in this commentary underscore that any empirical measures of any particular witness's accuracy must be obtained at the individual-examiner level rather than assumed from assessments of method performance overall. Obvious from Table 3, judges cannot merely utilize, when assessing qualifications as opposed to method validity, the accuracy rates reported in studies of firearms examination (even accounting for confidence intervals) because they do not bound the performance of any given witness: the accuracy rates of … to response data for other scholars to assess [1,21]; and (2) judges cannot rely on existing proficiency testing data because these tests are almost all declared and not difficult enough to provide meaningful assessments of accuracy on casework samples [19,22,23].…”
Section: 8% (mentioning)
confidence: 99%