1997
DOI: 10.1520/jfs14207j
Writer Identification by Professional Document Examiners

Abstract: Reliable data on the capabilities of professional document examiners are scarce, rendering most past characterizations of these capabilities somewhat speculative. We report on a comprehensive test administered to more than 100 professional document examiners, intended to close this data gap in the area of writer identification. Each examiner made 144 pair-wise comparisons of freely-created original handwritten documents. The task was to determine whether or not a "match" was detected, namely whether or not the…

Cited by 68 publications (40 citation statements); citing publications span 2007–2019. References 5 publications.

Citation statements, ordered by relevance:
“…We caution the reader that the proficiency of professional document examiners compared to that of laypersons in detecting forgeries remains a controversial topic (see for example, [6], [31]). Indeed, while recent studies [11], [12] seem to indicate that a well-trained subset of the population can perform significantly better than chance at this task, these results are still being openly debated. For that reason, we primarily use our analysis of human judges as yet another indication of the importance of using strong forgers for evaluation purposes.…”
Section: An Alternative Perspective: Forgery Detection By Humans
citation type: mentioning
confidence: 99%
“…It was found that the control group scored a mean of 4.0 (SEM = 1.8) and thus showed a weak ability to differentiate between stimuli, whilst the FDEs scored a mean of 8.5 (SEM = 1.2) and performed significantly better on this task than the control group (independent samples t-test, t = 2.465, df = 18, p = 0.024). Thus, there is evidence that subjects can discriminate between forged and disguised signatures, and that the expertise effect of trained FDEs reported in previous studies (1–6) is also evident in this type of task. Of interest, one FDE subject was able to achieve a very high score of 13 on this test (13 correct calls, three inconclusive calls, and zero errors), indicating that with the test signature under study, it was possible for the visual and cognitive system of a highly skilled FDE to very accurately discriminate between the forged and disguised signatures.…”
Section: Results
citation type: mentioning
confidence: 50%
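The independent-samples t-test quoted above can be reproduced from raw scores with standard statistical tooling. The following is a minimal Python sketch, not the cited study's analysis: the per-subject scores are hypothetical placeholders, and only the group sizes are constrained by the reported df = 18 (e.g., two groups of 10), so the output will not match the cited values exactly.

    from scipy import stats

    # Hypothetical per-subject scores; the cited study's raw data are not given here.
    # df = 18 implies n1 + n2 - 2 = 18, e.g. 10 FDEs and 10 control subjects.
    fde_scores = [8, 9, 10, 7, 8, 9, 13, 8, 7, 6]
    control_scores = [4, 3, 5, 2, 6, 4, 3, 5, 4, 4]

    # Two-sided independent-samples t-test, as in the quoted statement.
    t_stat, p_value = stats.ttest_ind(fde_scores, control_scores)
    df = len(fde_scores) + len(control_scores) - 2

    print(f"t = {t_stat:.3f}, df = {df}, p = {p_value:.3f}")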
“…As professional QDEs have been shown to perform better than lay persons (22), we can also conclude that their performance would be superior to that of the current system. One caveat in our results comparing the performance of humans and machines is that the present testing was on a small set of 12 individuals; larger-scale testing would be needed to compare the absolute performances of QDEs, lay persons, and the machine.…”
Section: Comparison With Human Performance
citation type: mentioning
confidence: 67%