2014
DOI: 10.5430/afr.v3n2p191
Detecting Newcomb-Benford Digital Frequency Anomalies in the Audit Context: Suggested Chi2 Test Possibilities

Abstract: Digital Frequency Testing [DFT] has achieved justifiable currency as a valuable part of the auditor's panoply. A number of inference models, ranging from the parametric test for proportional differences to entropic screening, can be used to create information regarding the use of extended procedures to investigate the difference between the Observed digital frequency and the Expectation benchmark. One inferential model which seems ideal for DFT in the audit context is the Chi2 model as …
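To make the comparison concrete, here is a minimal sketch (in Python, not drawn from the paper) of the kind of Digital Frequency Test the abstract describes: observed first-digit counts are tallied and compared with the Newcomb-Benford expectation via the usual chi-square statistic. The function names are illustrative, not the authors'.

```python
import math
from collections import Counter

# Newcomb-Benford expectation for the leading digit: P(d) = log10(1 + 1/d), d = 1..9.
BENFORD_P = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x):
    """Leading (most significant) digit of a non-zero number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def chi2_dft(values):
    """Chi-square statistic comparing the Observed first-digit counts of `values`
    with the Newcomb-Benford Expectation benchmark."""
    digits = [first_digit(v) for v in values if v != 0]
    n = len(digits)
    observed = Counter(digits)
    return sum((observed.get(d, 0) - n * p) ** 2 / (n * p) for d, p in BENFORD_P.items())
```

A call such as `chi2_dft(ledger_amounts)` can then be compared against the 95% cut-off of 15.507 discussed in the citing passages below.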

Cited by 6 publications (6 citation statements) | References 9 publications
“…In this regard, Lusk & Halperin (2014b) argue that if the overall computed chi-square is > 15.507, which is the 95% inferential cut-off, then the dataset is Non-Conforming in nature. Also in this regard, the sample size anomaly does not come into play, as the sample sizes were projected using the upper limit of 440 suggested by Lusk & Halperin (2014b).…”
Section: Robustness Testing of the Principal BPP Results
confidence: 99%
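The 15.507 cut-off quoted above is the 0.95 quantile of the chi-square distribution with 8 degrees of freedom (nine first-digit cells minus one). A brief sketch of that decision rule, assuming the statistic is the first-digit chi-square computed as in the sketch after the abstract:

```python
from scipy.stats import chi2

# 95% inferential cut-off for a 9-cell first-digit profile (df = 9 - 1 = 8).
CUTOFF_95 = chi2.ppf(0.95, df=8)   # ≈ 15.507

def classify(chi2_statistic):
    # Non-Conforming when the overall chi-square exceeds the 95% cut-off.
    return "Non-Conforming" if chi2_statistic > CUTOFF_95 else "Conforming"
```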
“…"Listing" is a critical accrual criterion as this suggests that there was no evidence that the data generating processes of the firm were inappropriately modified, corrupted, or constrained so as not to be representative of generating processes that would be expected to produce data that would conform to the Newcomb-Benford profile. Here the research of Ley (1996); Nigrini and Mittermaier (1997); Durtschi, Hillison and Pacini (2004); Reddy and Sabastin (2012) and Lusk and Halperin (2014b) taken together suggest that Corrupted data generating processes often do not produce data that follows…”
Section: The Datasets Used to Test the Mixing Transition
confidence: 96%
“…What still governs is the overall chi-square; this is the only statistically-based inference signal that can be used. Finally, it is also the case that direct benchmarking creates a risk for the FP error anomaly, as illustrated by Lusk and Halperin (2014b; 2014c), where they argue for two random samples with sample size control in the range [315 to 440]. This then rationalizes the various cell profiles that we will now use in our ARL profiling.…”
Section: Profiling Screening Recommendations
confidence: 89%
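One plausible reading of the two-sample screen with sample-size control mentioned above is sketched below. Only the [315, 440] bounds come from the quoted passage; the sampling procedure and the helpers (`chi2_dft`, `classify` from the earlier sketches) are illustrative assumptions, not the cited authors' procedure.

```python
import random

def two_sample_screen(population, low=315, high=440, seed=None):
    """Draw two independent random samples with the size held in [low, high]
    and screen each with the chi-square DFT sketched earlier."""
    rng = random.Random(seed)
    verdicts = []
    for _ in range(2):                        # two independent random samples
        n = rng.randint(low, high)            # sample-size control in [315, 440]
        sample = rng.sample(list(population), n)
        verdicts.append(classify(chi2_dft(sample)))
    return verdicts
```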
“…The BPP was derived using Benford's 20 datasets that were realizations from many different experiential (i.e., real) "contexts" and so embodies the natural variation that may aid the SCB analyst in focusing on practical differences in comparative profiles, and 2. Using the Benford datasets, an interval screening test (see Table 1, BSW: Cols 3 & 4) developed by Lusk and Halperin (2014a; 2014b; 2014c) will greatly facilitate profile differentiation.…”
Section: A Practical Extension of the Log10 Profile
confidence: 99%
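As an illustration of what such an interval screen looks like in practice, the sketch below flags digits whose observed proportion falls outside a band around the Benford expectation. The actual BSW interval bounds of Table 1 are not reproduced in this report, so the ±2-standard-error band used here is a hypothetical stand-in, not the Lusk and Halperin intervals.

```python
import math

def interval_screen(observed_props, n):
    """Flag first digits whose observed proportion falls outside its screening
    interval; `observed_props` maps digit 1..9 to its observed proportion."""
    flagged = []
    for d in range(1, 10):
        p = math.log10(1 + 1 / d)                     # Benford expectation for digit d
        half_width = 2 * math.sqrt(p * (1 - p) / n)   # illustrative band, NOT the BSW bounds
        if not (p - half_width <= observed_props[d] <= p + half_width):
            flagged.append(d)
    return flagged
```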
“…Therefore, our testing result here can be used to verify the acuity of the NBDSSP in relation to the FNSE, especially as we narrow down the sample sizes to the range of 250 to 440, which spans the low-end tested by BLHL and the high-end tested by Lusk and Halperin (2014b). If the Hill dataset were to have been scored as Conforming at a high frequency for smaller sample sizes, then this would call into question the acuity of the NBDSSP relative to the FNSE.…”
Section: The False Negative Signaling Error: Incorrectly Believing Th…
confidence: 99%