We identify a pervasive yet previously undocumented threat to the reliability of MTurk data—and discuss how this issue is symptomatic of opportunities and incentives that facilitate fraudulent behavior within online recruitment platforms. In doing so, we explain how IP addresses were never intended to identify individuals and are therefore likely insufficient for detecting and mitigating emergent risks to data integrity. We discuss MTurk samples for two studies that include alarming proportions of participants who circumvent an entire set of conventional sample screening methods—and provide disturbingly low-quality responses. These “bad actors” exploited inherent limitations of IP screening procedures by using virtual private servers (VPS) that concealed the IP addresses of their local devices. While service providers now help target this abuse, the underlying limitations of IP screening procedures remain. Our findings emphasize the importance of continued diligence within the research community to identify and mitigate evolving threats to data integrity.
Audit firms are investing billions of dollars to develop artificial intelligence (AI) systems that will help auditors execute challenging tasks (e.g., evaluating complex estimates). Although firms assume AI will enhance audit quality, a growing body of research documents that individuals often exhibit "algorithm aversion," the tendency to discount computer-based advice more heavily than human advice, even when the advice is otherwise identical. Therefore, we conduct an experiment to examine how algorithm aversion manifests in auditor judgments. Consistent with theory, we find that auditors receiving contradictory evidence from their firm's AI system (instead of a human specialist)
Regulators now require auditors to provide information about how they evaluate complex estimates. Because users encounter this auditor-provided information alongside management-provided information, we jointly examine the value relevance of these disclosures. We also examine whether visual cues in audit reports influence how nonprofessional investors use these disclosures. We find that disclosures from managers and auditors provide different value-relevant information about the same underlying issue. While users struggle to weight fully narrative auditor disclosures in their valuation judgments without corresponding management disclosures, visual cues facilitate their weighting of information about the audit. Specifically, users take increased price protection when auditor disclosures also include visual cues. However, consistent with market signaling theory, corresponding voluntary disclosures from management attenuate this price protection. This suggests management can mitigate negative valuation effects that may arise from auditor disclosures, and implies that visual cues in audit reports can prompt managers to increase disclosure transparency.
Data Availability: Contact the authors.
SYNOPSIS:
The purpose of this study is to guide practice and future research by examining contemporary fraud brainstorming practices. Using field data collected from audits conducted during 2013–2014, we investigate team characteristics; attendance and communication; brainstorming structure, timing, and effort; and brainstorming quality. Results show that although some practices are similar to those reported in earlier field studies, there are interesting differences (e.g., decreased use of checklists, shorter sessions, and risk-based deployment of resources in brainstorming). These differences suggest brainstorming has evolved throughout the intervening period, during which new audit standards became effective and the PCAOB criticized auditors' performance in fraud risk identification and risk response generation. We also examine differences in audit team characteristics and brainstorming practices across risk and trading-status partitions. Results reveal auditors deploy more resources to brainstorming when engagement risk is heightened (i.e., publicly traded clients with high fraud and/or high inherent risk); correspondingly, brainstorming quality is higher on these engagements. Collectively, our findings indicate risk-based resource allocations in brainstorming and diligent responses to regulatory concerns.