SYNOPSIS: This paper addresses information processing weaknesses and limitations that can impede the effective use and analysis of Big Data in an audit environment. Drawing on the psychology and auditing literatures, we present the behavioral implications of Big Data for audit judgment by addressing the issues of information overload, information relevance, pattern recognition, and ambiguity. We also discuss the challenges auditors encounter when incorporating Big Data into audit analyses and the analytical tools companies currently use to analyze Big Data. The manuscript concludes by raising questions that future research might address related to utilizing Big Data in auditing.
After decades of frustration with long “AI Winters,” various business industries are witnessing the arrival of AI's “Spring,” with its massive and compelling benefits. Auditing will also evolve with the application of AI. Recently, there has been a progressive evolution of technology aimed at creating “artificially intelligent” devices. Although this evolution has been punctuated by false starts and exaggerated claims, there is some consensus that substantive progress has been made in the last few years through the adoption of deep learning, in conjunction with much faster machines and dimensionally larger storage (and samples). Auditing has lagged broader business adoption in the past (Oldhouser 2016), but it is ripe for partial automation because of its labor intensiveness and its range of decision structures. Several accounting firms have disclosed substantive investments in AI. This paper proposes various areas of AI-related research to examine where this emerging technology is most promising. Moreover, it raises a series of methodological and evolutionary research questions aimed at studying the AI-driven transformation of today's world of audit into the assurance of the future.
External auditors and management increasingly rely on control risk assessments conducted by internal auditors. Consequently, it is crucial to ensure the quality of such assessments and to identify irregular instances that deviate from the normal pattern of assessments. Moreover, processing and prioritizing a large number of outlying internal auditors' assessments can help their superiors, as well as external auditors, overcome the human limitations of dealing with information overload and direct their investigations toward the more suspicious cases, thereby improving overall audit efficiency. In this paper, we use historical control risk assessment data obtained from the internal audit department of a multinational consumer products company to estimate an ordered logistic regression model that provides a quality review of internal auditors' and business owners' assessments of internal controls. We identify anomalous cases where the recorded assessment does not conform to the expected value and develop a methodology to prioritize these outliers. The results indicate that the proposed model can serve as a quality review tool, improving audit efficiency, as well as a learning tool that non-experts can employ to gain expert-like knowledge. Additionally, the proposed ranking metrics proved effective in helping auditors focus their efforts on the more problematic audits.
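A minimal sketch of the kind of workflow this abstract describes, assuming a simple tabular data layout: fit an ordered logistic regression to historical control risk assessments, flag assessments whose recorded rating the model finds unlikely, and rank those outliers for review. The file name, feature columns, and anomaly score below are illustrative assumptions, not the paper's actual model specification or ranking metrics.

```python
# Illustrative sketch only: assumed columns and a simple anomaly score,
# not the paper's specification or prioritization metrics.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical historical data: an ordinal risk rating plus explanatory features.
df = pd.read_csv("control_risk_assessments.csv")          # assumed file
y = df["risk_rating"]                                      # e.g., 1 = low ... 4 = high
X = pd.get_dummies(
    df[["control_type", "prior_deficiencies", "entity_size"]],  # assumed features
    drop_first=True,
).astype(float)

# Fit an ordered logit model on the historical assessments.
result = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)

# Predicted probability of each rating category for every assessment.
probs = np.asarray(result.predict(X))                      # shape: (n_obs, n_categories)

# Simple anomaly score: 1 minus the probability the model assigns to the
# rating actually recorded (low probability => more anomalous).
categories = np.sort(y.unique())                           # matches the model's category order
recorded = y.map({c: i for i, c in enumerate(categories)}).to_numpy()
df["anomaly_score"] = 1.0 - probs[np.arange(len(df)), recorded]

# Prioritize review: most anomalous assessments first.
print(df.sort_values("anomaly_score", ascending=False)[["risk_rating", "anomaly_score"]].head(10))
```

Ranking by how unlikely the model finds the recorded rating is only one simple prioritization rule; the paper develops and evaluates its own ranking metrics.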
ChatGPT, a chatbot built on a large language model, has garnered considerable attention for its ability to respond to users' questions. Using data from 14 countries and 186 institutions, we compare ChatGPT and student performance on 28,085 questions from accounting assessments and textbook test banks. As of January 2023, ChatGPT provides correct answers for 56.5 percent of questions and partially correct answers for an additional 9.4 percent. When considering point values for questions, students significantly outperform ChatGPT, with a 76.7 percent average on assessments compared to 47.5 percent for ChatGPT if no partial credit is awarded and 56.5 percent if partial credit is awarded. Still, ChatGPT performs better than the student average on 15.8 percent of assessments when we include partial credit. We provide evidence of how ChatGPT performs on different question types, accounting topics, class levels, open/closed assessments, and test bank questions. We also discuss implications for accounting education and research.
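A small worked example, using made-up question data rather than the study's dataset, of the two scoring conventions the abstract contrasts: counting only fully correct answers against total point values versus also awarding partial credit.

```python
# Hypothetical (points_possible, points_earned) pairs for a handful of questions.
questions = [
    (2.0, 2.0),   # fully correct
    (3.0, 1.5),   # partially correct
    (1.0, 0.0),   # incorrect
    (4.0, 4.0),   # fully correct
]

total = sum(possible for possible, _ in questions)

# No partial credit: a question only counts if the full point value was earned.
no_partial = sum(possible for possible, earned in questions if earned == possible) / total

# With partial credit: all earned points count.
with_partial = sum(earned for _, earned in questions) / total

print(f"score without partial credit: {no_partial:.1%}")    # 60.0%
print(f"score with partial credit:    {with_partial:.1%}")  # 75.0%
```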