
Artificial Intelligence and the Auditor: Are You Ready?

By Deniz Appelbaum and Jeffery Sorenson

 

Many businesses are either adopting Artificial Intelligence (AI) applications or considering their implementation for enterprise processes relevant to financial statement information. That means auditors may soon be required to understand how AI processes can impact the substance underlying that financial information. 

 

At least 55 public companies are now flagging AI applications as a new risk in their annual shareholder reports (Yerak and Shumsky 2019), a risk that auditors must now include in the overall risk assessment process. These firms are disclosing concerns that AI applications may increase ethical, information, reputation, financial, decision, execution, regulatory and legal risks to their enterprises.1

 

Not only would this require a technical audit of the application, in which the auditor should be able to understand and explain how the AI arrives at its predictions and outcomes, but it would also require an assessment of an AI system’s completeness, neutrality and freedom from error and bias. How can an auditor keep her “seat at the table” when faced with the challenge of gaining an understanding of this complex technology? 

 

AI applications that use rules-based systems might not pose major challenges to the audit, in that their logic can often be traced and explained, making them closer to Explainable Artificial Intelligence (XAI). But a “black box” type of AI, such as a neural network, whose internal processing is not understandable by humans, would pose a huge challenge for auditors and accountants.2 Because the auditor would not be able to examine and verify the internal decisions made by the AI, including those with ethical implications, an audit of a black box would automatically be considered high-risk. 

 

It should be noted that the academic AI research community has yet to develop an interface that can feasibly explain, as the data is being processed, how a black box AI application arrives at its decisions. 

 

What is AI?

 

Two of the leading minds in the field of AI research, Stuart Russell and Peter Norvig, define AI in terms of “intelligent agents”: devices that perceive their environment and take actions that maximize their chance of successfully realizing their objectives (Russell and Norvig 2010).3

 

AI possesses the potential to take the strength of human knowledge (skills and rules) and apply these insights to gigantic datasets without the human weaknesses of inattention, bias, and fatigue. AI use in many industries has proliferated due to the availability of big data and the power of modern, large-scale computing.

 

Fortunately for the individual auditor, advances in information technology, coupled with an abundance of electronic data, have given rise to renewed interest in the applications of AI in auditing and accounting. There are ample examples of the heightened interest among accounting firms and internal audit departments in harnessing the power of AI. 

 

Accounting firms are heavily investing in the development of AI systems, ranging from automation of processes (e.g. Robotic Process Automation, or RPA), to contract analysis, to image recognition (using drones).4 Deloitte and EY have used Natural Language Processing (NLP) in their tax services to expedite sifting through thousands of legal documents.5 The use of machine learning algorithms to identify outliers and fraudulent records has been among the accounting firms’ favorite AI applications. 
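
As a concrete illustration of that last point, the short sketch below shows how an unsupervised machine learning model can surface unusual journal entries for follow-up. It is a minimal, hypothetical example: the data is simulated and the model choice (scikit-learn’s IsolationForest, in Python) is an assumption made for illustration, not a description of any firm’s actual tooling.

    # Minimal, hypothetical sketch: unsupervised outlier detection on journal entries.
    # The data is simulated and IsolationForest is an illustrative model choice,
    # not a description of any firm's actual AI tooling.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(seed=42)

    # Simulate 1,000 routine journal entries: amount and posting hour.
    amounts = rng.normal(loc=5_000, scale=1_200, size=1_000)
    hours = rng.normal(loc=14, scale=2, size=1_000)            # mostly mid-afternoon
    entries = np.column_stack([amounts, hours])

    # Add a few suspicious entries: large amounts posted in the middle of the night.
    suspicious = np.array([[75_000, 2], [60_000, 1], [90_000, 3]])
    entries = np.vstack([entries, suspicious])

    # Fit an Isolation Forest; records that are easy to "isolate" score as outliers.
    model = IsolationForest(contamination=0.01, random_state=0)
    labels = model.fit_predict(entries)                        # -1 = outlier, 1 = normal

    outliers = entries[labels == -1]
    print(f"Flagged {len(outliers)} entries for auditor follow-up:")
    print(outliers)

The key point for the auditor is that the model is never given a rule such as “flag entries over 50,000 posted after midnight”; it infers what “unusual” looks like from the data itself.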

 

Impacts on Internal Auditors

 

As for internal auditors, the mere fact that the Institute of Internal Auditors (IIA) has published an AI auditing framework is indicative of the increasing proliferation of AI systems that internal auditors must now assess.6

 

To gain a sense of how businesses are testing and implementing AI, Deloitte annually surveys executives knowledgeable about AI across many industries (Deloitte 2018). The Deloitte analysis reveals the following: 

 

  • early adopters are increasing AI initiatives due to high returns on investment
  • companies should improve risk and change management around these AI investments
  • the right mix of talent and technical expertise is required of IT staff to accelerate AI adoption. 

 

Returns on investment are being realized by every industry, though the size of those returns and the associated investment costs vary (Deloitte 2018, p. 6). AI is being used in many industries for:

 

  • enhancing current products and services
  • optimizing internal processes
  • making better decisions
  • optimizing external operations
  • capturing and applying scarce knowledge
  • pursuing new markets (Deloitte 2018). 

 

Use cases reported in the survey include, but are not limited to, IT automation, quality control, cybersecurity, predictive analytics, customer service, risk management, decision support, and forecasting (Deloitte 2018, p. 19). The surveyed executives believe that the cyber, strategic, legal, regulatory, and ethical risks of AI have the potential to materially impact financial statements, both directly and indirectly (Deloitte 2018, pp. 9-13). 

 

This concern about AI and its associated risks is appearing in an increasing number of annual reports (Yerak and Shumsky 2019), growing from zero to 55 disclosures in the past two years. The industries expressing the strongest concerns about AI risks are the technology, media/communication, insurance and finance sectors. In its latest annual report, Microsoft shares these comments about its AI risk:

 

“Issues in the use of artificial intelligence in our offerings may result in reputational harm or liability. We are building AI into many of our offerings and we expect this element of our business to grow. We envision a future in which AI operating in our devices, applications, and the cloud helps our customers be more productive in their work and personal lives. As with many disruptive innovations, AI presents risks and challenges that could affect its adoption, and therefore our business. AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm. Some AI scenarios present ethical issues. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.” (Microsoft 2018, p. 28) 

 

Microsoft’s disclosure is indicative of some of the anxieties that a business may face when incorporating AI technologies. How an AI application can demonstrate or explain its actions depends on the technique used, and this is one reason that Microsoft and other businesses are concerned. Although AI is fundamentally defined as intelligent agents that perceive their environment and take actions that maximize their chances of realizing their goals (Russell and Norvig 2010), in practice it consists of expert and rules-based systems, machine learning and neural networks. These techniques range vastly in their complexity, understandability and explainability. 

 

The most understandable of these are rules-based and expert systems, which are programmed to execute normative behavior, rules and expertise. Machine learning techniques are more complex. Machine learning is a subset of AI that relies upon statistics, patterns and inference to learn from data, as opposed to being given explicit rules, such as those found in expert/rules-based systems. 

 

The most complex form of machine learning is the neural network, a framework that chains together many layers of machine learning in an effort to replicate human thought processes. Neural networks learn from complex data independently of any rules or expertise, in a more exploratory fashion, and their internal workings are not readily understandable by humans. 
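
To make that spectrum concrete, the sketch below places the three families side by side: a hand-written rule, a simple learned model, and a small neural network. It is a hypothetical illustration in Python using scikit-learn; the data, thresholds and model choices are invented for the example and are not drawn from the whitepaper.

    # Hypothetical contrast of the three families described above (Python / scikit-learn).
    # All data, thresholds and model choices are invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    # 1. Rules-based / expert system: the logic is written down and fully reviewable.
    def rule_based_flag(amount: float, has_approval: bool) -> bool:
        """Flag any payment over 10,000 that lacks documented approval."""
        return amount > 10_000 and not has_approval

    # 2. Machine learning: the "rule" is inferred statistically from labeled history.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))                  # e.g. scaled amount, days overdue
    y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)  # past flag/no-flag decisions
    learned = LogisticRegression().fit(X, y)
    print("Learned coefficients:", learned.coef_)  # a few numbers, still interpretable

    # 3. Neural network: layers of learned weights. It may predict well, but its
    #    parameters are just matrices of numbers with no human-readable rule inside.
    net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000,
                        random_state=0).fit(X, y)
    print("Weight matrices:", [w.shape for w in net.coefs_])   # (2, 16), (16, 16), (16, 1)

The rule can be read, re-performed and challenged line by line; the logistic regression at least exposes a handful of coefficients; the neural network’s knowledge is stored only as matrices of weights, which is precisely why it is treated as a black box in the discussion above.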

 

This blog is excerpted from a new CaseWare whitepaper, Keeping Your Seat at the Table in the Age of AI. Download your FREE copy today.

 

 

1 Please see https://www.scu.edu/ethics-in-technology-practice/

2 Please see https://www.darpa.mil/program/explainable-artificial-intelligence 

3 The discussion in the “What is AI?” section is heavily influenced by Chapter One of the seminal work by Russell and Norvig, Artificial Intelligence: A Modern Approach, 3rd Edition, Pearson.

4 https://economia.icaew.com/news/january-2019/pwc-uses-drone-in-audit-for-first-time

5 https://www.forbes.com/sites/adelynzhou/2017/11/14/ey-deloitte-and-pwc-embrace-artificial-intelligence-for-tax-and-accounting/#24f0239f3498

6 https://na.theiia.org/periodicals/Public%20Documents/GPI-Artificial-Intelligence-Part-II.pdf