Work with AI without risking it all

AAT Comment

There are risks to be aware of and guard against as AI evolves rapidly.

Working in partnership with AI represents a potential game changer for accountants. However, along with the exciting possibilities come serious risks, such as job displacement, data breaches and algorithmic bias. Overreliance on AI may also sideline human judgement and creativity. Firms need to address these risks in order to thrive in this evolving landscape.

Manage your AI risk

Our e-learning module covers strategies and tools to address risks from artificial intelligence in accounting and finance.

Find out more

Pushing ahead

In common with other large firms, PwC has piloted AI for the review of journals to look for anomalies, but Marc Bena, Digital Audit Leader at the Big Four firm, is cautious about how generative AI such as ChatGPT will impact accountants. “With any AI, the data behind it determines how good your outcome is going to be.”

Tom Allison, Associate Director at Buzzacott, agrees that AI is here to stay and is already changing the accounting landscape. “We’re excited by a thing called DataSnipper, which is a tool that many others are starting to use, which looks fantastic,” he says.

“If we send out a request for a batch of invoices, DataSnipper can tell whether it matches, and they’ve done all of your substantive tests for you. And that sounds brilliant to me because that’s the most boring part of the work.”

Allison says AI will come into its own by helping to reduce the process-driven, handle-turning tasks – the work that typically goes wrong thanks to human error, such as making sure things add up and cross-reference correctly.

“Removing that means you can spend more time on the harder, judgement things,” he says.
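To make the point concrete, here is a minimal Python sketch of the kind of mechanical cross-referencing Allison describes. It is not DataSnipper’s actual logic: the invoice and ledger figures below are invented purely for illustration, and real tools work on scanned documents and audit files rather than simple dictionaries.

```python
# Hypothetical illustration: match a batch of invoices against ledger entries
# and flag anything that doesn't add up or cross-reference correctly.

invoices = [
    {"ref": "INV-001", "amount": 1200.00},
    {"ref": "INV-002", "amount": 850.50},
    {"ref": "INV-003", "amount": 430.00},
]

ledger = {
    "INV-001": 1200.00,
    "INV-002": 855.50,  # deliberate mismatch to show an exception
}

exceptions = []
for inv in invoices:
    booked = ledger.get(inv["ref"])
    if booked is None:
        exceptions.append((inv["ref"], "missing from ledger"))
    elif abs(booked - inv["amount"]) > 0.01:
        exceptions.append((inv["ref"], f"amount mismatch: {inv['amount']} vs {booked}"))

print(f"Invoices checked: {len(invoices)}, exceptions: {len(exceptions)}")
for ref, reason in exceptions:
    print(f"  {ref}: {reason}")
```

The value is not in the code itself but in removing the repetitive checking, leaving the flagged exceptions for a human to judge.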

AI here is not making judgements; it’s purely helping to improve the quality of how we read documents.

Risky business

Managing the risks of AI use will become a core skill for future accountants. Recent research from tech consultancy Cyberhaven has highlighted the increasing trend of employees inputting confidential company data into the generative AI tool ChatGPT. It shows that as more workplaces begin to use ChatGPT for routine tasks, the risk of sensitive data being shared is growing too.

Unsurprisingly, the problem is growing at a similar rate to ChatGPT’s penetration of working life. Since ChatGPT launched publicly, Cyberhaven says, 10.8% of knowledge workers have tried using it at least once in the workplace. And as the percentage of employees using ChatGPT at work has grown from 5.5% in February to almost 11% in June, the proportion pasting company data into the tool has risen from 4.2% to 8.6%.

That has already been reflected in the real world, with the news that Samsung discovered employees putting confidential data into ChatGPT, effectively handing it straight to its developer, OpenAI.

This apparently included using ChatGPT to debug source code, as well as feeding it “transcripts of internal meetings to summarise them”. Not surprisingly, Samsung banned employees from using ChatGPT shortly thereafter. However, it may be tempted to relax its stance now that OpenAI has launched ChatGPT Enterprise, a version meant for professional settings that counts PwC among its customers and boasts far greater security than its mainstream cousin.

With any AI, the data behind it determines how good your outcome is going to be.

Marc Bena, Digital Audit Leader, PwC

Verifying output

PwC’s Bena is clear that, if anything, as AI develops, accountants will be called on to verify and build trust even more, particularly to provide assurance of its output and ensure its veracity.

“So the way we look at it as a firm is to ask, ‘What’s it going to do?’ Yes, it’s going to transform the way we deliver financial audit; there’s absolutely no doubt about that at all.

“But I think it will also open up opportunities to do a wider scope of audit, whether it’s consultations, cyber, ESG – all the things that really matter to customers.

“Ultimately, their question will be, ‘Can I trust that ChatGPT is telling me all this stuff? Is it sound, or do I need somebody to give me confidence that what’s going out is actually complete, accurate and valid?’. That’s why accountants will still be required.”

Different platforms

VIC.AI
What it does: Automates invoice processing, using its algorithms to process invoices and expenses and notifying approvers when items meet a certain confidence threshold.
Who it might help: Accountants with large volumes of transactions in need of automation.
The risks: As AI learns and spots patterns, it will inevitably adopt a subjective approach, so accountants must remain vigilant around inputs.

Docyt
What it does: Searches through reams of data and creates workflows based on the content it finds.
Who it might help: Accountants engaged with multiple clients and projects that require clarity over resourcing and scheduling.
The risks: AI can’t fix broken processes, but it can improve slow or inefficient ones, so ensuring that workflows are already coherent will avoid exacerbating existing issues.

Otter.ai
What it does: Acts as a meeting assistant, recording audio, syncing calendars, capturing slides and summarising discussions.
Who it might help: Anyone in a collaborative role aiming to work with functions outside finance.
The risks: Anything that promises to faithfully record all aspects of human interaction will inevitably create a risk of error, so once again sense-checking remains critical.

ChatGPT
What it does: The best known of the new generation of generative AI tools, ChatGPT is a chatbot that takes instructions from you and provides a detailed response.
Who it might help: Accountants engaged in writing reports, proposals or correspondence.
The risks: In some cases, AI can misinterpret information or rely on insufficient or outdated sources.

Google Cloud
What it does: Most AML software has some level of AI-driven transaction monitoring. Google’s new tool claims to remove the need for human intervention, replacing it with a stronger and more intuitive algorithm.
Who it might help: Any organisation with AML and KYC exposure.
The risks: All algorithms rely on high-quality data to make accurate predictions. However, financial institutions may struggle with data quality issues, which may lead to false positives or false negatives.
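As a loose illustration of the confidence-threshold routing described in the VIC.AI entry above, the short Python sketch below posts high-confidence items automatically and sends everything else to a human approver. The threshold and scores are assumptions for illustration, not details of the actual product.

```python
# Hypothetical confidence-threshold routing for processed invoices.
# Real platforms derive confidence scores from their own models.

CONFIDENCE_THRESHOLD = 0.90  # assumed cut-off for automatic posting

processed = [
    {"invoice": "INV-101", "confidence": 0.97},
    {"invoice": "INV-102", "confidence": 0.74},
]

for item in processed:
    if item["confidence"] >= CONFIDENCE_THRESHOLD:
        print(f"{item['invoice']}: auto-approved (confidence {item['confidence']:.2f})")
    else:
        print(f"{item['invoice']}: routed to a human approver for review")
```

The design choice is the same one the table flags as a risk: wherever the threshold is set, accountants still need to stay vigilant about the inputs the model learns from.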


AAT Comment offers news and opinion on the world of business and finance from the Association of Accounting Technicians.
