Finding an ethical balance in AI

As AI develops, so do the ethical considerations that go with its use. We examine some of the questions accountants will face.

Concerns about the rapid growth of AI prompted thousands of tech leaders, including X (formerly Twitter) owner Elon Musk and Apple co-founder Steve Wozniak, to sign an open letter calling for a pause in the training of powerful AI systems, citing fears of a threat to humanity.

While the increasing adoption of generative AI tools by accountants may not herald the onset of AI Armageddon, their use does pose several commercial and ethical risks that practitioners will need to grapple with.

“Technology – especially AI – presents us with new ethical issues. Or perhaps the same fundamental issues we have always faced in new packaging,” says former ICAEW president Paul Aplin, previously a tax partner at AC Mole with 40 years’ experience.

For instance, if generative AI gives you an unexpected – and convincingly justified – answer when advising a client, how far should you question whether it is ethical to accept that answer?

“I think the answer is as it always was: I ask myself ‘do I honestly believe that this is the right answer?’ If I do, then I can proceed, if I don’t then I either ask for a second opinion from someone I trust or I simply don’t proceed,” Aplin said.

“Over the years I’ve found that the ‘does it feel right?’ test is pretty reliable.”

Privacy and data security

Accountants have been finding innovative ways to use systems such as ChatGPT (a chatbot built on a large language model, which interprets questions and generates coherent, human-like answers), but the technology raises ethical concerns about security and accuracy.

One of the biggest challenges is the rights of content owners: AI systems may collect and process large amounts of personal data, raising concerns about privacy, the security of that data and the legality of its use.

There are also concerns over employees uploading sensitive data. Several major companies have recently cracked down on the workplace use of AI services amid concerns that sensitive internal information is being leaked to such platforms.

Bloomberg reported in May that Samsung Electronics banned the use of ChatGPT and other AI-powered chatbots after employees inadvertently revealed sensitive information to the chatbot.

Amazon issued a similar warning to its workers in January, while JPMorgan Chase also restricted the use of ChatGPT amid concerns that it may face potential regulatory risks surrounding the sharing of sensitive financial information.

“Data protection is critical,” Rob Hackney, tax manager at DSG Chartered Accountants, said. “There are fewer limitations on what we can do with company data but with individuals you need to make sure you have proper consent for any processing done on it.”
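
What does that mean in practice? One common-sense precaution is to strip obvious identifiers from any text before it reaches an external AI service. The Python sketch below illustrates the idea; the patterns, placeholder labels and example query are all hypothetical, it catches only two kinds of identifier, and it is no substitute for proper consent and a full data-protection review.

    import re

    # Hypothetical patterns for two common identifiers. A real system would
    # need far broader coverage: names, addresses, account numbers and so on.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "UK_NINO": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),  # National Insurance number
    }

    def redact(text: str) -> str:
        """Replace recognisable identifiers with placeholders before the
        text is sent to any external AI service."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    query = "Client (j.smith@example.com, NI no. AB123456C) asks about CGT."
    print(redact(query))
    # Client ([EMAIL REDACTED], NI no. [UK_NINO REDACTED]) asks about CGT.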

Hallucinations and bias

There are also major concerns around ‘hallucinations’: confidently stated untruths that can go undetected until it is too late.

“The problem with generative AI is that the answers are so articulate and confident,” Aaron Harris, Sage’s chief technology officer, said. “We have to be absolutely certain that we are not using generative AI in a way that hallucinations can cause harm.”
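
One practical guardrail is to recompute any figure a model asserts from the underlying records before anyone relies on it. The Python sketch below illustrates the idea; the ledger entries, the claimed total and the tolerance are all made up.

    # Guardrail sketch with made-up figures: recompute the model's claim
    # from the source records and refuse to proceed on a mismatch.
    ledger = [1200.00, -350.50, 4100.25]   # hypothetical source records
    ai_claimed_total = 4949.75             # figure quoted by the model

    def verify_total(records, claimed, tolerance=0.01):
        actual = sum(records)
        if abs(actual - claimed) > tolerance:
            raise ValueError(f"Possible hallucination: model said {claimed}, "
                             f"records give {actual:.2f}")
        return actual

    verify_total(ledger, ai_claimed_total)  # passes; a wrong figure would raise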

AI algorithms can also inadvertently reflect biases in the data they were trained on, leading to discriminatory outcomes or unfair treatment. 
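
A crude but useful first check is to compare outcome rates across groups. The sketch below does this on synthetic data; a large gap in approval rates is a prompt to investigate rather than proof of bias, but even this simple measure can surface problems early.

    # Synthetic lending decisions as (group, approved) pairs.
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    def approval_rates(rows):
        """Approval rate per group: approvals divided by total decisions."""
        totals, approved = {}, {}
        for group, ok in rows:
            totals[group] = totals.get(group, 0) + 1
            approved[group] = approved.get(group, 0) + int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    rates = approval_rates(decisions)
    print(rates)                                              # {'group_a': 0.75, 'group_b': 0.25}
    print("gap:", max(rates.values()) - min(rates.values()))  # gap: 0.5, worth investigating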

Generative AI adds a further dimension to known AI threats, around accuracy and potential misinformation, making the governance and accountability of model deployment a vital business focus, according to Hassane Ferdaous, digital audit partner at PwC.

He added: “When you have interactions with customers, and you build in a solution that is AI-driven, you want to disclose to your client that the answers and services have been generated by AI.”

Transparency

Vsu Subramanian, who heads up Avalara’s AI initiatives, also believes that transparency is important when AI is used.

For instance, he suggested that if accountants use AI internally, they should annotate their information to say it was generated by AI.

“You still need to have those policies, people need to know where the information has originated,” Subramanian said.

“Now that it is possible that some of it was generated by a generative AI system giving accountants and their clients new capabilities to scale, a layer of human validation is necessary to review and verify what has been generated.”
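
In a practice’s own systems, that annotation could be as simple as a provenance flag and a sign-off field on each piece of work. The sketch below shows one hypothetical shape for it; the class and field names are illustrative, not any standard.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class WorkNote:
        text: str
        ai_generated: bool                  # provenance flag
        reviewed_by: Optional[str] = None   # the human validation layer
        created: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

        def approve(self, reviewer: str) -> None:
            self.reviewed_by = reviewer

    draft = WorkNote("Draft VAT treatment summary...", ai_generated=True)
    draft.approve("J. Bloggs")  # nothing AI-generated goes out unreviewed
    assert not draft.ai_generated or draft.reviewed_by is not None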

According to Stuart Miller, head of product compliance and industry engagement at Xero, this could extend to disclosures included in engagement letters.

“The use of AI should form part of the standard engagement letter and therefore becomes a task within the practices themselves to make sure their clients are aware they are using these tools,” Miller said.

Some AI systems, especially deep learning models, are often considered black boxes, making it difficult to understand the rationale behind their decisions.
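
There are, however, ways to probe a model from the outside. One of the simplest is permutation importance: shuffle one input feature and see how far the model’s accuracy falls. The toy Python sketch below uses a stand-in function as the ‘model’; everything in it is illustrative.

    import random

    def black_box(row):                     # stand-in for an opaque model
        return row[0] * 2 + row[1] > 1

    data = [[0.2, 0.9], [0.8, 0.1], [0.5, 0.5], [0.1, 0.2]]
    labels = [black_box(r) for r in data]   # baseline predictions

    def accuracy(rows):
        return sum(black_box(r) == y for r, y in zip(rows, labels)) / len(rows)

    random.seed(0)
    for i in range(2):                      # shuffle each feature in turn
        shuffled = [r[:] for r in data]
        column = [r[i] for r in shuffled]
        random.shuffle(column)
        for r, v in zip(shuffled, column):
            r[i] = v
        # the bigger the accuracy drop, the more the model leans on feature i
        print(f"feature {i}: accuracy after shuffle = {accuracy(shuffled):.2f}")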

Accountability

“If you allow an all-knowing, very broad algorithm to maximise profit, it is frightening to consider the actions it may take to achieve that objective, including questionable ethical and environmental decisions,” Mark McDonald, senior director at technology research and advisory firm Gartner, said.

Consequently, it is critical that accountants retain accountability for their decisions.

Professional services firms will need to curate what they use, both public data and their own internal models, and keep pruning and checking that the output is correct and accurate, according to James Osborn, chief digital officer at KPMG UK.

“That accountability as an individual needs to be retained,” he added, noting that “it is simply a decision-making enhancer; it doesn’t reproduce you as a human”.

The dilemma of AI supervision

AI is volatile and complex. Safeguards are needed now, but developing the right approach is not easy. Here’s a comparison of the UK’s approach vs. that of the EU.

UK approach

The UK government will prioritise flexibility and make use of existing regulators.

No/low legislation

Establish a framework of principles and sector-specific regulations instead of umbrella legislation.

Use existing regulators

The likes of the ICO, the CMA, the FCA and Ofcom will provide non-statutory guidance and tools.

Context-specific

The overall framework will define AI, but regulation will be context-specific.

Pros

The approach will be agile and adaptable, allowing for quick adjustments to keep up with the evolving AI landscape.

Cons

Regulations may not be implemented in a cohesive manner, risking a piecemeal approach. This could lead to enforcement problems.

EU approach

The EU approach will be risk-based and legislative, addressing a hierarchy of possible threats.

Unacceptable risk

Applications that could harm EU citizens’ health, safety, or fundamental rights will be banned. Example: social scoring.

High risk

AI that could cause harm will be comprehensively regulated. Examples: educational admissions or law enforcement.

Limited risk

Other AI applications will receive a lighter touch, such as voluntary codes of conduct and labelling mechanisms.

Pros

Will cover a wide range of AI risks, set high standards and create harmonised rules for the EU.

Cons

The legislative process may be too cumbersome compared with the speed of AI’s development. Risk of inconsistent implementation by EU states.

Key ethical threats

Data privacy

With excessive data collection and management, AI could inadvertently breach privacy protocols, leading to unnecessary exposure of sensitive information.

Accountability

If AI systems make mistakes or cause harm, determining who is responsible can be difficult. This presents a unique conundrum when it comes to liability.

Bias and fairness

AI algorithms can inherit and amplify societal biases present in the training data. For example, in credit scoring, an AI system can be biased against a particular group, leading to unfair outcomes.

Transparency

AI often operates as a “black box,” making it hard to understand how it arrived at a particular decision. This could potentially undermine trust in accounting services and output.

Dependence

Over-reliance on AI could lead to the loss of critical thinking skills that human accountants traditionally offer. For instance, while an AI system can analyse data faster, it might lack the ability to understand the context or nuances behind the numbers.

Regulatory challenges

AI can disrupt regulatory practices in accounting, challenging existing oversight and control mechanisms. AI may not fit into existing regulatory frameworks, requiring new laws and policies.
