By Caroline Roberts
Artificial intelligence
How are you managing the risks of using artificial intelligence?
5 May 2026

Data leaks, false information and client reliance: AI's benefits come with downsides. What does that look like in practice?

Accountants and bookkeepers are increasingly embracing AI. A 2025 survey of UK practitioners by professional services firm Wolters Kluwer found that 91% were either using some form of the technology or planning to do so in the near future.

Much accounting software now incorporates AI features, and the technology is being used for a wide variety of purposes, including:

- automating routine tasks, such as processing invoices and reconciling transactions
- drafting reports and customer communications
- summarising long documents
- creating marketing content, such as newsletters and social media posts
- shaping practice strategy, through forecasting or by spotting trends that might present new business opportunities.

But this fast-moving and exciting technology is not without its risks. AI is only as good as the material used to train it, so it may produce biased information. When it lacks the data to come up with an answer, it may 'hallucinate', producing outputs that sound plausible but are incorrect. Entering sensitive information into AI tools can also result in leaks and breaches of data security. The Professional Conduct in Relation to Taxation bodies have published guidance on how to use AI in tax.

We asked accountants how they're managing the risks that come with AI in their practice.

We ensure there's always a human in the loop
Bev Flanagan MAAT, owner, Bev Flanagan Financial Ltd

The biggest risk is people assuming that because something sounds polished and confident, it must be correct. AI can produce answers that look convincing but are factually wrong, incomplete, or lacking context.
In accountancy, that's obviously a major concern. Another risk is that people may stop thinking critically if they become too dependent on it.

The key is to treat AI as a starting point: anything technical, compliance- or tax-related, or client-specific must still be reviewed by a qualified human before being used. I'm also very mindful about what data is entered into AI tools and would never input confidential client information without being sure it is secure and appropriate to do so. AI can absolutely support the work, but professional oversight still matters hugely.

Longer term, it is about staying educated, setting clear rules around usage, and making sure AI is built into the practice in a sensible way. That means reviewing which tools are actually useful, keeping data security front of mind, training the team, and regularly checking where AI adds value and where it does not. I think practices need to embrace AI because it's not going away, but they also need to do so responsibly. AI can be a brilliant support to a modern practice, but good judgement, ethics and human oversight will remain essential.

We have an important role advising clients on how to use AI safely
Craig Dyer MAAT AATQB, owner, C A Dyer Accounts & Bookkeeping

Clients using AI for bookkeeping is a concern. If AI is used well and set up correctly in bookkeeping software, there is no reason for it to fail. But without a clear understanding of what the system needs, it could lead to surprise bills for clients and a lot more work to undo inaccurate bookkeeping. This is where the shift from bookkeeping to advising will become important for licensed AAT members.

Overall, we've taken a cautious and controlled approach by restricting how and where AI is used within the practice.
We're providing opportunities for staff to undertake AI training and become more secure and confident in using it, and we plan to work with a local independent AI software engineer to help develop our internal systems and processes.

Going forward, we expect to evolve in line with the wider industry, and will continue to invest in staff training to ensure our team remains up to date with best practice and emerging technologies. The accounting industry will need to adapt to AI, just as it did when spreadsheets and bookkeeping software became more prevalent.

We're very mindful of how AI could compromise data protection
Stephanie Marshall FICB PM.Dip, founder and CEO, A B C S UK

There's a risk around GDPR and data protection if tools aren't used properly. In an industry where we hold so much financial and personal data, it's really important that we know we are looking after it properly. We avoid inputting sensitive client data into unsecured tools and make sure we are clear on where AI fits and where it doesn't.

Our approach is to treat AI as a junior colleague rather than an expert. Everything is reviewed by a qualified professional before it goes anywhere near a client. Overall, the benefits do outweigh the risks, but only when AI is used properly. It can save time, improve consistency and allow accountants and bookkeepers to focus more on advisory work, which is where we can add the most value.

Longer term, I think the industry needs to continue building clear internal processes around AI use, at both practice and professional body level, to ensure client data is secured properly. We will continue to keep a strong human layer over everything.

We're focusing on training staff in the appropriate use of AI
Alex Brearley, Director, Brearley & Co Accountants

We are currently using AI to support research and provide a foundation for client work and reports, particularly in taxation.
It is also being used to transcribe and summarise meetings, and to improve the structure and clarity of communication from staff to clients. Alongside this, we are exploring how AI can be used safely to streamline internal processes.

From a research perspective, AI can produce misleading or inaccurate information, particularly if prompts are not carefully constructed. We've already seen instances where even basic, readily available information has been incorrect, meaning all outputs must be carefully reviewed. Staff are restricted to using specialist reference providers such as Croner and Tolley, whose AI tools draw only from their own verified databases. This helps minimise inaccuracies and ensures that outputs are supported by traceable source material. We are also investing in staff training focused on AI best practice.

To mitigate data security risks, only Microsoft Copilot is authorised for use, as it includes built-in protections to prevent sensitive data from being shared with open AI platforms. Staff training and company-wide policies governing the use of AI are key to using it safely.

Caroline Roberts is a journalist.