AI versus the personal tax adviser
AI technology is impressive, and if used in the right way it can excel, freeing up time for professionals to focus on more strategic tasks. But how does it fare as a replacement for a personal tax adviser?
Artificial intelligence (AI) has rapidly become a cornerstone of modern innovation. From composing music to drafting legal documents, large language models (LLMs) like ChatGPT and Copilot now offer fast, intelligent responses to a wide range of questions, including those related to tax. The temptation is obvious: why pay for a professional when you can get a quick, seemingly authoritative answer for free?
The reality is that this convenience comes with serious risks. In private client tax planning, clients may have clear goals and trust that efficient technology will help them make good progress. But if no one is checking for unexpected obstacles or challenging assumptions, there’s a risk of amplifying small problems into much bigger ones.
Hallucinations: Confidently wrong
LLMs are trained to predict the most likely next word or phrase based on vast datasets. They do not know facts in the human sense. When faced with a question outside their training data or with ambiguous input, they can “hallucinate” and generate plausible-sounding but entirely false information. OpenAI, the creator of ChatGPT, has admitted that such hallucinations are mathematically inevitable, not just engineering flaws.
In tax, where a single wrong answer can mean penalties, increased fees or even prosecution, this is a potential recipe for disaster. There are already documented cases of AI-generated tax advice being not just unhelpful, but dangerously wrong. In one report, AI-powered tax chatbots from major providers gave incorrect answers to complex questions up to half of the time. The errors ranged from outdated regulations to fabrications, mistakes that could cost users dearly.
The risks are exacerbated by the reliance of general-purpose AI on unverified public sources. In contrast, professional firms such as S&W use ring-fenced or internal analysis frameworks allowing AI to operate within a curated ecosystem of approved, signed-off insights. This means the outputs are grounded in expert-reviewed content, reducing the likelihood of error and enhancing confidence in the results.
Rather than replacing human judgement, AI in this context can augment professional expertise, making it poles apart from the approach taken by many private clients using general-purpose LLMs. You can see how S&W has embraced AI in its digital compliance programme.
The echo chamber and sycophancy of an AI personal tax adviser
Another subtler danger is the echo chamber effect. LLMs are designed to be helpful and agreeable. Their responses are shaped by reinforcement learning with human feedback, which often rewards answers that make users feel good or heard. This can lead to a kind of digital sycophancy, where the AI flatters or affirms the user’s ideas, regardless of their accuracy.
The case of Alan Brooks, a Canadian business owner, is a cautionary tale. Brooks spent weeks in conversation with ChatGPT, which, through endless praise and encouragement, convinced him he had discovered a world-changing mathematical theory. The AI never challenged his ideas. Instead, it reinforced his delusions, leading him down a rabbit hole that ultimately harmed his mental health and personal life. Only when another AI contradicted ChatGPT did Brooks realise he had been misled.
This tendency to affirm, rather than challenge, is especially dangerous in tax, where the right answer often depends on asking the right questions and challenging assumptions. An AI that simply echoes beliefs, or fails to probe for missing information, can lead users astray.
To address these issues in-house, S&W takes an approach that prioritises contextual awareness and expert oversight:
Our AI tools are designed to interrogate assumptions, flag inconsistencies and guide our users toward more informed decisions. This is all within a framework of validated data and professional governance which looks very different from the output of many of the publicly available AI options.
Garbage in, garbage out
AI models can only work with the information they are given. If you don’t provide the right prompt, or don’t know which details are relevant, the AI cannot ask clarifying questions the way a human professional would. It may confidently generate an answer based on incomplete or misunderstood facts, and the error may not come to light until it’s too late.
A case in point: a non-UK domiciled client had been living in the UK prior to 5 April 2025. We had discussed the potential changes on the horizon, but shortly before we were due to meet to review his position, he cancelled our meeting, having sought advice elsewhere and feeling confident in his decision. When we next spoke, he explained that, having understood that as a long-term resident (LTR) his worldwide estate would be within the scope of UK IHT, he had decided to leave the UK and resettle elsewhere before the changes took effect.
He wasn’t happy about it, but believed it was his only option based on the advice he’d received. When asked who the adviser was, he revealed it was a widely used AI chat platform. What he hadn’t mentioned to the ‘adviser’, and what it never prompted him to disclose, was that as an Indian-domiciled individual, he actually benefitted from the protections of an estate duty treaty. This allowed LTRs in certain situations to remain exempt from UK IHT on non-UK assets.
The benefits and the future
None of this is to say that AI has no place in tax: quite the opposite.
LLMs can be powerful tools for research, summarising regulations, automating routine tasks or enhancing productivity. Our digital team is helping many of our clients deploy these tools in ways that are safe, efficient and aligned with professional standards. Specialised, domain-trained models with human oversight are already showing promise in reducing errors and improving productivity.
For now, however, publicly available online LLM tools should not be seen as a substitute for professional judgement. Tax law is notoriously complex and context dependent. Human advisers are trained to spot red flags, ask probing questions and interpret nuance.
AI, when used within a framework of governance, validation and professional oversight, can be a powerful ally. Without such checks and balances, critical judgement will be missing, and the tool will not reliably identify when it is out of its depth. While AI tools like Copilot and ChatGPT can offer a helpful starting point, individuals seeking tax advice should never treat them as a final answer. Decisions with financial or legal consequences demand the insight of a qualified professional.
The future of tax advice will not, however, be about choosing between humans and machines. Instead, it will be about relying on professionals who have invested in the technology and are trained to ask the right questions, cross-reference the answers and interrogate the systems, combining all of this with the, as yet, unmatched ability of human advisers to sign off on the advice.
Please note: This article has been written by humans and not AI.
Tax legislation is that prevailing at the time, is subject to change without notice and depends on individual circumstances. You should always seek appropriate tax advice before making decisions. HMRC Tax Year 2025/26.
By necessity, this briefing can only provide a short overview and it is essential to seek professional advice before applying the contents of this article. This briefing does not constitute advice nor a recommendation relating to the acquisition or disposal of investments. No responsibility can be taken for any loss arising from action taken or refrained from on the basis of this publication. Details correct at time of writing.
Approval code: NTEH7102551