AI chatbot invented legal cases in taxpayer’s failed appeal against HMRC

A litigant in person has been criticised by a judge for relying on an AI chatbot that fabricated three legal precedents in a tax dispute.
Marc Gunnarsson was contesting an attempt by HM Revenue and Customs (HMRC) to reclaim £12,918 in coronavirus self-employment support payments he had received. Representing himself at the Upper Tribunal, he used AI to help draft his submissions, but the software “hallucinated” fictitious tribunal decisions.
HMRC spotted the fabricated cases in his skeleton argument, which was filed the day before the hearing.
Judge Rupert Jones said: “The accuracy of AI should not be relied upon without checking, particularly when it comes to statements or arguments that it makes concerning the law. There is a danger that unarguable submissions or inaccurate or even fictitious information or references may be generated.”
The judge added that Mr Gunnarsson was not “highly culpable” because he was untrained in law and “may not have understood that the information and submissions presented were not simply unreliable but fictitious”. He warned, however, that “in the appropriate case, the Upper Tribunal may take such matters very seriously.”
Mr Gunnarsson argued he had believed himself eligible for the Self-Employment Income Support Scheme, even though he was receiving employment income as a company director. The First-tier Tribunal initially upheld his claim, but HMRC successfully appealed, with the Upper Tribunal ruling the payments recoverable.
The case comes amid growing concern over the use of AI in legal proceedings. Earlier this year, junior barrister Sarah Forey was accused of citing fictitious cases while representing a homeless man at the High Court. The judge said it was “improper”, “unreasonable” and “negligent” to present fabricated cases, ordering her and the instructing solicitors to pay wasted costs.
In a separate case, Bodrul Zzaman v The Commissioners for HMRC, a father appealing a £2,500 child benefit charge used AI to build his defence. The tribunal dismissed his arguments as irrelevant, with the judge noting the case “highlights the dangers of reliance on AI tools without human checks to confirm assertions the tool is generating are accurate”.