Letter: AI will not save you

Dear Editor,

I confess that it was with considerable dismay that I read Dr Corsino San Miguel’s article in yesterday’s Scottish Legal News, “The last candle in Old College”, and I believe that it ought not to go unanswered. In my view, Dr San Miguel’s unmistakable enthusiasm for artificial intelligence has caused him to overlook very real risks.

There is a great deal to be said about generative AI in legal practice. However, as Dr San Miguel has done, I will largely confine my remarks to legal education. Nonetheless, as today’s law students are tomorrow’s practitioners, it will be necessary to make some comment on the connection between legal education and legal practice. I wish to make my position clear from the outset, however. I, along with many others in legal education, have yet to be convinced that generative AI has any legitimate place in our law schools.

It is not possible to discuss generative AI in a meaningful way without first understanding what it is. This raises an immediate issue, which is that the term “artificial intelligence” is itself a misnomer. An AI tool is not intelligent in any meaningful sense. It knows nothing. It understands nothing. It does not think. Instead, the large language model that lies at the heart of a generative AI tool is, as it has been defined in a popular science magazine, “a statistical model, or a mathematical representation of data, that is designed to make predictions about which words are likely to appear together” (E Gent, “How does ChatGPT work and do AI-powered chatbots ‘think’ like us?”, New Scientist, 25 July 2023).

In other words, a generative AI tool is not programmed to find and deliver an answer to the question posed to it. Instead, it is programmed to deliver something that looks like an answer to the question posed. It predicts what words are most likely to be found together. We have all, I am sure, come across the concept of hallucinations – cases where the AI has invented facts. We should not think of these as cases where something has gone wrong. Instead, they are cases where the AI has done exactly what it has been programmed to do. Truth is entirely incidental to the way that generative AI tools work.

Many will assume that hallucinations will become less common as AI models improve, but that may be naïve. There is at least some evidence that the opposite is the case (J Hsu, “AI hallucinations are getting worse – and they’re here to stay”, New Scientist, 9 May 2025).

All of this creates a problem, and not just for legal education. There is no shortage of cases of practitioners incurring the courts’ wrath by citing non-existent authorities, hallucinated by some AI tool. For legal educators, the concern is this: if ostensibly skilled and experienced practitioners can be so easily led astray by AI hallucinations, what hope do students have?

There is, however, an even more fundamental problem with the use of generative AI in legal education. It is hardly controversial to say that legal education ought to develop in students the ability to find and understand legal materials. Naturally, what this looks like will change over time. For example, we no longer require students to master the use of the legal citators that were in use when I was an undergraduate. Still, the fundamental aim remains the same. What use to a client is a lawyer who cannot interpret a case or understand a statute?

What kind of lawyer cannot engage reflectively with the literature of his or her profession? Where are students to learn those skills, if generative AI is always on hand to pre-digest the materials for them? That is a serious difficulty, even if the accuracy of generative AI could be relied upon (which, as already discussed, it cannot). If AI is to become more common in the profession, as seems likely, that is not an argument for students needing to know less. Rather, it is an argument for them to know more, and to do more without AI, so that they have some hope of assessing what AI tools deliver once they emerge into practice.

These risks are real and they are serious. If generative AI tools cannot be eliminated – and I do not suggest that they can be – then we must at least not be naïve about them. In light of that, it is disappointing that Dr San Miguel speaks only in the most general terms and says nothing about how the risks are to be managed.

It is imperative that we get this right. Nothing less is at stake than the idea of the law as a learned profession. I have no desire to see lawyers reduced to mere crafters of prompts for generative AI, unable even to engage critically with whatever the AI produces. If we get this wrong, that is surely the result. We hear quite enough cheerleading about the benefits of AI. Let us talk more soberly about how we address the risks it poses.

I write, of course, in a personal capacity.

Dr Craig Anderson
Senior lecturer in law & LLB programme director
University of Stirling
