Corsino San Miguel: Getty Images v Stability AI – when copyright meets the cloud

Dr Corsino San Miguel

Dr Corsino San Miguel delves into a landmark court ruling on generative AI and copyright law.

The Getty Images v Stability AI judgment, handed down on 4 November 2025, will be remembered not for what it decided, but for what it exposed. It marked the first full test of how the Copyright, Designs and Patents Act 1988 (CDPA) confronts machine learning – and revealed, perhaps inevitably, that the statute is straining under the weight of technology it never imagined.

At its simplest, the case asked whether the training and deployment of the generative-image model Stable Diffusion breached UK copyright and trade-mark law. Getty alleged that Stability AI had used millions of its licensed photographs, some of them bearing the familiar Getty watermark, to train the model without permission. What followed was a forensic dissection of law written for presses and servers in a world now run on cloud architecture.

From training to territory

Getty’s original claim was ambitious: it alleged primary and secondary copyright infringement, database-right infringement, trade-mark infringement and passing off. Yet as the evidence unfolded, much of that edifice fell away. Disclosure showed that the training of Stable Diffusion took place on infrastructure outside the United Kingdom. Under the orthodox territorial principle of copyright, infringing acts must occur within the jurisdiction. No witness or record tied the alleged copying to a UK location. By mid-trial, Getty withdrew its primary claims, leaving the court to consider secondary infringement and trade marks alone.

That shift matters. The CDPA was drafted for acts that happen somewhere – an identifiable act of copying, communication or distribution. Distributed computing and global data sets dissolve that geography. Where the Statute of Anne (1710) once drew a line around London’s printing presses, the CDPA now finds its lines running through data centres in Dublin, Virginia and Oregon.

The secondary infringement question

To succeed with its secondary copyright infringement claims, Getty needed to persuade the High Court of three things: first, that Stable Diffusion had been imported into the UK or otherwise possessed, sold, hired, or offered for sale by Stability AI; second, that the model qualified as both an “article” and an “infringing copy” within the meaning of the CDPA; and third, that Stability AI knew or had reason to believe it was dealing in such an infringing copy.

Mrs Justice Joanna Smith accepted only part of that construction. She agreed that an “article” under sections 22 and 23 need not be tangible – electronic copies stored in intangible form could, in principle, qualify. But she rejected Getty’s broader submission that the model weights themselves were “infringing copies” under section 27(3). That provision, she reasoned, requires an article to have at some point consisted of, contained, or stored a copy of a protected work. The Stable Diffusion weights, by contrast, did not hold or reproduce Getty’s images; they merely encoded statistical correlations derived from them.

In other words, what the model learned from the data did not amount to what the law recognises as a copy of it. The distinction is fundamental: under the CDPA, infringement arises from the act of reproduction, not from the extraction of relationships or patterns. The judgment thus reaffirmed a cornerstone of UK copyright law – that the act of copying, not the capacity or consequence of learning, defines infringement.

Trade marks and the ghost of the watermark

Getty also pursued trade-mark infringement under sections 10(1), 10(2) and 10(3) of the Trade Marks Act 1994, relying on examples of synthetic watermarks generated during litigation tests. The evidence, however, was sparse. For newer versions of Stable Diffusion (XL and 1.6), no real-world examples were found. The court dismissed those claims entirely. For versions 1 and 2, it accepted that at least one synthetic watermark resembling the Getty or iStock mark might have appeared in the UK. Mrs Justice Smith identified only three instances of infringement, describing the findings as “extremely limited”. The claim under section 10(3) – alleging reputational harm – failed for lack of evidence.

The message was unmistakable: accidental watermark artefacts are a design issue, not a new frontier of trade-mark law. The judgment treated them as a product-safety matter for developers, not a licence to reinvent intellectual-property doctrine.

Judicial restraint, legislative gap

For some practitioners, the outcome felt restrained – a judgment that preserved doctrinal integrity while leaving the larger policy questions to Parliament. Constitutionally, that restraint was deliberate. The High Court refused to legislate by analogy, reaffirming that copyright’s reach is for Parliament to define, not for the judiciary to extend. In doing so, it quietly echoed Donaldson v Beckett – the eighteenth-century case born of a Scottish printer’s defiance – reminding us that the protection of creativity is a matter of policy, not of moral entitlement.

Yet the price of restraint is uncertainty. The CDPA was never designed for systems that learn rather than copy, and rights-holders are left with little recourse when training occurs abroad. The judgment “hands the quill back to Westminster.” The next move belongs to Parliament – not only to clarify the limits of copyright, but to confront a deeper challenge: how the law should protect the strategic knowledge that underpins creative and professional judgement itself. As I argued in a previous article, the real risk lies not only in the ownership of data, but in the gradual outsourcing of human reasoning to the systems trained upon it.

Policy pressure and international context

Attention has now turned to government. The UK’s preferred reform option, outlined in its AI and Copyright Consultation, would ease access to copyrighted works for model training – a stance that has met strong opposition from creative industries and peers in the House of Lords. Ministers resisted efforts to insert new AI-related protections into the Data (Use and Access) Bill but conceded to accelerate the timetable for review. Working groups are now examining transparency, rights control and licensing mechanisms, though progress is expected only towards the end of the year.

Complicating matters further are Britain’s international commitments – from the UK–US MoU on the Technology Prosperity Deal to the G7’s shared principles on trustworthy AI. These frameworks bind the UK to a pro-innovation stance that inevitably influences domestic lawmaking. The same digital networks that power creativity now blur the boundaries of national authority. Sovereignty in theory begins to meet dependency in practice.

Authorship in the machine age

At its heart, Getty v Stability AI poses a deceptively simple question: can a machine “learn” a work in the legal sense? And if it can, who owns what is learned? The judgment gives no definitive answer – and perhaps wisely so. Law, for now, is holding its ground while Parliament decides whether learning itself can be a lawful act.

Copyright began as a question of public access in the age of print. Three centuries later, it has become a question of interpretability in the age of algorithms. The challenge is no longer to stop copying, but to understand what “copying” means when knowledge becomes data. The courts have done their part. The next chapter must be written not in litigation, but in legislation.

Dr Corsino San Miguel is a member of the AI Research Group and the Public Sector AI Task Force at the Scottish Government Legal Directorate. The views expressed here are personal.
