In November 2025, the English High Court delivered judgment in Getty Images (US) Inc & Others v Stability AI Ltd [2025] EWHC 2863 (Ch). This is the first UK decision to directly address whether generative AI models constitute infringing copies under English copyright law when trained on copyrighted material.
What happened?
Getty Images, one of the world's largest stock photo agencies, sued Stability AI, the company behind Stable Diffusion (an open-source generative AI tool that creates images from text descriptions). Getty claimed that Stability used millions of Getty’s copyrighted photographs without permission to train the AI model. The images were sourced from a publicly available dataset called LAION-5B.
Getty argued that this amounted to copyright infringement and that the AI model itself was an infringing copy under the Copyright, Designs and Patents Act 1988 (“CDPA”). In short, Getty’s copyright claim failed, though it succeeded on limited trademark grounds. While the decision is not binding in Ireland, it offers guidance on how creators, developers, and rights holders should approach AI and intellectual property, particularly given the similarities between English and Irish copyright law.
Key Claims and Findings
Trademark Infringement
Getty brought a claim under sections 10(1), 10(2), and 10(3) of the Trade Marks Act 1994 (“TMA”), alleging that Stability had infringed the Getty Images and iStock marks (iStock being a stock photo service owned by Getty Images). The allegation was that some Stable Diffusion outputs contained watermark-like features resembling Getty's registered marks, suggesting source confusion or reputational harm.
Mrs Justice Smith conducted a meticulous analysis under each provision of the TMA:
- Section 10(1) (double identity): The Court found very limited infringement. This provision requires use of an identical sign in relation to identical goods or services, and only a small number of iStock watermarks were found to infringe.
- Section 10(2) (likelihood of confusion): Limited infringement was found, but no infringement was established for the later Stability models (SD XL and v1.6) because there was no evidence that any UK user had actually generated watermarks using those versions.
- Section 10(3) (dilution/tarnishment): The claim failed entirely because the Court found no sufficient basis to infer a change in the economic behaviour of consumers.
Secondary Copyright Infringement
Getty also claimed secondary copyright infringement under sections 22, 23, and 27(3) of the CDPA. Unlike primary infringement (the initial act of making an unauthorised copy), secondary infringement concerns dealing with articles that are already infringing copies, for example by importing or distributing them. Getty argued that Stability’s models were infringing copies because they were allegedly created using millions of Getty's copyrighted images without consent, and that making those models available for download in the UK via Hugging Face (an open-source platform and community hub for artificial intelligence and machine learning, where users find and share models, datasets, and applications) amounted to importation of infringing copies.
Stability countered that the training and development of Stable Diffusion occurred outside the UK, so UK copyright law did not apply. More importantly, they argued that the models do not actually store Getty's images. Instead, they contain “model weights” which can be described as mathematical patterns or statistical summaries learned from studying millions of images. The AI learns patterns and techniques from the training images without storing the actual images themselves.
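The “statistical summary” point can be illustrated with a deliberately simplified sketch. This is not how Stable Diffusion or any real diffusion model works, and every name and number below is invented for illustration: the point is only that a training process can reduce many images to a handful of learned parameters, from which no individual image can be read back.

```python
# Simplified illustration (NOT how Stable Diffusion works): a "model"
# whose training produces only summary statistics of its inputs.
# All names and numbers here are invented for illustration.
import random

def train(images):
    """Learn two parameters (mean and variance) from a set of images."""
    pixels = [p for img in images for p in img]  # flatten all pixel values
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return {"mean": mean, "var": var}  # the entire trained "model"

random.seed(0)
# 10,000 fake 8x8 "images" of random pixel values
dataset = [[random.randint(0, 255) for _ in range(64)] for _ in range(10_000)]
weights = train(dataset)

# The trained parameters are a fixed, tiny summary: their size does not
# grow with the dataset, and no individual training image is stored.
print(len(weights))  # 2
```

Real model weights number in the billions rather than two, but the structural point Stability advanced is the same: the parameters encode patterns learned across the training set, not stored copies of the individual works.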
The Court largely agreed with Stability, finding that the model weights are statistical representations that neither contain nor reproduce Getty's images. The ruling clarifies that AI models trained on copyrighted material are not, in themselves, infringing copies, at least under current English copyright law.
Legal Significance
The Court held that digital files, including intangible model weights stored electronically, can constitute relevant “articles” for the purposes of the secondary infringement provisions of English copyright law. It applied an “always speaking” approach to statutory construction, under which legislation is interpreted flexibly to keep pace with modern technology, even where it was drafted before digital files existed.
However, Stable Diffusion was found not to be an infringing copy because the model weights do not store or reproduce any of Getty's copyright works. They are statistical parameters derived from training data, not copies of the training images themselves. This finding was decisive in dismissing the secondary infringement claim.
Mrs Justice Smith herself described the findings as both “historic and extremely limited in scope”. The judgment underscores a judicial preference for technical, evidence-based reasoning over policy activism, leaving broader questions, such as whether AI training itself constitutes copying, unresolved. The Court decided the specific case before it on the evidence, rather than making sweeping pronouncements about AI and copyright generally.
Practical Impact
- For AI Developers: Because the training occurred on servers outside the UK, the Court proceeded on the basis that primary acts of reproduction occurred abroad and were not actionable under English copyright law. The secondary infringement claim failed because the models do not store or reproduce copyright works. This finding was based on the specific evidence before the Court and does not constitute a general endorsement of foreign training as a “safe harbour”. Developers should note that other jurisdictions may take different approaches, and the question of whether training itself constitutes primary infringement remains unresolved.
- However, developers face ongoing trademark and reputational risks when AI outputs reproduce branding or watermarks. To mitigate these risks, developers should conduct dataset due diligence to avoid inclusion of protected marks and maintain clear documentation of data sources and training locations.
- For Rights Holders: This decision suggests that establishing copyright infringement may be difficult when AI models are trained outside the jurisdiction where the claim is brought. Proving infringement will require substantial technical evidence tracing how copyrighted works were used within an AI model. Rights holders should therefore review licensing agreements to specify whether their works may be used for AI training and consider licensing terms that limit or condition such use.
- For Brand Owners: Although Getty's trademark claims largely failed, the limited findings of infringement demonstrate that brand misuse in AI outputs remains a legal risk. Brand owners should monitor generative AI outputs for unauthorised use of logos or watermarks and be prepared to take prompt enforcement action where such marks appear in generated content.
Conclusion
The Getty Images v Stability AI decision provides important clarification on specific technical questions about AI models and intellectual property under English law, though its broader impact remains uncertain. The Court's finding that AI models do not store or reproduce original works and therefore are not infringing copies narrows the scope for secondary copyright infringement claims, whilst its limited recognition of trademark infringement for specific outputs underscores the need for evidence of real-world harm or confusion. However, the Court did not decide whether training itself constitutes primary copyright infringement, as Getty abandoned this claim after acknowledging training occurred outside the UK. The judgment answers narrow technical questions whilst leaving most of the broader legal issues around AI training and outputs unresolved.
A copy of the judgment can be found here.