According to CNET, Justice Joanna Smith ruled on Tuesday that Stability AI did not violate copyright law when training its Stable Diffusion image models using Getty Images’ content. The UK court determined that Stability AI doesn’t “store or reproduce any Copyright Works” in its training process, giving the AI company a significant victory on the core copyright question. However, Getty Images succeeded in part of its trademark claims, with the court finding that Stability AI violated trademark protections when images generated by its users included watermarks resembling Getty’s and iStock’s logos. Both companies immediately claimed victory in their statements, with Getty calling it a win for intellectual property owners and Stability AI emphasizing that the ruling “resolves the copyright concerns that were the core issue.” Justice Smith described her findings as both “historic” and “extremely limited” in scope, reflecting the broader uncertainty courts face with AI copyright questions.
Why everyone thinks they won
Here’s the thing about this ruling – it’s basically a Rorschach test for how you view AI training. Getty gets to point to the trademark infringement finding and say “See? We told you they crossed the line.” Stability AI gets to highlight that the court threw out the main copyright claims and say “The core issue is resolved in our favor.” Both companies are technically correct, which is exactly what makes this so messy for future cases.
Look, Getty dropped its primary copyright claims earlier this year, leaving only secondary claims for the court to consider. That’s a pretty important detail that Stability AI’s general counsel Christian Dowell made sure to emphasize. So while Getty can claim a partial victory on trademarks, they essentially conceded the bigger copyright battle before it even reached this point. That tells you something about how confident they were in their main argument.
What this means for AI training
The court’s reasoning that Stability AI doesn’t “store or reproduce” copyrighted works gets to the heart of how these models actually work. Diffusion models don’t keep copies of their training images; training adjusts a fixed set of numerical weights to capture patterns and relationships across the whole dataset. But is that distinction meaningful when the resulting model can recreate styles and compositions that are clearly derived from specific sources?
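To make the “store or reproduce” distinction concrete, here’s a minimal, illustrative sketch in plain NumPy. This is emphatically not how Stable Diffusion is trained; it only demonstrates the structural point the court leaned on: the artifact you keep after training is a fixed-size set of learned weights, and that size doesn’t grow with the number of training images, so the weights can’t be a literal archive of the data.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(images: np.ndarray) -> np.ndarray:
    """Learn a fixed-size weight matrix (pixel covariance) from the data.

    The output shape depends only on the number of pixels per image,
    never on how many images were used: a crude analogue of learning
    patterns rather than storing copies.
    """
    centered = images - images.mean(axis=0)     # remove the average image
    return centered.T @ centered / len(images)  # (pixels x pixels) weights

# Toy stand-ins for training sets: each row is one flattened 64-pixel "image".
small_set = rng.normal(size=(100, 64))     # 100 images
large_set = rng.normal(size=(10_000, 64))  # 100x more training data

w_small = train(small_set)
w_large = train(large_set)

# The learned artifact is the same size either way: a 64x64 weight matrix,
# far too small to contain literal copies of ten thousand images.
print(w_small.shape, w_large.shape)  # -> (64, 64) (64, 64)
```

The same arithmetic sits at the center of the litigation: a production diffusion model has billions of weights but was trained on billions of images, leaving only a handful of bytes of parameters per image. That’s why defendants keep arguing that memorization is the exception, not the rule. It also hints at why the trademark finding cuts the other way: a watermark that recurs across huge swaths of the training data is exactly the kind of pattern a model can learn to reproduce.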
We’re seeing similar patterns in US courts too. Anthropic and Meta have mostly won their cases against authors claiming copyright infringement over model training. There’s a growing consensus that the current copyright framework wasn’t built for this technology: the four-factor fair use test US courts apply (the purpose and character of the use, the nature of the work, the amount taken, and the effect on the market) just doesn’t map cleanly onto how AI models learn from data.
The bigger picture
Each of these rulings adds to the precedent that will shape how AI training is regulated for years to come. Justice Smith was careful to say her ruling is specific to this case’s evidence and arguments, which means another case with slightly different facts could easily go the other way. We’re basically watching the legal system build the airplane while it’s flying.
For creators worried about their work being used without permission, this ruling offers mixed signals. On one hand, the copyright claims failed. On the other, the trademark success suggests there are boundaries around how AI outputs can resemble protected material. The full court ruling and Getty’s statement show just how carefully both sides are positioning themselves for the next legal battles.
So where does this leave us? Basically in the same uncertain place we started, but with a few more data points. The courts are clearly struggling to apply decades-old copyright law to technology that works in fundamentally different ways than anything that came before. And honestly, can you blame them? We’re all figuring this out as we go.
