
UK copyright law effectively prevents the commercial training of artificial intelligence models from taking place in the country. However, Dr Andres Guadamuz, a senior lecturer in intellectual property law at the University of Sussex, says creatives should be lobbying for the rules to be loosened if they want to receive fair compensation.
“UK rightsholders’ works are already being used for training abroad, and at present there is very little they can do other than pursue infringement claims in those jurisdictions,” he told TechRepublic in an email. “Because of the current copyright rules, no commercial training is taking place in the UK, which actually makes it harder for authors to seek compensation.”
AI giants including OpenAI and Google have been lobbying the government for a full text and data mining exception, whereby they could use any material by a UK creative or first published in the country for AI training without permission. They are also opposed to allowing creatives to opt out of having their works scraped, arguing that it complicates the identification of legally usable content.
Artists, on the other hand, think that giving AI developers the right to use their content by default could erode their ability to control and profit from their creations, as companies would not need to negotiate with them for initial access.
Realistically, AI does not operate within jurisdictional borders, and many models can be accessed globally; therefore, even if UK law did require developers to seek rightsholder permission before training their models, that protection could be undermined by more permissive laws elsewhere. This is why Dr Guadamuz says creatives need to join them, rather than beat them.
“Most UK authors will not benefit from litigation agreements such as the Anthropic settlement, since those require a US copyright registration,” he told TechRepublic. He is referring to the $1.5 billion the AI company is paying out to US authors whose works were used without permission to train its Claude chatbot.
Another key case is between global stock image company Getty Images and the UK-based Stability AI, which the former alleges used its IP to train its models. While the case is yet to be decided, Stability trained its models in the US, meaning no infringing act took place within UK jurisdiction — a fact that weakens Getty's chances of success.
Dr Guadamuz told TechRepublic: “Ironically, the best way forward for UK copyright holders is to support a change in domestic law that would encourage training to take place here. This would open the door to compensation agreements while also giving them the ability to opt out of training.”
Indeed, in the Anthropic case, the judge ruled that training on copyrighted material was itself lawful, but that the way the firm obtained it (by downloading pirated copies) was not. A different judge made a similar ruling in a lawsuit against Meta, indicating that this is where the default legal position is heading.
But, by supporting a legal framework that allows AI training on their works, UK copyright holders would gain the ability to formally opt out and strengthen their position to claim compensation if their wishes are ignored. “At the moment, their works are being used without any practical recourse, copyright is strictly national after all,” said Dr Guadamuz.
The EU has an opt-out regime in place through its Digital Single Market Directive, and Dr Guadamuz says there have been no repercussions for the creative industries there. Japan allows copyrighted works to be used for AI training, provided the use does not “unreasonably prejudice the interests of the copyright owner,” making it an attractive location for AI research while, according to the legal expert, still maintaining a healthy creative sector.
On the other hand, if copyright law is tightened, there is also the risk that AI models will simply be trained outside the UK, likely still on content from UK artists. The potential damage to the British tech industry is not lost on UK politicians: domestic firms would be unable to compete if their training options are limited compared with those of the US and China.
In June, Parliament passed a bill allowing AI models to be trained on copyrighted material without rightsholders’ knowledge. Members of the House of Commons argued that requiring transparency would discourage companies from developing and releasing AI products in the UK, claiming disclosure requirements would create undue burdens and expose proprietary data sources.
The same fears of an economic blow stemming from AI regulations are reflected over the pond, with US President Donald Trump calling it “impractical” to seek permission from every artist for the scraping of their work.
Companies like Microsoft and Cloudflare are looking to put the power back in creators’ hands by offering solutions that allow them to sell their content on a per-use basis.