Is AI breaking copyright?

Two big AI developments happened last week. Here's our take.

Hi Non-Techies,

Is New York really a concrete jungle where dreams are made of, or was Alicia Keys lying to us all? I’ll find out soon, because I’m heading there this week for a holiday.

Consider this your advance warning that next week’s newsletter is likely to feature at least one picture of a cronut, pizza, or bagel. Possibly all three.

But before I take a bite from the Big Apple, I’ve got a few big bits of AI news to share and discuss. It’s no exaggeration to say that the fuzzy future of AI got a tiny bit clearer last week.

In 2024, three authors (Andrea Bartz, Charles Graeber and Kirk Wallace Johnson) accused Anthropic of stealing their work to train its AI tool, Claude.

Last week, a U.S. judge ruled that using books to train AI tools counts as “fair use” and doesn’t violate U.S. copyright law.

This is major enough that I’d salute it if it walked into the room. Sure, it isn’t the sort of bombshell that’s going to immediately impact your week, but it’s big for the future of AI and how it’s trained.

Context: The AI industry in the U.S. is a bit of a magnet for lawsuits. In the last month alone, Disney and Universal have filed a lawsuit against image-generation tool Midjourney, and the BBC is considering legal action against AI search engine Perplexity.

Regardless of your moral stance on this, the fact that an actual judge has weighed in on the side of AI here might be a sign of things to come for other, similar lawsuits. Here’s what the judge said:

“Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works, not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.”

He hasn’t dismissed the case entirely, though: the separate question of how Anthropic got hold of the books in the first place (the authors allege it used pirated copies) is still headed to trial. AI for Non-Techies will have its nose pressed firmly against the glass for that one.

“The biggest risk is doing nothing.”

Back to the UK’s seagull-patrolled shores. The UK government has published a report on how the use of AI in schools can impact education, based on the experiences of 21 early adopters.

Whilst the report itself is hardly explosive (the overarching conclusion is essentially, “we need to do more research, but it’s worth exploring”), it’s an encouraging sign that the UK government is open to investigating the potential impact of AI in education. As they say in the intro:

“The UK government is ambitious for AI and views it as a fundamental part of its mission to break down barriers to opportunity for children and young people.”

But the report also says that stuff like hallucinations and potential safeguarding issues give rise to “an urgent need to assess whether intended benefits outweigh any potential risks”.

AI champions.

There was one particular part of the UK government’s report that really caught my eye:

“Most schools and colleges had an AI champion who was instrumental in getting senior leaders to embrace AI and bringing staff on board… AI champions typically created a ‘buzz’ around AI and played a vital role in demystifying it so that staff began to understand what it was and how they could use it.”

If I had to summarise my aim with the AI Academy, it would be this: to create AI champions everywhere. If you want to be the person who can make AI more accessible for everyone around you, click here to learn more.

See you next week for pizza pics (and maybe some stuff about AI, if time allows).

Heather

Did you enjoy this email?

I'm using the tried-and-tested pizza scale today.
