I’m conflicted on a lot of this. At the end of the day it seems like these LLMs are simulating human behavior to an extent - exposure to content, then generating similar content from that exposure. Could Sarah Silverman be sued by comedians who influenced her comedy style and routines? Generally no. I do understand the risk of letting these ‘AI’ run rampant to displace a huge portion of the creative space, which is bad, but where should the line be drawn? Is it only the fact that they were trained on material they don’t own that people are challenging? What recourse will anyone have when an LLM is trained on wholly owned IP?
She’s suing for copyright infringement, basically, not the LLM emulating her style.
The LLMs read books by her and many, many others that they didn’t buy, because unauthorized copies had been uploaded to the web (as happens to every popular book).
Honestly, I don’t know if she has a case. Going after the people who illegally uploaded her book would be the proper route, but that’s always nearly impossible.
Long and short, LLMs benefited from illegal copies.
If you upload an illegal copy of a book and I download it, not realizing or caring that it’s pirated, and then I re-upload it elsewhere, you and I have both committed copyright infringement. This feels like the same thing.
I suspect the case will depend largely on whether the ways that the models were trained using her works qualify as fair use.
Your example is faulty. If you upload an illegal copy of a book and I read it and then tell people all about it, I am not committing copyright infringement.
How did you read it?
Did you access it where it was illegally posted online?
And in so doing, copy it locally in order to read it?
Guess what? According to copyright laws in the US, you just committed copyright infringement.
There are two separate claims.
One, that training is infringement, will hopefully be found to be without merit, or it’s a slippery slope to the death of fair use.
The other, that OpenAI committed copyright infringement by downloading pirated books, isn’t special to AI in any way. It doesn’t matter how they used the files. If they can be found to have downloaded them - even if they never opened a single file - they are liable for civil damages of up to $150,000 per work if they knew in advance that they were pirating it, and not less than $200 per work even if they didn’t know.
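To put rough, hypothetical numbers on that: at the willful-infringement ceiling, even 1,000 pirated books could mean up to $150,000,000 in statutory damages (1,000 × $150,000), and even at the innocent-infringement floor, 100,000 books would be at least $20,000,000 (100,000 × $200).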
This is the result of years of lobbying by various digital rights holders over the past few decades. It’s a very broadly scoped body of law, and OpenAI should rightfully be concerned if they didn’t actually purchase the copyrighted material they used for training.
You can learn from and share the knowledge in a book I illegally upload, but if you are caught having made a copy of the pirated textbook I uploaded, you are liable for damages completely separate from anything you did with the knowledge from the book.
If you use that illegal copy to create a work, then your copy infringes copyright (unless it falls under fair use). LLMs don’t count as people in any legal sense, and training them doesn’t have a legal status comparable to a real person reading books.
I see a lot of people claim the training data included copyrighted works, particularly books, because the model can provide a summary of them. But it can provide summaries of visual media too, and no one is claiming it’s sitting there watching films.
If the argument is that it has quite detailed knowledge of the book, that’s not convincing either. All it needs is a summary, and it can fill in the blanks and get close enough that we can’t tell the difference. Nothing is original.