Would search engines only be allowed to show search results for sources that had opted in? They “train” their search engine on public data too, after all.
They aren’t reselling the information; they’re linking you to the source, and the website then decides what to do with your traffic. Usually a site wants that traffic; that’s the point of making it public.
That’s like saying it’s bad to point someone to a bookstore so they can buy from it, whereas the LLM is stealing from that bookstore and selling the goods to you in a back alley.
AI isn’t either. It’s selling statistical data about the books.
It literally shares passages verbatim
So does any site that quotes the book. Just being trained on a work doesn’t give the model the ability to cite it word for word. For most of the books in this set you wouldn’t even be able to get a single accurate quote out of most models. The models gain the ability to cite passages from training on other sources citing these same passages.
It shares popular quotes from books; it can’t reproduce arbitrary content from a book. The content needs to be heavily duplicated in the training data to stick around (e.g. from book reviews), and even then half of it might still end up being made up on the spot.
Also, requests for copyrighted content will be blocked by ChatGPT and just receive the stock “I can’t do that” response anyway.
If you have some damning examples that show the opposite, show them.
Being blocked by ChatGPT just means that the interaction layer you see doesn’t show the output, not that the output wasn’t generated.
Everything public-facing that interfaces with an AI sits behind an extreme filtering layer for what gets output. There are tons of checks that happen to ensure they don’t output illegal content or any of a million other undesirable things.
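The pattern being described, assuming this is roughly how vendors wire it up, is generate-then-filter: the raw output exists before a separate check decides whether you ever see it. A toy sketch (the model stub, blocklist, and refusal message are all invented for illustration):

```python
def generate(prompt):
    # Stand-in for the model itself: it happily produces whatever was asked.
    return f"Here is the full text you asked for: {prompt} ..."

BLOCKED_TOPICS = ["copyrighted book"]  # illustrative filter rules

def respond(prompt):
    raw = generate(prompt)              # the output IS generated first...
    for topic in BLOCKED_TOPICS:
        if topic in prompt.lower():
            return "I can't do that."   # ...then the filter hides it from you
    return raw
```

The point is that the refusal lives in a wrapper around the model, not in the model’s knowledge, which is why rephrasing the prompt can slip past it.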
I’m too lazy and care too little, but you can basically get it to roleplay as a book expert or something and to “remind” you of certain passages. That gets around the filter pretty easily; that’s how jailbreaks work.
That’s maybe an issue. I mirror speech a lot, though. How large are the passages?
That claim is disingenuous at best, and misinformed otherwise.
First: There are mechanisms to opt out (robots.txt and the noindex meta tag).
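To make that concrete: the opt-out is a couple of lines in a plain-text file at the site root. A minimal sketch of a robots.txt that blocks OpenAI’s GPTBot crawler while still allowing Googlebot (those are the published crawler user agents; the rules here are just an example):

```
# robots.txt: block an AI training crawler, allow a search crawler
User-agent: GPTBot
Disallow: /

User-agent: Googlebot
Allow: /
```

The per-page equivalent is `<meta name="robots" content="noindex">` in the HTML head.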
Second: There is some foreknowledge on the part of the web author. Even in the early days of the web — before you could’ve predicted the concept of search engines — in order to distribute anything you had to understand the basics of hypermedia, among which is the idea that anything can link to anything else and clients can be users or machines alike.
Third: Even though you are correct that search engines are tokenizing text and doing statistical analysis to recombine the tokens into novel forms in order to rank against queries, those novel forms are never presented to the user. Only direct quotes. So a user never gets a false reference to the supposed content of a page (unless the page itself lies to crawler requests).
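The third point is easy to see in a toy model of how an index works: the token statistics are used only to rank, and what comes back to the user is the stored original text, never a recombination of it. A minimal sketch (the corpus and scoring are made up for illustration):

```python
from collections import defaultdict

# Toy corpus: the "pages" a crawler fetched, stored verbatim.
pages = {
    "page1": "the quick brown fox",
    "page2": "the lazy brown dog",
}

# Build an inverted index: token -> set of page ids.
# The statistical analysis lives here; the source text is never rewritten.
index = defaultdict(set)
for page_id, text in pages.items():
    for token in text.lower().split():
        index[token].add(page_id)

def search(query):
    """Rank pages by how many query tokens they contain,
    then return the ORIGINAL text as a direct quote."""
    scores = defaultdict(int)
    for token in query.lower().split():
        for page_id in index.get(token, ()):
            scores[page_id] += 1
    ranked = sorted(scores, key=scores.get, reverse=True)
    # The user only ever sees verbatim source text (plus, in reality, a link).
    return [(page_id, pages[page_id]) for page_id in ranked]
```

Calling `search("brown fox")` ranks `page1` first, and what it returns is the untouched page text, which is exactly the “only direct quotes” property.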
Fourth: All of the technical points above are pretty much meaningless, because we are social creatures and our norms don’t stem from a mechanical flow chart divorced from real-world context.
Creators are generally okay with their content being copied into search DBs, because they know it’s going to lead to users finding the true author of those words, which will advance their creative pursuits either through collaboration or monetary support.
Creators are complaining about content being copied into LLMs because their work will be presented out of context and often cited incorrectly; it will keep people away from the author of those words and undermine the lifeblood of their creative pursuits, whether that’s attracting new collaborators or making sales.
Whether it technically counts as IP infringement or not under current law? Who really cares? Current IP law is a fucking scam, designed to bully creators out of their own creations and assign full control to holding companies who see culture as nothing more than a financial instrument to be optimized. We desperately need to change IP law anyway – something that I think even many strident “AI” supporters agree with – so using it as a justification for the ethics of LLMs reveals just how weak the group’s position truly is.
LLM vendors see an opportunity for profit, if they can get away with it. They are offering consumers a utopian vision of infinite access to content while creating an IP chokepoint that they can enshittify once it blows past critical mass. It’s the same tactics the social media companies used 15 years ago, and it weighs heavy on my heart that so many Lemmy users are falling for it once again while the lesson is still so fresh.