Right now, robots.txt on lemmy.ca is configured this way:
```
User-Agent: *
Disallow: /login
Disallow: /login_reset
Disallow: /settings
Disallow: /create_community
Disallow: /create_post
Disallow: /create_private_message
Disallow: /inbox
Disallow: /setup
Disallow: /admin
Disallow: /password_change
Disallow: /search/
Disallow: /modlog
```
Would it be a good idea privacy-wise to deny GPTBot from scraping content from the server?
```
User-agent: GPTBot
Disallow: /
```
Thanks!
Yes, please.
We can’t stop LLM developers from scraping our conversations if they’re determined to do so, but we can at least make our wishes clear. If they respect our wishes, then great. If they don’t, then they’ll be unable to plead ignorance, and our signpost in the road (along with those from other instances) might influence legislation as it’s drafted in the coming years.
I’m on board with this, but I feel obliged to point out that it’s basically symbolic and won’t stop anything. Since all the data is federated out, they have plenty of other places to harvest it from, or more likely they’d just run their own ActivityPub harvester.
I’ve thrown a block into nginx so I don’t need to muck with robots.txt inside the lemmy-ui container.
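Roughly speaking, it’s something like this (a minimal sketch of that kind of block; the exact config on lemmy.ca may differ):

```nginx
# Inside the server {} block for lemmy.ca:
# return 403 to any request whose User-Agent contains "GPTBot"
if ($http_user_agent ~* "GPTBot") {
    return 403;
}
```

Which you can verify with curl: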
```
# curl -H 'User-agent: GPTBot' https://lemmy.ca/ -i
HTTP/2 403
```
Yes, please prevent them from using our conversations.