🤦‍♂️
And whether training models is fair use.
I really feel like there needs to be a mandatory business ethics course in college where they teach, basically, “if it isn’t ethical, don’t do it.”
If it lets people get into mindfulness, however adulterated you might describe it, isn’t that still a good thing?
That’s kind of the risk with any technology. And I admit, it is the most likely way we lose control: someone will ask, “why does Apple let you turn off the child porn filter?” and the answers may not be enough for lawmakers or an angry mob.
The same could be said of a great many tools that filter bad content, from spam filtering to DDoS filtering. Should a technology not be available to consumers based on a hypothetical? That’s just as bad.
If a technology exists to filter content I don’t want to see, who are you to tell me Apple shouldn’t sell me a device with that technology I want?
Yes, there is potential for a slippery slope. And any filtering technology could be used for nefarious purposes. But this strikes me as pretty far from the slope and the purpose is clearly a good one. Remember you can always just turn it off.
I agree. For normies sick of online harassment, these filters are a huge win. Also for parents.
I don’t believe there’s any actual data collection?
I think if you’re using Keepass/Strongbox, and using e2e iCloud encryption, that’s good enough for most users.
Just have a backup somewhere.