According to forum discussions seen by the IWF, offenders start with a basic source image generating model that is trained on billions and billions of tagged images, enabling them to carry out the basics of image generation. This is then fine-tuned with CSAM images to produce a smaller model using low-rank adaptation, which lowers the amount of compute needed to produce the images.
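On the "lowers the amount of compute" point: low-rank adaptation (LoRA) works by freezing the base model and learning, for each adapted weight matrix, two small low-rank factors instead of a full update. A minimal sketch of the parameter arithmetic, using an illustrative 768×768 projection size and rank 8 (these numbers are my assumption, not from the article):

```python
def lora_param_counts(d_in: int, d_out: int, rank: int) -> tuple[int, int]:
    """Return (full fine-tune params, LoRA adapter params) for one layer.

    Full fine-tuning updates the whole d_out x d_in matrix; LoRA instead
    trains B (d_out x rank) and A (rank x d_in) with rank << min(d_in, d_out),
    and the effective update is the product B @ A.
    """
    full = d_out * d_in            # every weight is trainable
    lora = rank * (d_in + d_out)   # only the two low-rank factors are trainable
    return full, lora

# Illustrative layer size; many transformer projections are in this range.
full, lora = lora_param_counts(768, 768, rank=8)
print(full, lora)  # 589824 12288 -- roughly 48x fewer trainable weights
```

That gap in trainable parameters is why a LoRA can be fine-tuned on consumer hardware, which is the point the article is making about how little compute the process needs.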
How is it child sexual abuse content if there’s no child being abused? The child doesn’t even exist.
Exactly. Assuming this article means the American government when it says “government”, the First Amendment firmly protects entirely fictional accounts of child abuse, sexual or not. If it didn’t, Harry Potter would be banned or censored.
How do you train an AI to generate CSAM? You first need to feed it CSAM.
Did you not read anything else in this thread and just randomly replied to me?
It is the product of abuse though. Abuse materials are used to train the ai.
No they aren’t. An AI trained on normal, everyday images of children and sexual images of adults could easily synthesize these images.
Just like it can synthesize an image of a possum wearing a top hat without being trained on images of possums wearing top hats.
They’re talking about a Stable Diffusion LoRA trained on actual CSAM. What you described is possible too, but it’s not what the article is pointing out.
I can get “great” results trying to generate naked children with standard Stable Diffusion models. They are not trained on abuse material, but they infer naked children from hentai. Actually, it’s more of a problem the other way around: most of the time I have to fight the generator to keep it from producing sexy women, and when generating sexy women you sometimes have to fight to keep them from looking too young.
That’s because the example they gave either a) combines two concepts the AI already understands, or b) adds a new concept to another already-understood concept. It doesn’t need to be trained specifically on images of possums wearing top hats, but it would need to be trained on images of lots of different subjects wearing top hats. For SD, the top-hat and possum concepts may be covered by the base model’s datasets, but CSAM isn’t. Simply training a naked-adult concept alongside a clothed-child concept wouldn’t produce CSAM, because there is nothing in either of those datasets that looks like CSAM, so the model doesn’t know what that looks like.