After a spy camera designed to look like a towel hook was purchased on Amazon and illegally used for months to capture photos of a minor in her private bathroom, Amazon was sued.
The plaintiff—a former Brazilian foreign exchange student then living in West Virginia—argued that Amazon had inspected the camera three times and its safety team had failed to prevent allegedly severe, foreseeable harms still affecting her today.
Amazon hoped the court would dismiss the suit, arguing that the platform wasn’t responsible for the alleged criminal conduct that harmed the minor. But after nearly eight months of deliberation, a judge recently denied most of the tech giant’s motion to dismiss.
Amazon’s biggest problem in persuading the judge was seemingly the product descriptions that the platform approved. An amended complaint included a photo from Amazon’s product listing showing bathroom towels hanging on hooks that disguised the hidden camera. Text on that product image promoted the spycams, boasting that they “won’t attract attention” because each hook appears to be “a very ordinary hook.”
I don’t really see the drawback of requiring them to reasonably vet everything they sell. I don’t know exactly where to draw the line, and I’m not suggesting they need to product-test everything to make sure it’s okay, but when a product is clearly created and advertised this way, they should assume some responsibility. Many times they are actually the seller, or at least a broker; they are directly involved in the transaction. I could be missing something, though.
As for moderation, as far as I can tell, the whole idea of an “online message system” completely falls apart if platforms are responsible for everything said on them. It would require every post to be moderated, and that is (or was, at least) just infeasible. Well, maybe not with AI... but is that any better?