I don’t know if you need this info, but I was pretty disturbed to see unexpected child pornography on a casual community. Thankfully it didn’t take place on SLRPNK.net directly, but if anyone has any advice besides leaving the community in question, let me know. And I wanted to sound an alarm to make sure we have measures in place to guard against this.

  • Dr. Wesker@lemmy.sdf.org · 9 months ago

    Unfortunately I saw it as well while scrolling, and reported it. What’s the motivation behind posting fucked up shit like that?

    • Andy@slrpnk.net (OP) · 9 months ago

      I don’t know the specifics, but trolling is trolling. It’s experimenting with ways of breaking things. Not only do they probably find it funny, but if this isn’t handled it can kill the platform. If they saw that Lemmy.World was defederated and shut down, that would make their day.

      The point is that we need basic security measures to keep Lemmy functioning. I don’t think this is just an issue of moderator response times. We need posts like that to get deleted after 10 people downvote them, and we need limits on how easily new accounts can get into everyone’s front-page feeds.
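
      A minimal sketch of the kind of threshold rule meant here (hypothetical names and numbers, not an existing Lemmy feature; shown as a hide-for-review rather than a hard delete, which later replies suggest is safer):

      ```python
      # Sketch of a downvote-threshold auto-hide rule. Everything here is
      # illustrative; Lemmy has no such built-in mechanism today.
      from dataclasses import dataclass

      DOWNVOTE_HIDE_THRESHOLD = 10  # the value suggested above

      @dataclass
      class Post:
          id: int
          downvotes: int = 0
          hidden_pending_review: bool = False

      def register_downvote(post: Post) -> None:
          """Record a downvote; hide (not delete) the post once the threshold is hit."""
          post.downvotes += 1
          if post.downvotes >= DOWNVOTE_HIDE_THRESHOLD:
              post.hidden_pending_review = True  # a moderator can later restore it

      if __name__ == "__main__":
          p = Post(id=1)
          for _ in range(DOWNVOTE_HIDE_THRESHOLD):
              register_downvote(p)
          print(p.hidden_pending_review)  # True
      ```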

      • MrMakabar@slrpnk.net · 9 months ago

        It should be based on reports, and limited to users with some form of track record on the platform: account age, posts made some time earlier, a certain number of likes received, and similar measures to make sure the reporter is not problematic.

        Downvotes are a bad measure. They are often just cast by somebody disagreeing with a post, which is not exactly a problem. Also, 10 is really low once something takes off. On c/meme, half the posts have more than 10 downvotes, but nothing there is really all that bad.
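
        A rough sketch of what such a reporter track-record check could look like (the specific thresholds and field names are made up for illustration, not Lemmy’s schema):

        ```python
        # Illustrative reporter "track record" gate for auto-hide reports.
        from dataclasses import dataclass
        from datetime import datetime, timedelta, timezone

        @dataclass
        class Account:
            created_at: datetime
            post_count: int
            upvotes_received: int

        def report_counts_toward_autohide(acct: Account, now: datetime) -> bool:
            """Only count reports from accounts with some age and posting history,
            so freshly created troll accounts cannot trigger auto-hiding on their own."""
            old_enough = now - acct.created_at >= timedelta(days=30)
            has_history = acct.post_count >= 5 and acct.upvotes_received >= 20
            return old_enough and has_history

        if __name__ == "__main__":
            now = datetime.now(timezone.utc)
            newbie = Account(now - timedelta(days=1), 0, 0)
            regular = Account(now - timedelta(days=200), 40, 300)
            print(report_counts_toward_autohide(newbie, now))   # False
            print(report_counts_toward_autohide(regular, now))  # True
        ```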

        • mars296@kbin.social · 9 months ago

          The best suggestion I have seen is to have a specific report category for CSAM. If a post is reported for CSAM x number of times, the post is hidden for moderator review. If it is a false report, the mod bans the reporting accounts.

          Another issue is that post links can be edited. Trolls will definitely use this feature for abuse.
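
          A minimal sketch of that report-category flow (the threshold and names are illustrative; this is not an existing Lemmy feature):

          ```python
          # Dedicated CSAM report category: enough distinct reports hide the post
          # until a moderator reviews it; false reports get the reporters banned.
          CSAM_REPORT_THRESHOLD = 3  # illustrative value for "x"

          class ReportedPost:
              def __init__(self, post_id: int):
                  self.post_id = post_id
                  self.csam_reporters: set[int] = set()
                  self.hidden_for_review = False

              def report_csam(self, reporter_id: int) -> None:
                  """Record a report; hide the post once enough distinct users report it."""
                  self.csam_reporters.add(reporter_id)
                  if len(self.csam_reporters) >= CSAM_REPORT_THRESHOLD:
                      self.hidden_for_review = True

              def moderator_resolve(self, is_false_report: bool, ban_account) -> None:
                  """If the report was false, restore the post and ban the reporting
                  accounts; otherwise the post stays hidden/removed."""
                  if is_false_report:
                      self.hidden_for_review = False
                      for reporter_id in self.csam_reporters:
                          ban_account(reporter_id)
                  self.csam_reporters.clear()
          ```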

          • silence7@slrpnk.net · 9 months ago

            Ranking algorithms need to be adjusted so that if a post is removed like this, and then restored, it gets the same number of views it otherwise would have. Without that, a user-interaction driven automatic removal will get abused at scale.
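
            As a sketch of a generic time-decayed ranking (not Lemmy’s actual code): exclude the time a post spent wrongly hidden from its effective age, so restoring it does not leave it buried.

            ```python
            # Generic Reddit/Lemmy-style "hot" decay: score / (age + 2)^gravity.
            # The adjustment: subtract any interval the post was hidden pending
            # review, so a wrongly hidden post is not penalized for it.
            from datetime import datetime, timedelta, timezone

            def hot_rank(score: int, published: datetime, hidden_for: timedelta,
                         now: datetime, gravity: float = 1.8) -> float:
                age_hours = max((now - published - hidden_for).total_seconds() / 3600, 0.0)
                return score / ((age_hours + 2) ** gravity)

            if __name__ == "__main__":
                now = datetime.now(timezone.utc)
                published = now - timedelta(hours=6)
                # Same post, with and without 4 hours spent wrongly hidden:
                print(hot_rank(50, published, timedelta(0), now))
                print(hot_rank(50, published, timedelta(hours=4), now))  # ranks higher
            ```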

        • activistPnk@slrpnk.net · edited · 9 months ago

          You can see below in this thread that someone got >10 downvotes for a perfectly reasonable and civil post, simply from people who disagreed with their comment. Automatic censorship would be overly interventionist. I would not want to participate in a forum that auto-censored like that. Downvotes did their job: they pushed the message low on the page for reduced visibility.

      • Rooki@lemmy.world · 9 months ago

        Your idea of restricting who can get onto the front page is a really great one! I will write it down for a project of ours.

      • AnonTwo@kbin.social · edited · 9 months ago

        If the same trolls got 10 accounts, they could find some other way to exploit the security gap, and also delete any posts warning about it.

        Maybe it would help if communities could turn off image uploading? I mean, asklemmy hardly ever has a reason for a picture to be there. Communities that need images would of course still need other security measures.
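
        A sketch of what a per-community image-upload switch might look like (as far as I know this setting does not exist in Lemmy today; the names are made up):

        ```python
        # Hypothetical per-community toggle for image uploads.
        from dataclasses import dataclass

        @dataclass
        class CommunitySettings:
            name: str
            allow_image_uploads: bool = True

        def validate_submission(settings: CommunitySettings, has_image: bool) -> None:
            """Reject image posts in communities that have switched image uploads off."""
            if has_image and not settings.allow_image_uploads:
                raise ValueError(f"c/{settings.name} does not accept image uploads")

        # Example: a text-focused community could run with
        #   CommunitySettings(name="asklemmy", allow_image_uploads=False)
        ```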

      • MartianSands@sh.itjust.works · 9 months ago

        The problem with an automatic delete is that it’s just as exploitable. Anyone can set up 10 accounts on various hosts, or even on one host, and gain the power to instantly delete anything they like.

        • loobkoob@kbin.social · 9 months ago

          The alternative is either requiring 24-hour moderation, which isn’t really feasible unless moderators are paid employees, or just accepting that posts stay up until a moderator/admin comes online and can sort them out. Communities can obviously try to have a mod team comprised of people from a range of time zones to increase coverage, but aiming for 24-hour coverage would make most mod teams far larger than is particularly necessary for the size of most communities at the moment.

          Posts being removed and flagged to moderators for review if a certain report threshold is met is the best middle ground for a community-run, non-commercial forum. Sure, someone can set up 10 (or however many the threshold is set at) accounts and report a post on all of them to have it removed until a moderator is online, but is it really worth it to go through that effort just to get a post taken down for a couple of hours before it gets reinstated?

          It’s the best way to allow the community to self-moderate, I think, rather than requiring all the moderation power be in the hands of those with a moderator role.

      • MBM@lemmings.world · 9 months ago

        On top of what’s been said, it should probably not delete the post but just hide it, so that mods can still re-approve it in case of mistakes.

  • Po Tay Toes@lemmy.sambands.net · 9 months ago

    I was pretty disturbed to see unexpected child pornography on a casual community

    I recommend that instance admins exercise a minimum of discretion when federating, or that users move to an instance that blocks lemmy.world, to significantly decrease this risk.

    Thankfully it didn’t take place on SLRPNK.net directly

    If you saw it, it’s federated.

    I wanted to sound an alarm to make sure we have measures in place to guard against this.

    There are no reasonable measures or administration tools to combat this (that I’m aware of) beyond simply defederating. Even blocking repeat offending communities will still transfer the illegal images to one’s home instance, but nobody will know.

  • GrassrootBoundaries@slrpnk.net · 9 months ago

    As a mod of a few communities, I’d just turn off public posts and contact the admins to block any troublesome accounts. Luckily I haven’t seen anything yet.

    • Andy@slrpnk.net (OP) · 9 months ago

      Can you explain? I don’t know what it means to turn off public posts.

      • GrassrootBoundaries@slrpnk.net · 9 months ago

        As a community moderator you can apply a setting so that only mods can post in that community; it’s not possible for anyone else to post anything.
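
        In other words, roughly this check on every new post (illustrative names, not Lemmy’s actual code):

        ```python
        # "Only moderators can post" enforced at submission time (sketch).
        def can_create_post(author_is_moderator: bool,
                            posting_restricted_to_mods: bool) -> bool:
            """Allow the post if the community is open, or the author is a mod."""
            return author_is_moderator or not posting_restricted_to_mods
        ```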

  • activistPnk@slrpnk.net · edited · 5 months ago

    What’s interesting about this is that #LemmyWorld uses Cloudflare, and CF was involved in a CP scandal. You might be tempted to report the CP to Cloudflare, but it’s important to be aware of how CF handles that. CF protected a website that distributed child pornography, and when whistleblowers reported the illegal content, CF actually doxxed them: Cloudflare revealed their identities directly to the dubious website’s owner, who then published their names and email addresses to provoke retaliatory attacks on them. Instead of apologizing, the CEO (Matthew Prince) said the whistleblowers should have used fake names.

      • Rooki@lemmy.world · 9 months ago

        You are right on point! We all do this in our free time, and we are searching for admins who are available in a timezone we don’t have covered yet.

        If someone is interested in assisting us, just hit us up with an email with some details about yourself and when you can be active on lemmy.world.

    • Andy@slrpnk.net (OP) · 9 months ago

      That’s pretty shocking.

      What tools are available to us to manage this?

      • poVoq@slrpnk.net (moderator) · 9 months ago

        The best tool currently available is lemmy-safety, an AI image-scanning tool that can be configured to check images on upload or to regularly scan the storage and remove likely CSAM images.

        It’s a bit tricky to set up, as it requires a GPU in the server and works best with object storage, but I have a plan to complete the setup for SLRPNK sometime this year.
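
        For anyone curious, the “regularly scan the storage” mode amounts to something like the following very rough sketch (this is not lemmy-safety’s actual code or configuration; the bucket name and the classifier are placeholders):

        ```python
        # Walk an S3-compatible media bucket (e.g. pict-rs object storage), score
        # each object with an image classifier, and remove likely CSAM for review.
        # The classifier below is a stub; a real setup would call a GPU-backed model.
        import boto3

        BUCKET = "lemmy-media"   # hypothetical bucket name
        SCORE_THRESHOLD = 0.9    # illustrative confidence cutoff

        def classify_image(image_bytes: bytes) -> float:
            """Placeholder: return the probability that the image is CSAM."""
            return 0.0  # a real implementation would run an ML model here

        def scan_bucket() -> None:
            s3 = boto3.client("s3")
            for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
                for obj in page.get("Contents", []):
                    body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
                    if classify_image(body) >= SCORE_THRESHOLD:
                        s3.delete_object(Bucket=BUCKET, Key=obj["Key"])
                        print(f"removed likely CSAM object: {obj['Key']}")

        if __name__ == "__main__":
            scan_bucket()
        ```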

        • silence7@slrpnk.net · 9 months ago

          This is probably the best option; in a world where people use ML tools to generate CSAM, you can’t depend on visual hashes of known-problematic images anymore.