Nucleo’s investigation identified accounts with thousands of followers engaging in illegal behavior that Meta’s security systems failed to detect; after being contacted, the company acknowledged the problem and removed the accounts

  • General_Effort@lemmy.world · 15 hours ago

    When I saw this, two questions came to mind: How come this wasn’t immediately reported? And why would anyone upload illegal material to a platform that tracks its users as thoroughly as Meta’s do?

    The answer is:

    All of those accounts followed the same visual pattern: blonde characters with voluptuous bodies and ample breasts, blue eyes, and childlike faces.

    The one question that came to mind upon reading this is: What?

  • yesman@lemmy.world · 23 hours ago

    after contact, the company acknowledged the problem and removed the accounts

    Meta is outsourcing content moderation to journalists.

    • I Cast Fist@programming.dev · 15 hours ago

      Meta profits from these accounts, just as it profits from scam and fraud posts, because they pay for ad space. It has literally no incentive to moderate beyond the bare minimum its automated tools provide.

  • lepinkainen@lemmy.world · 20 hours ago

    Meta doesn’t care about AI generated content. There are thousands of fake accounts with varying quality of AI generated content and reporting them does exactly shit.

  • FauxLiving@lemmy.world · 23 hours ago

    Child Sexual Abuse Material is abhorrent because children were literally abused to create it.

    AI generated content, though disgusting, is not even remotely on the same level.

    The moral panic around AI that leads to implying that these things are the same thing is absurd.

    Go after the people filming themselves literally gang raping toddlers, not the people typing forbidden words into an image generator.

    Don’t dilute the horror of the production of CSAM by equating it to fake pictures.

    • Grimy@lemmy.world · edited · 20 hours ago

      Although that’s true, such material can easily be used to groom children, which is where I think the real danger lies.

      I really wish they had excluded children from the datasets.

      You can’t really put a stop to it anymore, but I don’t think it should be normalized and accepted just because there isn’t a direct victim. We are also talking about distribution here, not something being done in private at home.

      • Cryophilia@lemmy.world · 20 hours ago

        such material can easily be used to groom children

        This literally makes no sense.

        • Grimy@lemmy.world · 19 hours ago

          Kids will do things if they see other children doing them in pictures and videos. It’s easier to normalize sexual behavior with CP than without.

          • Cryophilia@lemmy.world · 16 hours ago

            This sounds like you’re searching really hard for a reason to justify banning it. Pretty tenuous “what if” there.

            Like, a dildo could hypothetically be used to sexualize a child. Should we ban dildos?

            It’s so vague it could apply to anything.

            • Grimy@lemmy.world · edited · 16 hours ago

              Banning the tech, banning generated CP on the internet, or banning it at home?

              I’m a big advocate of AI and don’t personally want any kind of banning or censorship of the tools.

              I don’t think it should be published on any kind of image sharing sites. I don’t hold people publishing it in high regard and I’m not against some kind of consequence. I generally view prison as unproductive though.

              At home, I’m not sure. People imo can do what they want behind closed doors. I don’t want any kind of surveillance, but I don’t know how I would react if it got brought up at a trial as a kind of evidence, if the allegations have something to do with that theme (child molestation).

              I also don’t think we need much of a reason to ban it on the web.

              • Cryophilia@lemmy.world · 16 hours ago

                It would probably make me distrust the prosecution; if they’re bringing this up, they must not have much to go on. Like how, every time a black man is shot by police, they bring up that he smoked weed.

                I guess my main complaint is that it’s insane to view it as equivalent to real CP, and it’s harmful to waste any resources prosecuting it.

                • Grimy@lemmy.world · 11 hours ago

                  That’s fair. We can also expect proper moderation from social media sites. I’m okay with a light touch, but it shouldn’t be floating around, if you get what I mean.