Nucleo’s investigation identified accounts with thousands of followers engaging in illegal behavior that Meta’s security systems were unable to detect; after being contacted, the company acknowledged the problem and removed the accounts
When I saw this, two questions came to mind: How is it that this isn’t immediately reported? Why would anyone upload illegal material to a platform that tracks as thoroughly as Meta’s does?
The answer is:
All of those accounts followed the same visual pattern: blonde characters with voluptuous bodies and ample breasts, blue eyes, and childlike faces.
The one question that came to mind upon reading this was: What?
after contact, the company acknowledged the problem and removed the accounts
Meta is outsourcing content moderation to journalists.
Meta profits from these accounts, just as it profits from scam and fraud posts, because they pay for ad space. It has literally no incentive to moderate beyond the bare minimum its automatic tools already do
If a child is not being harmed, I truly do not give a shit.
How do you think they train the models?
With a set of all images on the internet. Why do you people always think this is a “gotcha”?
Meta doesn’t care about AI generated content. There are thousands of fake accounts with varying quality of AI generated content and reporting them does exactly shit.
Child Sexual Abuse Material is abhorrent because children were literally abused to create it.
AI generated content, though disgusting, is not even remotely on the same level.
The moral panic around AI that leads to implying that these things are the same thing is absurd.
Go after the people filming themselves literally gang raping toddlers, not the people typing forbidden words into an image generator.
Don’t dilute the horror of the production of CSAM by equating it to fake pictures.
Although that’s true, such material can easily be used to groom children, which is where I think the real danger lies.
I really wish they had excluded children from the datasets.
You can’t really put a stop to it anymore, but I don’t think it should be normalized and accepted just because there isn’t a direct victim. We’re also talking about distribution here, not something being done in private at home.
such material can easily be used to groom children
This literally makes no sense.
Kids will do things if they see other children doing them in pictures and videos. It’s easier to normalize sexual behavior with CP than without.
This sounds like you’re searching really hard for a reason to justify banning it. Pretty tenuous “what if” there.
Like, a dildo could hypothetically be used to sexualize a child. Should we ban dildos?
It’s so vague it could apply to anything.
Banning the tech, banning generated CP on the internet, or banning it at home?
I’m a big advocate of AI and don’t personally want any kind of banning or censorship of the tools.
I don’t think it should be published on any kind of image sharing sites. I don’t hold people publishing it in high regard and I’m not against some kind of consequence. I generally view prison as unproductive though.
At home, I’m not sure. People, imo, can do what they want behind closed doors. I don’t want any kind of surveillance, but I don’t know how I’d react if it got brought up at a trial as a kind of evidence, when the allegations have something to do with that theme (child molestation).
I also don’t think we need much of a reason to ban it on the web.
It would probably make me distrust the prosecution; if they’re bringing this up, they must not have much to go on. Like every time a black man is shot by police, they bring up that he smoked weed.
I guess my main complaint is that it’s insane to view it as equivalent to real CP, and it’s harmful to waste any resources prosecuting it.
That’s fair. We can also expect proper moderation from social media sites. I’m okay with a light touch, but it shouldn’t be floating around, if you get what I mean.
Some day, kids using social media will be child abuse…
That’s not surprising but it’s messed up