- cross-posted to:
- technology@lemmy.world
Today, Thorn, a prominent child safety organization, in partnership with Hive, a cloud-based AI solutions provider, announced the release of an AI model designed to flag unknown CSAM at upload. It's billed as the first AI technology aimed at exposing unreported CSAM at scale.
Uh, well, this one tells you whether an image looks like it or not. It doesn't generate images.
If it can tell whether an image looks like it, it's one step from being able to generate something like it.
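That comment gestures at a real technique: with a differentiable classifier that scores "how much does this look like the target class," gradient ascent on that score can synthesize an input the classifier rates highly (often called activation maximization or model inversion). Below is a toy sketch in PyTorch. Everything in it is hypothetical: the classifier is untrained and stands in for any scoring model, and it does not represent how the announced system works.

```python
import torch
import torch.nn as nn

# Stand-in classifier: outputs a single "looks like the target class" score.
# (Untrained; purely illustrative.)
classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)
classifier.eval()
for p in classifier.parameters():
    p.requires_grad_(False)  # freeze weights; only the image is optimized

# Start from noise and push the score up by gradient ascent on the pixels.
image = torch.randn(1, 1, 28, 28, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = classifier(image).squeeze()
    (-score).backward()           # negate so the optimizer maximizes the score
    optimizer.step()
    image.data.clamp_(-1.0, 1.0)  # keep pixels in a valid range

print(f"final classifier score: {classifier(image).item():.3f}")
```

In practice, naively maximizing a classifier's score like this tends to produce adversarial noise rather than realistic images; producing realistic outputs additionally requires a generative prior, which is roughly what classifier guidance in diffusion models adds. So the comment is directionally right that a detector leaks information about what it detects, even though the detector by itself is not a generator.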