Today, Thorn, a prominent child safety organization, in partnership with Hive, a leading cloud-based AI solutions provider, announced the release of an AI model designed to flag unknown CSAM at upload. It is the first AI technology aiming to expose unreported CSAM at scale.

  • MentalEdge@sopuli.xyz · 24 hours ago

    There are laws around it. Law enforcement doesn’t just delete any digital CSAM they seize.

    Known CSAM is archived and analyzed rather than destroyed, and is used to recognize additional instances of the same files in the wild, wherever file scanning is possible.

    Institutions and corporations can request licenses to access the database, or just the metadata that allows software to tell whether a given file might be a copy of known CSAM.
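
    For illustration, here's a minimal sketch of what that file-level matching can look like, assuming the licensed metadata is simply a set of file hashes. Everything here (the `KNOWN_HASHES` set, the placeholder hash, the function names) is hypothetical, and real deployments use perceptual hashes such as PhotoDNA so that resized or re-encoded copies still match:

    ```python
    # Minimal sketch: check an uploaded file against a set of known-file hashes.
    # Plain SHA-256 only catches byte-identical copies; production systems use
    # perceptual hashes (e.g. PhotoDNA) that survive resizing and re-encoding.
    import hashlib
    from pathlib import Path

    # Hypothetical hashes obtained under license from the database.
    KNOWN_HASHES: set[str] = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder
    }

    def sha256_of(path: Path) -> str:
        """Hash the file in chunks so large uploads don't need to fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def is_known(path: Path) -> bool:
        return sha256_of(path) in KNOWN_HASHES
    ```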

    This is the first time an attempt has been made to use the database to create software able to recognize CSAM that isn't already known.

    I’m personally quite sceptical of the merits. It may well be useful for scanning the public internet, but I’m guessing the plan is to push for it to be implemented in private communication somehow, no matter how badly that compromises the integrity of encryption.