
CSAM (Child Sexual Abuse Material) on the Fediverse

/2024-03/session/3-c/

Convener: Jayne Samuel-Walker (@tcmuffin@toot.wales)

Participants who chose to record their names here:

Convener notes and resources

What is CSAM?

Extent of CSAM

Jurisdictions

Prevention

Detection

Resources

Discussion notes, key understandings, outstanding questions, observations, and, if appropriate to this discussion: action items, next steps:

I will be hugely grateful to anyone who takes notes, as I’m still on holiday in west Wales and working off my iPad…so thank you 🙏

This session is a follow-up to Jaz’s session on IFTAS, and I’ll be focussing on CSAM (Child Sexual Abuse Material) in the Fediverse.

Different laws for different jurisdictions

CSAM detection as described by Emelia Smith:

  • Fediverse instances allow IFTAS to scan their own media content via webhooks; the IFTAS software then creates a hash for each media file (see the sketch after this list).
  • Hash comparison is handled by Thorn (thorn.org). Media files never leave IFTAS; only hashes are transferred to Thorn.
  • Cryptographic hashing is cheap to compute, but it can be circumvented simply by rotating, warping or recolouring the image, or by changing its metadata.
  • Perceptual hashing algorithms are the only way to reliably detect such content, as media may be cropped, colourised or decolourised, etc. using other programmes.
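A minimal sketch of the webhook-driven flow described above, in Python. The endpoint path, payload field name, and matching-service URL are assumptions for illustration only, not the actual IFTAS or Thorn API; the point it shows is that the media file is fetched and hashed locally, and only the hash is sent onwards.

```python
import io

import imagehash   # perceptual hashing (pip install ImageHash)
import requests
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)

# Hypothetical hash-matching endpoint; the real comparison is done by Thorn,
# whose API is not shown here.
MATCHING_SERVICE = "https://matching.example.org/v1/match"

@app.route("/webhook/media", methods=["POST"])
def media_uploaded():
    """Called by the instance whenever new media is uploaded (assumed payload)."""
    event = request.get_json()
    media_url = event["media_url"]  # assumed field name

    # Fetch and hash the media locally: the file itself never leaves this server.
    image = Image.open(io.BytesIO(requests.get(media_url, timeout=10).content))
    phash = str(imagehash.phash(image))

    # Only the hash is transferred for comparison against known-CSAM hash sets.
    response = requests.post(MATCHING_SERVICE, json={"phash": phash}, timeout=10)
    return jsonify({"hash": phash, "match": response.json().get("match", False)})

if __name__ == "__main__":
    app.run()
```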

Downside: perceptual hashing is CPU-intensive and only works against known material.
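An illustrative comparison of the two hashing approaches, not tied to any particular vendor’s implementation: a cryptographic digest changes completely after a trivial edit, while a perceptual hash stays within a small Hamming distance of the original, which is why it can still match altered copies of known material. The file name and the distance threshold are examples only.

```python
import hashlib

import imagehash
from PIL import Image

original = Image.open("known_image.jpg")      # example file name
altered = original.rotate(1).convert("L")     # tiny rotation plus greyscale

# Cryptographic hashes: any change to the bytes produces a completely new digest.
sha_original = hashlib.sha256(original.tobytes()).hexdigest()
sha_altered = hashlib.sha256(altered.tobytes()).hexdigest()
print(sha_original == sha_altered)            # False

# Perceptual hashes: compare by Hamming distance rather than exact equality.
p_original = imagehash.phash(original)
p_altered = imagehash.phash(altered)
distance = p_original - p_altered             # ImageHash '-' gives the Hamming distance
print(distance, "-> match" if distance <= 8 else "-> no match")  # threshold is illustrative
```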

There are also LLM/ML-based models for identifying CSAM and related content; these often require significantly more computing power, and also need significant tuning to ensure they work as expected without too many false positives. db0 (Fediseer, Haidra) has software for pict-rs, the media server used by Lemmy, that detects CSAM this way.
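A purely illustrative sketch of why classifier-based detection needs tuning. The `score_image` function is a hypothetical placeholder for any model that returns a probability; it is not db0’s pict-rs scanner or any real API. The threshold trades missed material against false positives, and flagged items should go to human review rather than automatic action.

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    media_id: str
    score: float    # model's estimated probability that the media is CSAM
    flagged: bool   # True = hold for human moderator review

def score_image(media_id: str) -> float:
    """Placeholder for a real model inference call (hypothetical)."""
    raise NotImplementedError

def scan(media_id: str, threshold: float = 0.9) -> ScanResult:
    # A low threshold catches more material but buries moderators in false
    # positives; a high threshold misses more. The value has to be tuned per
    # deployment, which is the tuning effort mentioned above.
    score = score_image(media_id)
    return ScanResult(media_id, score, flagged=score >= threshold)
```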

StopNCII.org (Stop Non-Consensual Intimate Image Abuse): not part of the Thorn Safer datasets; uses Meta’s perceptual hashing algorithms.

Mastodon: currently no easy way to integrate detection, due to its inaccessible image upload process.

Lemmy: implementing detection with pict-rs

Pixelfed: in talks about implementation

PhotoDNA: can be implemented on your own servers, but requires a licence from Microsoft to use the PhotoDNA libraries, and also requires the database of hashes, which is usually tightly controlled.

Reporting UIs are usually not great and are often unable to classify reported media as CSAM. For CSAM reports, media should be blurred, greyscaled, and/or sepia-toned, and only made visible after explicit moderator interaction, so as to shield your moderators from harmful content.

A lot of Mastodon instances are based in Germany, where any CSAM you find must be deleted straight away: you can be arrested and charged by law enforcement if you hold it, even if that is for the purpose of reporting it to law enforcement.