Convener: Jaz-Michael King (@email@example.com)
Participants who chose to record their names here:
Jeremiah Lee (@Jeremiah@alpaca.gold)
Emelia Smith (@firstname.lastname@example.org)
Edward L. Platt (@email@example.com)
Laurens Hof (@firstname.lastname@example.org)
Andreas Savvides, Meta (@email@example.com)
Simon Blackstein, Meta (@firstname.lastname@example.org)
Paul Fuxjaeger, University of Vienna (@email@example.com)
Darius Dunlap (@firstname.lastname@example.org)
IFTAS Matrix Space: https://matrix.to/#/#space:matrix.iftas.org
Moderation community of practice
Moderation as a service
Support for fediverse admins
Please do not share contents of the survey findings, as they are still being reviewed and will be made public later.
CSAM jurisdiction is determined by the country where the data is stored, not the server operator’s or user’s location.
Different jurisdictions have different requirements for whether operators must preserve the content or delete it once identified.
Types of legal notices: intellectual property, impersonation, CSAM preservation, defamation (for posting reasons for defederation), unspecified takedown notice from Russian government agency
Risk vector: intentional posting of content to a server in order to have server taken down.
IFTAS is creating a website to compile resources for moderators.
Q from participant: Is the sample size large enough to guide direction? Jaz: It covers a lot of accounts. Was sent to a large number of server admins and shared by accounts with a wide reach. Can only help the people who want the help.
Q from db0 about crowdfunding. Jaz: waiting for 501(c)(3) status to be confirmed and may accept crowdfunding after that. Also talking to lawyers who have pro bono requirements.
Q from participant: Coordinated inauthentic behavior? Jaz: term from trust and safety.
Q from Edward: was there interest in ways to diversify moderation teams and more effectively moderate diverse communities? Jaz: Supporting marginalized communities is part of the mission of IFTAS. Several related comments from moderators. One possibility is peering, works best in small closed communities. AI/LLMs can misclassify.