Session: Can we do better than shared blocklists?
/2023-09/session/1-c/
Convener: R.W. Gehl, Roel Roscam Abbing (@rra@post.lurk.org) and Emelia Smith (@thisismissem@hachyderm.io)
Participants who chose to record their names here:
- Jaz-Michael King (@jaz@mastodon.iftas.org)
- @db0@lemmy.dbzer0.com
- Rishav (@xrisk@treehouse.systems)
- Matthias Pfefferle (@pfefferle@notiz.blog)
- Laurens Hof, Fediverse Report (@laurenshof@firefish.social)
- Bob Wyman (@bobwyman@mastodon.social)
- Jordan Frank, Meta (@jwf@cybervillains.com)
- Ben Pate (@benpate@mastodon.social)
- Jesse Baer (@misc@mastodon.social)
- Nathalie Van Raemdonck (@nvraemdonck@hci.social)
- bumblefudge (@by_caballero@mastodon.social)
- mhoye (@mhoye@mastodon.social)
- James Marshall (@jamesmarshall@sfba.social)
- Simon Blackstein, Meta (@sblackst@ioc.exchange)
- Andreas Savvides, Meta (@andrs@mastodon.social)
Notes:
- Instance-level blocklists have been around since 2018 (at least in the form of CSVs).
- We've also had internal moderation notes since 2018, along with moderator audit logs.
- Why might we need something else?
- Blocklists are inherently very prescriptive, an "all or nothing" approach. We also don't currently have good mechanisms for distributing and synchronising these lists, which are very often available only in Mastodon CSV format, and perhaps on source code hosting platforms.
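As a concrete illustration of the distribution problem above, here is a minimal sketch of consuming a shared blocklist in the CSV shape Mastodon exports for domain blocks. The column names (`#domain`, `#severity`) are an assumption from memory and may differ between Mastodon versions; the sample domains are made up.

```python
import csv
import io

# Hypothetical sample in the (assumed) Mastodon domain-block CSV shape.
SAMPLE = """\
#domain,#severity,#public_comment
spam.example,suspend,Spam instance
media.example,silence,Unmoderated media
"""

def load_blocklist(text):
    """Parse a domain-block CSV into a {domain: severity} mapping."""
    reader = csv.DictReader(io.StringIO(text))
    return {row["#domain"]: row["#severity"] for row in reader}

blocks = load_blocklist(SAMPLE)
print(blocks["spam.example"])  # suspend
```

Even this tiny example shows the "all or nothing" limitation: a domain maps to a single severity, with no way to attach nuance, provenance, or a recommendation the receiver can interpret.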
- What do we get by federating?
- Emelia: I didn't mention this in the talk, but we currently have two main models of federation: limited federation and open federation. Most instances start out with open federation, as that's the default; limited federation requires an explicit list of "allowed" instances to be known and set ahead of time. There is potentially a third option, "federation requests", where an instance can try to federate with you, but its actions are put in a queue to be processed later, after a moderator has approved federation.
- Potentially different architectures:
- Moving away from allow/block (particularly at the domain level)
- Moving towards recommendations/tagging
- Current tools are focused on domains. We need to move away from domain blocks and towards moderation advisories and moderation recommendations (essentially an approach closer to OSINT or CVEs): away from the allow/deny list approach, and towards something more like firewall netfilters.
- This means you essentially have four actions you can take for anything:
- Allow: allow federation with this entity
- None: I have no opinion about federation with this entity
- Filter: allow federation with this entity, but apply these filters to it (e.g., preventing private posts, CW'ing media, etc.)
- Reject: prevent all federation with this entity
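A minimal sketch of how those four verdicts could be evaluated netfilter-style, assuming rules are checked in order and the first decisive (non-None) verdict wins. Every name here is hypothetical, not an existing API:

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"    # explicitly allow federation
    NONE = "none"      # no opinion; fall through to the next rule
    FILTER = "filter"  # federate, but apply filters (e.g. CW media)
    REJECT = "reject"  # prevent all federation

def evaluate(rules, entity):
    """Walk the rule chain; the first non-NONE match decides.

    Defaulting to ALLOW mirrors open federation; a limited-federation
    instance would instead default to REJECT.
    """
    for matches, action in rules:
        if matches(entity) and action is not Action.NONE:
            return action
    return Action.ALLOW

# Hypothetical rule chain, matched against a full handle.
rules = [
    (lambda e: e.endswith("spam.example"), Action.REJECT),
    (lambda e: e.endswith("media.example"), Action.FILTER),
]

print(evaluate(rules, "user@spam.example").value)      # reject
print(evaluate(rules, "user@friendly.example").value)  # allow
```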
- Q: does this design apply to media and attachments as well?
- Any URI-addressable content, including images.
- Perceptual hashes would work where URIs/URLs are lacking, for deduplicating bad media.
- The "firewall" approach seems to hold just fine for media IMHO, unless I'm missing something.
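To sketch the perceptual-hash idea mentioned above: near-duplicate images produce hashes that differ in only a few bits, so matching against a known-bad set reduces to a Hamming-distance check. The 64-bit hash values and threshold below are illustrative assumptions; a real system would compute the hashes with a library such as pHash or ImageHash.

```python
def hamming(a, b):
    """Number of differing bits between two integer hashes."""
    return bin(a ^ b).count("1")

def matches_known_bad(candidate, known_bad, threshold=8):
    # Perceptual hashes of near-duplicate images differ in only a few
    # bits, so a small Hamming distance indicates a likely match.
    return any(hamming(candidate, h) <= threshold for h in known_bad)

# Hypothetical 64-bit hashes of previously flagged media.
known_bad = {0xF0F0F0F0F0F0F0F0}

print(matches_known_bad(0xF0F0F0F0F0F0F0F1, known_bad))  # True (1 bit differs)
print(matches_known_bad(0x0123456789ABCDEF, known_bad))  # False
```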
- Q: how is this different from Pleroma's MRF (message rewriting) functionality?
- It's in the prior art.
- MRF allows rewriting the entire message, e.g., changing what a remote user has actually said: this can be dangerous and produce misinformation.
- MRF does not distinguish between an advisory, a recommendation, and the application of that recommendation.
- Reasons are also included and labelled with structured data.
- Content-negotiated documents as tags (easy aggregation of multiple vocabs).
Remark: think about a general annotation facility that is separate from specific moderation tooling.
Suggestion of a split between the following:
- Make it possible for people to make statements about a wide variety of objects
- Describe the systems that respond to that information
This allows expanding beyond just denylists: make it super generic, then build a system on top of it that does moderation/blocking.
I suggest (Bob Wyman):
- Adding "Annotation" to the AS/AP vocabulary. Annotations would allow anyone to associate information with another AP object. Annotations would have types; one type might be "Moderation Signals". Other types would be defined to support a wide variety of other applications which need an ability to "make statements about objects" that are not Replies.
- Relying on the data communicated in Annotations, various instance-specific systems might decide to block or otherwise modify some objects that flow through the system.
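A purely hypothetical sketch of what such an Annotation object might look like, expressed as a Python dict in JSON-LD style. Nothing in it is part of the ActivityStreams vocabulary today; the `annotationType` field, the `ModerationSignal` value, and all URIs are invented for illustration.

```python
import json

# Hypothetical Annotation: a statement one actor makes about another
# AP object, without being a Reply to it. All names/URIs are made up.
annotation = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Annotation",                  # proposed type, not standard
    "annotationType": "ModerationSignal",  # hypothetical discriminator
    "actor": "https://moderated.example/actors/trust-team",
    "object": "https://spam.example/notes/123",  # the annotated object
    "content": "Coordinated spam campaign",
    "published": "2023-09-30T12:00:00Z",
}

print(json.dumps(annotation, indent=2))
```

The point of the split above is that this object only *states* something; whether a receiving instance filters, rejects, or ignores the annotated object is decided by a separate, instance-specific system.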
- Emelia: Some problems with adding to AS/AP are ensuring federation happens as expected, and the speed at which the specifications evolve and get agreed upon. We need a solution now, not in a year's time (though we certainly need people thinking about the longer-term picture too).
How do instance moderators communicate with each other about potentially blocking each other, or when the admins of an instance are not moderating it well?
Emelia: There are backchannels with rooms of admins; these conversations happen with admins there.
When you receive a report from a remote instance, you cannot give that remote instance any real reply. You can only leave a note on the report and have a conversation about it internally, which attracts an audit log. We should probably add federation to moderation notes: not just talk to the internal team about a moderation report, but talk to the origin instance's team about it: "for this report, this is why you should take some action".
Giving moderators a built-in means of communicating between instances is needed.
- We may want to limit discussion to moderation reports and actions, so as to avoid arbitrary messages of low relevance clogging up this sort of timeline.
- A related aspect is giving each instance an @operator and @moderator actor, such that regular users can flag things to moderators (though I'm not sure we need this).
The other part of this is being polymorphic with Reports: currently only Pixelfed supports this, but Mastodon and other fediverse software will have to as well. We need people to be able to report more than just posts: for instance, reporting links, hashtags, images, etc. Right now Mastodon treats all reports as reports about a specific account with some associated statuses, which leads to cases where punitive action is perhaps taken against the account instead of just the content.
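To make the polymorphism concrete: ActivityStreams already defines a `Flag` activity for reports, and a polymorphic report could put something other than an account in its `object`. The sketch below, a Flag targeting a hashtag, is speculative; the URIs are made up, and flagging a `Hashtag` object is the proposed extension under discussion, not current Mastodon behaviour.

```python
# Hypothetical polymorphic report: a Flag (real AS activity type)
# whose object is a hashtag rather than an account plus statuses.
report_hashtag = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Flag",
    "actor": "https://moderated.example/actors/alice",
    "object": {
        "type": "Hashtag",        # extension type used by Mastodon tags
        "name": "#spamcampaign",
    },
    "content": "Hashtag used to coordinate spam",
}

# The receiving software can branch on the reported object's type
# instead of assuming every report is about an account.
print(report_hashtag["object"]["type"])  # Hashtag
```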
If you’d like access to the proposals I’m currently writing, please drop me a line on @thisismissem@hachyderm.io and I’ll try to either include you in the peer-review process or add you to a list of people to notify post peer-review.