FediForum™

Session: How can white people and guys improve the race and gender situation in the Fediverse?

/2023-09/session/3-c/

Convener: @jdp23@blahaj.zone

Participants who chose to record their names here:

  • narF (@narF@mstdn.ca)

  • Nathalie (@nvraemdonck@hci.social)

  • James Marshall (@jamesmarshall@sfba.social)

  • Dr.Implausible (@drimplausible@mastodon.online)

Notes

Summary:

  • social norms colliding, the tension field between federation and safer spaces

  • useful: bystander intervention best practices

  • moderation best practices and responsibilizing moderators

  • control of replies could make a big difference?

  • why aren’t there more Black people at FediForum?

  • need more space to talk about social issues – follow-on workshop (or whatever)?

Background on race:

Thoughts?

  • Because it’s an open platform, you can have social norm conflicts; unlike on Twitter, you can isolate yourself from other instances. There needs to be an element of transparency and openness: we’re going to defederate from these spaces because of our marginalized members. There also needs to be an element of social norm correction, and right now marginalized people are the ones who have to do it. It’s an architectural problem, but also a problem of people not wanting to listen to that.

  • TBD: followon with specific example

A lot of separate problems. White people boosting each other has a preferential effect – it becomes harder to get visible if you’re not white. But there’s also the opposite problem: people get exposed in ways they don’t want. It’s nuanced and very situational; cases have to be addressed individually. I want to have containers for conversations that are more about the social aspects of the fediverse. Choosing to hold FediForum on a weekday makes it inaccessible to a lot of people.

A lot of well-meaning white people boost, but that increases exposure, and reply-guyism gets worse. Does the extra visibility make all the stuff people already get even worse?

What do you do if you see reply-guyism happening? It’s a tradeoff: you don’t want to increase visibility, but you also don’t want to leave the person on their own. One option is to respond to the reply guy and untag the original person – you become the bait. You can also DM the person, saying “I’ve seen this person around, don’t take it personally.” Federation weirdness makes this worse: if somebody asks a question, 20 million people respond because they don’t see that people have already replied. Software improvement would help!
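One possible shape of the software fix mentioned above – names like `Reply` and `reply_warning` are invented for illustration, not any real fediverse client API – is a client that, before letting a user post yet another reply, surfaces how many replies this instance already knows about:

```python
# Minimal sketch (hypothetical names, not a real fediverse API): warn a user
# who is about to reply that others may already have answered the question.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reply:
    author: str
    text: str

def reply_warning(known_replies: list[Reply], threshold: int = 3) -> Optional[str]:
    """Return a warning string if the question already looks well answered.

    On the fediverse each instance only sees the replies it has fetched,
    so this count is a lower bound -- but it's still enough to prompt the
    user to read the thread before piling on.
    """
    n = len(known_replies)
    if n >= threshold:
        return f"At least {n} people have already replied - read the thread first?"
    return None
```

Even a lower-bound count like this would cut down on the "20 million identical answers" problem, since most repliers aren’t malicious – they just can’t see the thread state from their own instance.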

Is it helpful to have a deeper debate at a separate URL? Think of it more two-dimensionally: here are 10 ways to break this conversation down into subtopics, each having a different discussion. Some topics need deep debate, and you can’t have that in a flat Twitter-style conversation. Core to any debate is having good moderation … or is it? Is it enough to be able to hide the slur so that people can’t see it? Opinions differ.

Tactics:

  • boosting POC in situations where people want to be boosted, avoiding it in situations where they don’t want to be – it’s not obvious which is which

  • need best practices for dealing with Reply Guys - ask women who have talked about reply-guyism.

  • if you see somebody doing something shitty, respond to them - say “that’s not okay”. if that doesn’t happen, it becomes the norm.

‘Core to any good debate is moderation.’ How strong you want moderation to be affects who will want to participate in a conversation. There are technical fixes to keep bad stuff from being ‘visible’ by hiding it… But that can lead to a situation where people targeted by racist speech don’t see it (good) while people are still talking about them behind their back (bad). Bluesky is going more that route; it’ll be interesting to see how that works as they federate.

Nathalie’s research: we have different social norms, and they collide with each other. The central norms in our society are white and heteronormative. If we let people self-moderate, those become the central norms. If you want to change that and want marginalized people to be included, you have to change that dynamic. The idea that “everything needs to stay federated” doesn’t have to hold; it always leads to reinforcing the dominance. Essentially there’s a tension field between federation (“everyone can talk to everyone!”) and safe spaces (letting people determine what norms they find acceptable).

Context is important. Where is the reply happening, where is the conversation – whose house are we in? If somebody wants to branch off a conversation, say “that’s an interesting subject,” and take it somewhere else, that’s one thing; but replies to a thread live in somebody else’s post. Compare JWZ’s proposal that replies work more like blog threads. “Cyberspace is the point between two telephones.”

Limiting replies!!!!! (aka: I can limit my posts so that only my friends can reply; non-friends just aren’t allowed to reply.) (Also: just shutting off comments.)
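The reply-limiting idea above could be modeled roughly like this; a minimal sketch where the names (`ReplyPolicy`, `Post`, `may_reply`) are invented for illustration and don’t correspond to any real fediverse software’s API:

```python
# Hypothetical model of per-post reply controls: a post declares who may
# reply, and the server checks incoming replies against that policy.
from dataclasses import dataclass, field
from enum import Enum

class ReplyPolicy(Enum):
    ANYONE = "anyone"
    FOLLOWED = "followed"    # only accounts the author follows ("friends")
    MENTIONED = "mentioned"  # only accounts @-mentioned in the post
    NOBODY = "nobody"        # comments shut off entirely

@dataclass
class Post:
    author: str
    policy: ReplyPolicy
    mentioned: set[str] = field(default_factory=set)

def may_reply(post: Post, replier: str, author_follows: set[str]) -> bool:
    """Decide whether a reply should be accepted under the post's policy."""
    if replier == post.author:
        return True  # authors can always reply to their own posts
    if post.policy is ReplyPolicy.ANYONE:
        return True
    if post.policy is ReplyPolicy.FOLLOWED:
        return replier in author_follows
    if post.policy is ReplyPolicy.MENTIONED:
        return replier in post.mentioned
    return False  # ReplyPolicy.NOBODY
```

The interesting design question for the fediverse is enforcement: in a federated system the author’s server can refuse to distribute disallowed replies, but remote servers have to cooperate for the policy to hold everywhere.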

You have fragmentation in social networks. E.g. pushback from Europeans – “we don’t see race the way Americans do” – and Americans: “you invented it.” You want to be able to push back on people saying “that’s not how we do things,” but you also need to appreciate different shared understandings. Right now this is invisible; how do we make people more aware of it?

On TikTok, it’s the responsibility of the person creating the thread – sometimes they shut it down and lock comments. Downvotes have an influence. There are filtering options for the conversation. Community guidelines on TikTok are very heavy-handed – why am I getting a violation? Since joining there, I’ve had a “block early, block often, and don’t ask questions” policy. The tools have a lot to teach us. But also: https://www.nbcnews.com/pop-culture/pop-culture-news/months-after-tiktok-apologized-black-creators-many-say-little-has-n1256726

Nathalie: a classification of which affordances have an impact on social norm conflicts; interactivity: one-to-many, many-to-many. Twitter is fundamentally many-to-many – that’s why you need moderation, either centralized or decentralized. The fediverse has the right architecture (decentralized), but moderation needs to be the responsibility of moderators. https://cris.vub.be/ws/portalfiles/portal/92575001/TRANSLATION_Conceptual_framework_for_interaction_of_platform_features_FINAL.pdf

TBS - centralized? No

How close are we to AI for doing moderation? AI per instance: an instance’s users programming how they want the AI to act. But see https://futurism.com/the-byte/google-hate-speech-ai-biased. Who gets to train the AI?

How to transfer effort to privileged people?

Why are there so few Black people attending the FediForum?

  • timing - during the week is hard for people who are working

  • issues of tone that come across, feels like a white event, the organizers have contributed to it by their feedback to suggestions

  • format might not be appealing

  • very technical, so it pushes out the social – e.g. the blocklist session was all about the AP

  • price – even $2 is a lot for some people. There might be free tickets for those who request them, but requesting is itself a barrier.