Fediverse of Truth: Disinformation in the Fediverse in the Post-truth Era
/2025-06/session/6-c/
Convener: Eugenus Optimus (@ujeenator@ujeenator.net)
Participants who chose to record their names here:
- Seth Goldstein (@phillycodehound@indieweb.social)
- johannab@<many places> (cosocial.ca, pxlfd.ca, etc.)
- Jayne Samuel-Walker (@tcmuffin@toot.wales)
Notes
Presentation: https://nextcloud.ujeenator.net/s/QLpMCCqbeQGMbwj
Internet doesn’t feel right lately.
Dead Internet Theory -> it feels like this more and more nowadays.
~1/2 of internet traffic is bots (good bots and bad bots)
Human traffic on the internet was growing, but is now declining
Bad bots are gaining ground
It has been said that 57% of content is AI-influenced or AI-made
The trend line points toward more bad bots and more bot-created content
AI is everywhere!!!!
What about more education on spotting AI? Will this help or will AI become indistinguishable soon?
Notable patterns to spot bot/AI activity, currently:
- firstname-lastname-numbers combos for usernames (a small sketch of this heuristic follows this list)
- when asked "why are you creating an account?", moderators note a pattern where the response references referrals from accounts that don't exist
- the ability to jailbreak them, i.e. get the bot to spit out its instructions or give some other response that exposes it as AI, e.g. by asking specific questions in comments/replies
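As an illustration, the first pattern above is easy to encode as a heuristic. This is a minimal sketch, not something shown in the session; the regex and examples are assumptions, and a real moderation tool would treat a match as one weak signal among several, since plenty of humans pick usernames like this too.

```python
import re

# Heuristic from the session notes: usernames shaped like
# firstname-lastname-numbers (e.g. "jane-doe-84721") are a weak signal
# of bulk-registered accounts. Pattern: two alphabetic words joined by
# "-", "_" or ".", followed by two or more digits.
SUSPICIOUS_USERNAME = re.compile(r"^[a-z]+[-_.][a-z]+[-_.]?\d{2,}$", re.IGNORECASE)

def looks_bulk_registered(username: str) -> bool:
    """Return True if the username matches the firstname-lastname-numbers shape."""
    return bool(SUSPICIOUS_USERNAME.match(username))

if __name__ == "__main__":
    for name in ["jane-doe-84721", "mark_smith_2024", "tcmuffin", "phillycodehound"]:
        print(f"{name!r}: {looks_bulk_registered(name)}")
```

On its own this flags humans too; it is only useful combined with the other signals in the list, such as implausible referral answers and jailbreak responses.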
We were introduced yesterday to the term “grey bricking” - making ourselves utterly uninteresting to other agents on the internet.
- what are the tools to do that? (one partial example is sketched after this list)
- what are the social constructs that can do that? For example, hachyderm, wandering.shop, and cosocial.ca are among instances that have a "barrier" to joining, but one that humans can manage
- who are the luminaries with domain expertise in this, whom we can learn from? I'm thinking of a couple, mostly on Bsky just now: Conspirador Norteño and Rahaeli. Both are very good to learn from.
- we need some campaign, analogous to public health info on handwashing or food safety, on cognitive public health. I have no idea if those of us here can enact that from within.
- unfortunately we have passed a tipping point - our activities need to include mitigation and adaptation, because repair/prevention alone is already too little, too late
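One concrete, if partial, answer to the "tools" question above (an assumption on my part, not something named in the session) is a robots.txt that asks known AI crawlers to skip your site. GPTBot (OpenAI), CCBot (Common Crawl), and ClaudeBot (Anthropic) are real, published crawler user agents; the list below is illustrative and incomplete, and compliance is voluntary, so this deters only well-behaved agents:

```
# robots.txt - ask known AI crawlers not to crawl this site.
# Illustrative and incomplete; honored only by well-behaved crawlers.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: ClaudeBot
Disallow: /
```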
Worth a read on misinformation from state-sponsored actors:
- https://secondaryinfektion.org/
- There was another similar campaign
Eugenus introduced, in essence, the concept of a web of trust.
Emelia noted that a web of trust doesn't solve this issue, as such systems are easily gamed and manipulated, especially where there's a financial incentive. E.g., if you use your account with an application that has the write:follows scope, the owner of that application may be pressured, for economic or political reasons, to quietly manipulate your follow graph without your knowledge (a sketch of the mechanics follows below).
We've already seen this kind of manipulation with Cambridge Analytica.
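To make the write:follows risk concrete, here is a minimal sketch (mine, not from the session) of what an application holding a user's token with that scope could do. The instance URL, token, and account ID are placeholder assumptions; the endpoint itself, POST /api/v1/accounts/:id/follow, is Mastodon's documented follow API, which requires exactly this scope:

```python
import requests

# Placeholder values: a real instance, a token the user once granted to a
# third-party app (with the write:follows scope), and whichever account the
# app's operator has been paid or pressured to promote.
INSTANCE = "https://mastodon.example"
USER_TOKEN = "user-granted-oauth-token"
TARGET_ACCOUNT_ID = "109876543210"

# Mastodon's follow endpoint. Nothing here involves or notifies the user;
# the app can quietly add follows - and thereby shape the user's timeline -
# at any time, long after the original authorization.
resp = requests.post(
    f"{INSTANCE}/api/v1/accounts/{TARGET_ACCOUNT_ID}/follow",
    headers={"Authorization": f"Bearer {USER_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print("relationship after follow:", resp.json())
```

The mitigation sits on the authorization side: grant narrowly scoped tokens, and periodically audit which applications still hold write:follows.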
This is not specific to our domain, but it is very thought-provoking, and it has applicable commentary that developers and community convenors should consider in their contexts: https://how.complexsystems.fail/
We are, and we are within, layers of complexity; this is not an open-and-shut situation. Important things I have pulled out of this: you cannot eliminate human factors; you must have feedback, redundancy, and contingencies for your system to achieve its purpose.
We are out of time, but I (@johannab) am recalling an interesting operation from 10+ years back on Ravelry, where a massive database of content was reviewed and classified through a brilliant human-engineering ploy. But it had some particular characteristics. It's an interesting thing to think about from the "moderator" view. Happy to share later.