
Jerry Bell :bell: :llama: :verified_paw: :verified_dragon: :rebelverified:​

Jerry Bell :bell: :llama: :verified_paw: :verified_dragon: :rebelverified:​ (@jerry@infosec.exchange)

Federation EN Thu 01.08.2024 15:50:09

I've been participating in the fediverse for about 8.5 years now, and have run infosec.exchange as well as a growing number of other fediverse services for about 7.5 of those years. While I am generally not the target of harassment, as an instance administrator and moderator, I've had to deal with a very, very large amount of it. Most commonly that harassment is racism, but to be honest we get the full spectrum of bigotry here in different proportions at different times.

I am writing this because I'm tired of watching the cycle repeat itself, I'm tired of watching good people get harassed, and I'm tired of the same trove of responses that inevitably follows. If you're just in it to be mad, I recommend chalking this up to "just another white guy's opinion" and moving on to your next read.

The situation nearly always plays out like this:

A black person posts something that gets attention. The post and/or person's account clearly designates them as being black.

A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

A small army of "helpful" fedi-experts jumps in with replies to point out how Mastodon provides all the tools one needs to block bad actors.

Now, more exasperated, the victim exclaims that it's not their job to keep racists in check - this was (usually) cited as a central reason for joining the fediverse in the first place!

About this time, the sea lions show up in replies to the victim, accusing them of embracing the victim role, trying to cause racial drama, and so on. After all, these sea lions are just asking questions since they don't see anything of what the victim is complaining about anywhere on the fediverse.

Lots of well-meaning white folk usually turn up about this time to shout down the sea lions and encourage people to believe the victim.

Then time passes... People forget... A few months later, the entire cycle repeats with a new victim.

Let me say that the fediverse has both a bigotry problem that tracks with what exists in society at large and a troll problem. The trolls will manifest as racist, anti-trans, anti-gay, anti-women, anti-furry, and whatever else suits their fancy when the opportunity presents itself. The trolls coordinate, cooperate, and feed off each other.

What has emerged, in my view, on the fediverse is a concentration of trolls onto a certain subset of instances. Most instances do not tolerate trolls, and with some notable exceptions, trolls don't even bother joining "normal" instances any longer. There is no central authority that can prevent trolls from spinning up fediverse software on their own servers using their own domain names and doing their thing on the fringes. On centralized social media, people can be ejected, suspended, banned, and unless they keep trying to make new accounts, that is the end of it.

The tools for preventing harassment on the fediverse are quite limited, and the specifics vary between types of software - for example, some software, like Pleroma/Akkoma, lets administrators filter out certain words, while Mastodon, which is what the vast majority of the fediverse uses, allows both instance administrators and users to block accounts and block entire domains, along with some things in the middle like "muting" and "limiting". These are blunt instruments.
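To make those blunt instruments concrete, here is a minimal sketch of what the user-level controls look like when driven through Mastodon's REST API. The instance URL, token, and example handles are placeholders, and the script assumes an access token with the appropriate read/write scopes; treat it as an illustration of the mechanism rather than a finished tool.

```python
# A minimal sketch of Mastodon's user-level moderation actions via its REST API.
# INSTANCE, TOKEN, and the example handles/domains are placeholders.
import requests

INSTANCE = "https://example.social"   # hypothetical home instance
TOKEN = "YOUR_ACCESS_TOKEN"           # user token with read/write scopes
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def lookup_account(acct: str) -> str:
    """Resolve a 'user@domain' handle to the local account id."""
    r = requests.get(f"{INSTANCE}/api/v1/accounts/lookup",
                     params={"acct": acct}, headers=HEADERS)
    r.raise_for_status()
    return r.json()["id"]

def block_account(acct: str) -> None:
    """Block a single account so it can no longer follow or interact with you."""
    requests.post(f"{INSTANCE}/api/v1/accounts/{lookup_account(acct)}/block",
                  headers=HEADERS).raise_for_status()

def mute_account(acct: str) -> None:
    """Mute: hide the account's posts without severing the relationship."""
    requests.post(f"{INSTANCE}/api/v1/accounts/{lookup_account(acct)}/mute",
                  headers=HEADERS).raise_for_status()

def block_domain(domain: str) -> None:
    """Personal domain block: hide an entire server, for this user only."""
    requests.post(f"{INSTANCE}/api/v1/domain_blocks",
                  data={"domain": domain}, headers=HEADERS).raise_for_status()

if __name__ == "__main__":
    block_account("troll@badplace.example")   # hypothetical examples
    block_domain("badplace.example")
```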

To some extent, the concentration of trolls works in favor of instance administrators. We can block a few dozen/hundred domains and solve 98% of the problem. There have been some solutions implemented, such as block lists for "problematic" instances that people can use; however, those block lists often become polluted with the politics of the maintainers, or at least that is the perception among some administrators. Other administrators come into this with the view that people should be free to connect with whomever they like on the fediverse, and delegate the responsibility for deciding whom to block to the user.
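As a sketch of what "block a few dozen/hundred domains" looks like in practice, the snippet below applies a shared blocklist through Mastodon's admin API. The two-column domain,severity CSV layout is a made-up example (not Mastodon's own export format), the instance and token are placeholders, and the token would need admin-level permission for domain blocks; it is meant to illustrate the mechanism, not to serve as a drop-in tool.

```python
# Sketch: apply a shared domain blocklist through Mastodon's admin API.
# The "domain,severity" CSV layout is a made-up example, not Mastodon's export
# format; INSTANCE and TOKEN are placeholders for an admin-authorised token.
import csv
import requests

INSTANCE = "https://example.social"
TOKEN = "ADMIN_ACCESS_TOKEN"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def apply_blocklist(path: str) -> None:
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):            # expects columns: domain, severity
            resp = requests.post(
                f"{INSTANCE}/api/v1/admin/domain_blocks",
                data={
                    "domain": row["domain"],
                    "severity": row.get("severity", "suspend"),  # silence | suspend | noop
                },
                headers=HEADERS,
            )
            # A 422 usually means the domain is already blocked; skip it quietly.
            if resp.status_code != 422:
                resp.raise_for_status()

if __name__ == "__main__":
    apply_blocklist("shared_blocklist.csv")       # hypothetical file name
```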

For this and many other reasons, we find ourselves with a very unevenly federated network of instances.

With this in mind, if we take a big step back and look at the cycle of harassment I described above, it looks like this:

A black person joins an instance that does not block many (or any) of the troll instances.

That black person makes a post that gets some traction.

Trolls on some of the problematic instances see the post, since they are not blocked by the victim's instance, and begin sending extremely offensive and harassing replies. A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

Cue the sea lions. The sea lions are almost never on the same instance as the victim. And they are almost always on an instance that blocks those troll instances I mentioned earlier. As a result, the sea lions do not see the harassment. All they see is what they perceive to be someone trying to stir up trouble.

...and so on.

A major factor in your experience on the fediverse has to do with the instance you sign up to. Despite what the folks on /r/mastodon will tell you, you won't get the same experience on every instance. Some instances are much better at keeping the garden weeded than others. If a person signs up to an instance that is not proactive about blocking trolls, they will almost certainly be exposed to the wrath of trolls. Is that the Mastodon developers' fault for not figuring out a way to more effectively block trolls through their software? Is it the instance administrator's fault for not blocking troll instances/troll accounts? Is it the victim's fault for joining an instance that doesn't block troll instances/troll accounts?

I think the ambiguity here is why we continue to see the problem repeat itself over and over - there is no obvious owner nor solution to the problem. At every step, things are working as designed. The Mastodon software allows people to participate in a federated network and gives both administrators and users tools to control and moderate who they interact with. Administrators are empowered to run their instances as they see fit, with rules of their choosing. Users can join any instance they choose. We collectively shake our fists at the sky, tacitly blame the victim, and go about our days again.

It's quite maddening to watch it happen. The fediverse prides itself as a much more civilized social media experience, providing all manner of control to the user and instance administrators, yet here we are once again wrapping up the "shaking our fist at the sky and tacitly blaming the victim" stage in this most recent episode, having learned nothing and solved nothing.

Doug

Doug (@doug@union.place)

Federation EN Thu 01.08.2024 15:59:59

@jerry
I agree with everything you observe, the cycle is both predictable and all too frequent.

What concerns me the most, and I will pick on Mastodon here as the predominant platform, is that the devs do not sufficiently consider safety as a priority, nor, seemingly, as a factor in their design decisions. It feels like it would take a fork to properly implement safety mechanisms to counter the apparent race to "help build engagement".

Michael Stanclift

Michael Stanclift (@vmstan@vmst.io)

Federation EN Thu 01.08.2024 16:32:15

@doug @jerry I'm going to stand up for the devs here and say that they absolutely do factor in these things, just not always in the ways that are most apparent. There are a number of features that don't get added (at least as quickly as folks demand) specifically because of their impact on user privacy, safety, security, etc. (Quote toots, for example.)

There's a triad of engagement, safety, and accessibility that has to be factored into everything. Then how those features are maintained going forward.

Jerry Bell :bell: :llama: :verified_paw: :verified_dragon: :rebelverified:​

Jerry Bell :bell: :llama: :verified_paw: :verified_dragon: :rebelverified:​ (@jerry@infosec.exchange)

Federation EN Thu 01.08.2024 16:35:56

@vmstan @doug Additionally, I am not sure what additional safety mechanisms are missing, to be honest. Perhaps making block lists more frictionless? Allowing admins to block certain words? (Which, btw, would cause its own set of backlash for filtering out legitimate uses of some words)...

Renaud Chaput

Renaud Chaput (@renchap@oisaur.com)

Federation EN Thu 01.08.2024 16:46:34

@jerry word-based filtering has many, many issues, as do server blocklists. Before building tools that reinforce this, we want those tools to not be invisible to users and to provide some auditing. Not doing so, in our experience, creates very bad experiences for users.
Add to that the fact that being a federated network makes most of these things much more difficult to implement properly.
@vmstan @doug

Renaud Chaput

Renaud Chaput (@renchap@oisaur.com)

Federation EN Thu 01.08.2024 16:51:20

@jerry and this is also why we introduced the severed relationship mechanism, as well as the (still needing improvements) filtered notification system. Now that we have those, which allow more auditing and decision visibility, we will need to be able to add more tools, like blocklist syncing.
@vmstan @doug

flere-imsaho

flere-imsaho (@mawhrin@circumstances.run)

Federation EN Thu 01.08.2024 19:20:35

@renchap mind, pleroma implemented things like MRF (the Message Rewrite Facility) years ago (there's a helpful thread from ariadne conill that lists the pleroma moderation/security features); mastodon frequently ignored calls for implementation of such features or delayed them for years, and rochko managed to alienate many potential contributors by doing things like dropping already-reviewed pull requests with implemented features because something irritated him.
@jerry @vmstan @doug

adamrice

adamrice (@adamrice@c.im)

Federation EN Thu 01.08.2024 17:11:42

@jerry @vmstan @doug Something that might help would be allowing individuals to subscribe to curated block lists, not just admins. Not sure how disruptive that would be to the fediverse.

bumblefudge

bumblefudge (@by_caballero@mastodon.social)

Federation EN Thu 01.08.2024 21:19:58

@adamrice @jerry @vmstan @doug blockpartyapp.com/#blockpartyc
^ This was a winning model, back when twitter's API was open

tom jennings

tom jennings (@tomjennings@tldr.nettime.org)

Federation EN Thu 01.08.2024 23:40:19

@adamrice @jerry @vmstan @doug

User-subscribable block lists seem both incredibly useful and technically less challenging.

adamrice

adamrice (@adamrice@c.im)

Federation EN Fri 02.08.2024 16:32:38

@nikclayton @tomjennings @jerry @vmstan @doug It is cool that you are working on this.

I’m not really a programmer, but it seems to me the bigger technical challenge with per-user block lists is preventing blocked griefers from seeing posts by the user with the block list. I would want to be invisible to those who I block.

Nik

Nik (@nikclayton@mastodon.social)

Federation EN Fri 02.08.2024 17:03:01

@adamrice @tomjennings @jerry @vmstan @doug Yes. So to avoid confusion.

If you **block** an account or server on Mastodon that's what happens (docs.joinmastodon.org/user/mod).

The changes will drive client-side **server filtering**. This is not as "strong" as a block (as noted in the caveats section of the post) but it also avoids collateral damage.

The UX should make it easy for a user to go from "I am looking at a filtered post" to "I have blocked that account or server".

dynamic

dynamic (@dynamic@social.coop)

Federation EN Fri 16.08.2024 22:26:10

@nikclayton @adamrice @tomjennings @jerry @vmstan @doug

I like the idea of user-level blocklists. I'm not enamored with the feature being confined to an app, but I guess it might be better than nothing.

An important caveat here is that if you make a *public* Mastodon post there is no way of completely preventing bad actors from viewing it, even if you block those users or instances.

Public posts can be viewed by loading their URL in any browser.

Doug

Doug (@doug@union.place)

Federation EN Thu 01.08.2024 17:56:47

@jerry
I think there are a lot of marginalised people - users, mods and admins - who would have a lot to say about additional safety features, and would appreciate being consulted in design and testing before it's released.
@vmstan

S Vermin Rose

S Vermin Rose (@rose@linuxrocks.online)

Federation EN Thu 01.08.2024 18:25:05

@jerry As we all know, Trust and Safety is hard, and a challenge is that when it fails it hits some users far harder than others. The idea that it's unduly onerous on those users to block trolls is new to me - I'm not a domain expert. But I want to hear Black voices, so their problem is, to an extent, my problem. Could a mitigation be curated block lists? I have a foggy recollection of such a facility being available on a certain legacy microblogging platform.

draeath

draeath (@draeath@social.sdf.org)

Federation EN Thu 01.08.2024 18:37:09

@jerry @vmstan @doug I've seen several people asking for some means to sync block lists cooperatively between instances. Not for post content, but for accounts etc.

Does that seem like a reasonable ask?
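One building block for this kind of syncing already exists: Mastodon instances that choose to publish their domain blocks expose them at a public endpoint, so a peer can at least compare lists. Below is a rough sketch (Python, with hypothetical instance and helper names) that fetches a peer's published blocklist and reports which domains the local instance has not blocked; actually applying any of them would still be a deliberate admin action, for the reasons discussed above.

```python
# Sketch: compare a peer instance's published domain blocks against our own.
# Uses the public endpoint instances expose when they make their blocklist
# visible; instance names and helper names here are hypothetical.
import requests

def published_blocks(instance: str) -> set[str]:
    """Domains an instance publicly reports blocking (entries may be partially obfuscated)."""
    r = requests.get(f"https://{instance}/api/v1/instance/domain_blocks")
    r.raise_for_status()          # 404/403 if the instance does not publish its list
    return {entry["domain"] for entry in r.json()}

def candidate_blocks(ours: str, theirs: str) -> set[str]:
    """Domains the peer blocks that we do not - candidates to review, not auto-apply."""
    return published_blocks(theirs) - published_blocks(ours)

if __name__ == "__main__":
    for domain in sorted(candidate_blocks("example.social", "peer.example")):
        print(domain)
```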

Petra van Cronenburg

Petra van Cronenburg (@NatureMC@mastodon.online)

Federation EN Thu 01.08.2024 20:50:55

@jerry I have no idea about the programming. But it would already help if personally blocking someone were a real block and not only a "silencing". Often I can still read everything in profiles that I blocked long ago (only sometimes is the profile "not available"). So trolls often take screenshots to harass people who have blocked them.

The other feature I liked on X: closing a post completely to comments. A lot of people miss it.

@vmstan @doug

counterinduration

counterinduration (@counterinduration@kolektiva.social)

Federation EN Thu 01.08.2024 20:59:02

@jerry @vmstan @doug

I think opt out is a bad model for federation.

Doug

Doug (@doug@union.place)

Federation EN Thu 01.08.2024 18:03:54

@vmstan
I have the utmost respect for the hard work of the devs, but I read the public roadmap and see barely any feature that relates to safety or accessibility.

I don't doubt it is going to be an aspect of some of the work, but read the original post of the thread, and where do we think anything is being actively worked on or planned that could alleviate the problems, for users or admins?

@jerry

Patrick Georgi

Patrick Georgi (@patrick@retro.social)

Federation EN Thu 01.08.2024 18:41:21

@doug Seeing Eugen's response to requests in that space over the years, I'm convinced it would take a fork, indeed. (and since it's his toy, he gets to choose what his take on Fediverse software development focuses on.)

Or moving to different software. For example, GoToSocial seems to be more interested in implementing safety features (with some done, some in the pipeline).

The observation that it takes a fork is usually where the story ends, though. I wonder how much of that is learned helplessness from similar campaigns over at Facebook, Tumblr, Twitter, Reddit et al where the only alternative to "complain and succeed in getting things on the roadmap" has been "complain and nothing happens".

The fediverse _does_ provide more options (such as forking), but they require somebody to take action.

Andrew Leahey

Andrew Leahey (@andrew@esq.social)

Federation EN Thu 01.08.2024 18:36:08

@jerry

Do you happen to maintain a list of instances infosec.exchange has defederated from?

I know you mention it's only 98% effective, but for a smaller instance like us (esq.social) that might go a long way.

Jerry Bell :bell: :llama: :verified_paw: :verified_dragon: :rebelverified:​

Jerry Bell :bell: :llama: :verified_paw: :verified_dragon: :rebelverified:​ (@jerry@infosec.exchange)

Federation EN Thu 01.08.2024 18:41:23

@andrew they are available on our /about page, toward the bottom: infosec.exchange/about. I can provide them in a CSV file if you like.

Andrew Leahey

Andrew Leahey (@andrew@esq.social)

Federation EN Thu 01.08.2024 18:46:42

@jerry

No worries I can pull it down, thanks so much -- should have looked closer before bothering you. Eyes glazed right over it. Thanks!

Becky

Becky (@RenewedRebecca@oldbytes.space)

Federation EN Thu 01.08.2024 19:24:42

@jerry Honestly, I think part of the problem is thinking that server-admin moderation is the main way of taking care of trolls.

What if mastodon & other Fediverse clients let users subscribe to blocklists?

Any user could subscribe to the blocklist that matches their situation best.

There would still be a place for server moderation, too. No one in their right mind wants to see posts from nazi.social hit their server.

Jerry Bell :bell: :llama: :verified_paw: :verified_dragon: :rebelverified:​

Jerry Bell :bell: :llama: :verified_paw: :verified_dragon: :rebelverified:​ (@jerry@infosec.exchange)

Federation EN Thu 01.08.2024 19:25:26

@RenewedRebecca that seems like a great idea!

Becky

Becky (@RenewedRebecca@oldbytes.space)

Federation EN Thu 01.08.2024 19:26:46

@jerry We’d probably need a fork to pull it off, but I think it would be worth it.

Wendy Nather

Wendy Nather (@wendynather@infosec.exchange)

Federation EN Thu 01.08.2024 19:42:36

@RenewedRebecca @jerry The problem is that blocklists cut both ways: they can be used for protection and also be abused. I trust Jerry more than I trust any random creators of blocklists.

Becky

Becky (@RenewedRebecca@oldbytes.space)

Federation EN Thu 01.08.2024 21:36:56

@wendynather @jerry I totally agree with you. Jerry’s great, but on servers that blindly use shared blocklists, you’re still running into the same problem.

The overall problem won’t be solved on a case by case basis. It’s time to overhaul one of the biggest problems the Fediverse has.

Patrick $8 :verified:

Patrick $8 :verified: (@phurd@infosec.exchange)

Federation EN Thu 01.08.2024 19:46:55

@RenewedRebecca putting the responsibility of filtering and moderation on users is exactly the problem that the users Jerry describes are facing

Becky

Becky (@RenewedRebecca@oldbytes.space)

Federation EN Thu 01.08.2024 19:57:55

@phurd that’s not what I said, but thanks for playing.

Patrick $8 :verified:

Patrick $8 :verified: (@phurd@infosec.exchange)

Federation EN Thu 01.08.2024 19:59:33

@RenewedRebecca you didn't say "Any user could subscribe to the blocklist that matches their situation best."?

Becky

Becky (@RenewedRebecca@oldbytes.space)

Federation EN Thu 01.08.2024 21:33:54

@phurd

I also said that there’d still be a place for server-based moderation too, for example, I don’t think there’s anybody on truth.social that 90% of us want to see here.

Community-based blocklists don’t preclude server-based ones.

The problem that server-shared blocklists always run into is that one or two really active people will inject their own politics into the process. If you have a known TERF as one of the "trusted sources", guess what group gets screwed?

The other problem with 100% server-based blocklists is that just about nobody is going to block .social. And that particular server is pretty problematic.

Subscribing to a community-based list spreads the moderation out to a whole bunch of diverse people, so that no individual user is left in the "well, you can just block the bad guys yourself" situation.

We’re not going to win with an either/or kind of system. It’s got to be both/and instead.

Patrick $8 :verified:

Patrick $8 :verified: (@phurd@infosec.exchange)

Federation EN Thu 01.08.2024 21:52:33

@RenewedRebecca I think Jerry's original post accurately describes the problems with both server-based and user-based moderation. The insult-to-injury portion of the issue is that the impacted users are directed to assume the responsibility of user-based moderation. So while additional user-based moderation tools may be helpful, they are also an additional barrier to entry as part of the Mastodon sign-up process and provide additional ammo to the "fedi-experts" as Jerry called them

One advantage and underdeveloped feature of the fediverse is migrating to another server. Improving the migration workflow would allow users to leave instances that are not using their server-based moderation tools sufficiently. This could potentially be paired with an invite system, to facilitate the conversation in which users describe their experience with harassment.

Becky

Becky (@RenewedRebecca@oldbytes.space)

Federation EN Thu 01.08.2024 21:58:07

@phurd Again, since you don’t quite seem to be listening, I am not advocating for user-based moderation.

I am advocating for community-based moderation.

Heck, when signing up, a new user could be given a nice curated group of blocklists to subscribe to. We spread the moderation task across the entire community instead of placing the burden on the victims.

Making it easier to move instances would be great for a lot of reasons, but it doesn’t remotely solve the racism-on-fedi or transphobia-on-fedi problems. (Not to mention the .social lack of moderation problem.)

Patrick $8 :verified:

Patrick $8 :verified: (@phurd@infosec.exchange)

Federation EN Thu 01.08.2024 22:01:51

@RenewedRebecca Users making decisions about what blocklist to subscribe to is a burden. I consider this decision to be part of user-based moderation since it is an action taken by users individually. It's fine if you disagree about that

Mono

Mono (@mono@mastodon.world)

Federation EN Thu 01.08.2024 20:34:58

@RenewedRebecca @jerry That is already possible if your instance allows user applications. You can read and write your user blocklist using the API. So you can already automate it today. There is just no service a user can subscribe to that does this for him.
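For illustration, here is roughly what that automation could look like today, assuming a curated list published as plain text with one user@domain handle per line (a made-up format and URL) and a user access token permitted to write blocks; the lookup and block endpoints used are Mastodon's standard ones.

```python
# Sketch: "subscribe" a user account to a curated blocklist via the API.
# The list URL and its one-handle-per-line format are made up for illustration;
# the lookup and block endpoints are Mastodon's standard ones.
import requests

INSTANCE = "https://example.social"        # the user's home instance (placeholder)
TOKEN = "USER_ACCESS_TOKEN"                # token allowed to write blocks (placeholder)
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
LIST_URL = "https://lists.example/curated-blocklist.txt"   # hypothetical source

def subscribe_to_blocklist(url: str) -> None:
    handles = requests.get(url).text.splitlines()
    for acct in filter(None, (h.strip() for h in handles)):
        lookup = requests.get(f"{INSTANCE}/api/v1/accounts/lookup",
                              params={"acct": acct}, headers=HEADERS)
        if lookup.status_code != 200:
            continue                       # account unknown to this instance; skip
        account_id = lookup.json()["id"]
        requests.post(f"{INSTANCE}/api/v1/accounts/{account_id}/block",
                      headers=HEADERS).raise_for_status()

if __name__ == "__main__":
    subscribe_to_blocklist(LIST_URL)
```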

Becky

Becky (@RenewedRebecca@oldbytes.space)

Federation EN Thu 01.08.2024 21:35:00

@mono @jerry Which is great, but shouldn’t this be solved for everybody, instead of expecting people to have yet one more thing to investigate before signing up?

Magnesium

Magnesium (@magnesium@infosec.exchange)

Federation EN Thu 01.08.2024 22:29:45

@RenewedRebecca @mono @jerry I'm genuinely curious how this would be constructed in practice.

Would a server get onto blocklists based on percentage of bad actors, or is there some sort of cross-instance moderation board that you raise complaints to and if those go unaddressed that instance ends up on a blocklist?

Becky

Becky (@RenewedRebecca@oldbytes.space)

Federation EN Thu 01.08.2024 22:36:38

@magnesium @mono @jerry

I think in general, we should focus on blocking people, not servers. At the same time, if an entire server is terrible, then it should be an instance admin problem.

So, let’s say you’ve subscribed to the “no MAGA” blocklist, and a new fan of the Evil Orange joins .social… It’s not going to take very long before he’s found and dropped into the bucket.

But yeah, you bring up a good point… If the dude above gets put on the no-MAGA list, let’s say incorrectly, there’d have to be some sort of way for him to appeal.

Nik

Nik (@nikclayton@mastodon.social)

Federation EN Fri 02.08.2024 16:17:19

@RenewedRebecca [Removed Jerry, as he's already been notified about this in a different part of the thread]

I think you're going to be interested in mastodon.social/@pachli/112892