@FediTips Might also be worth including https://github.com/mastodon/mastodon/issues/14762.
Thanks, that's probably closer to what was being requested so I've replaced the issue in the post with that one.
-
@FediTips No problem!
-
Some suggestions for fighting bigotry on the Fediverse:
- Challenge and report bigotry even if you're not the target, don't leave it to victims
- Listen to victims, don't question their honesty or demand a solution from them
- Don't remain neutral, take a stand
- If you're an admin, defederate servers that knowingly tolerate bigotry
- If you're comfortable using Microsoft GitHub, give a thumbs up for reply controls (https://github.com/mastodon/mastodon/issues/8565) & keyword flagging (https://github.com/mastodon/mastodon/issues/21306) on Mastodon
-
p.s. The reason I mentioned the two GitHub issues above is that they have come up in discussions by people who feel unsafe on here due to bigotry and abuse.
There were people specifically mentioning they wanted reply controls so they could pre-emptively prevent abusers replying to their posts.
There was also concern about Mastodon's "reactive" moderation where admins have to wait for reports. A flagging system would be "proactive", allowing admins to act without waiting for reports.
-
@FediTips precrime?
-
No. Proactive would mean if someone posts abuse the admin potentially sees it straight away and can immediately take action without waiting for the victim to see it and report it.
At the moment Mastodon is only reactive, so the only way an admin knows about abuse is if someone else reports it, by which time it may have already been seen by the victim and the damage is done.
-
@FediTips Bigotry? Where do I draw the line between what I should tolerate and what I should start doing something about? Twitter was mostly right-wing bigotry; here it's mostly left-wing.
-
@FediTips oh cool! I hope it works!
-
@FediTips precrime?
@Kierkegaanks @FediTips No, that's outside the technical capabilities of Fediverse software. The idea behind the flagging system is that it would auto-report any posts matching a certain filter and submit it to the mods for manual review. If someone posts "I hate [slur]s", it would be brought to the attention of the mods immediately, rather than after another user sees it and reports it (which can take a while)
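As an illustration only (the linked Mastodon issue doesn't specify an implementation, and every name and word in this sketch is hypothetical), a keyword-flagging filter of the kind described could be as simple as:

```python
import re

# Hypothetical admin-configured watchlist (placeholder words, not real slurs).
WATCHLIST = ["slur1", "slur2", "hateword"]

# One pattern with word boundaries, so watched words only match as whole words.
PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(w) for w in WATCHLIST) + r")\b",
    re.IGNORECASE,
)

def auto_flag(post_text: str) -> list[str]:
    """Return the watched words found in a post.

    A non-empty result would only queue the post for manual review by
    a human moderator -- no automatic removal, no action against the author.
    """
    return sorted({m.group(1).lower() for m in PATTERN.finditer(post_text)})
```

The point is that the expensive part (judging context) stays with a human; the code only shortens the delay between a post appearing and a moderator seeing it.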
-
I've edited my post to make it clearer

-
@FediTips I agree with a proactive approach. I have no idea about the technical side, but consider that on too many huge instances, moderators suffer stress and burnout. So here is another possibility.
Say a post receives 100 flags. That doesn't mark the author as a bigot, extremist, or whatever; everyone has their own reason to flag a post or user.
It happened to me a couple of days ago, when a left-wing extremist said "better 100 dead cops each day and a global change, than nothing"...
It's not about the cops, but saying "better dead than..." when talking about politics is a very frightening sign.
After enough flags, viewers would be warned, like a content warning: "message potentially disturbing: what do you want to do? Block the person? Block the instance?"...
Not like Facebook, where you can organize a big group of people to report a user and the algorithm shuts them down. Here, EVERY SINGLE USER would see a CW (or popup or whatever) that, if clicked, offers: "keep reading, reply, report, block user, block instance"...
It's up to the individual user. But in my opinion the post's author should _not_ see that they've been flagged. It's like when I moderated a Zoom room: we were all blind. One participant started to scream and insult people, we muted his mic, and he screamed at the moon for minutes!
-
@FediTips @Kierkegaanks It's not like there's a clear line for such things, is there? How would an outsider know? Maybe it's an in-joke, parody, culture, etc.
I think it's dangerous to suggest "taking action" without being asked. You may think you're always able to judge rightly, which, by the way, I could not say about myself. Anyway, it's like the Sword of Damocles, isn't it?
-
@FediTips I like the no-replies option, but GitHub is more than a simple hardware engineer can untangle.
-
Reports are never visible to the author; they just go to the admins and moderators, who then decide if there is a problem or not.
As far as I know, flags would work the same way as reports: the author of the post would never know about them, only the admins and moderators.
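The threshold-and-warning idea from the earlier comment (the 100-flag number and all names below are just that commenter's proposal, not anything Mastodon implements) could be sketched like this:

```python
from dataclasses import dataclass, field

FLAG_THRESHOLD = 100  # flags needed before viewers get a warning (proposed value)

@dataclass
class Post:
    author: str
    text: str
    # Users who flagged this post; never shown to the author.
    flagged_by: set = field(default_factory=set)

def flag(post: Post, user: str) -> None:
    """Record a flag. Each user counts once; the author is never notified."""
    post.flagged_by.add(user)

def render_for(post: Post, viewer: str) -> str:
    """What a given viewer sees.

    Above the threshold the text hides behind a CW-style prompt, and the
    decision (read, reply, report, block) stays with each individual
    viewer -- nothing is ever done automatically to the post or its author.
    """
    if len(post.flagged_by) >= FLAG_THRESHOLD and viewer != post.author:
        return ("[potentially disturbing -- read anyway / reply / report / "
                "block user / block instance]")
    return post.text
```

Note the design choice: because flags only change what each viewer sees rather than removing the post, organized mass-reporting can't silently take a post down the way the commenter describes happening on Facebook.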
-
The flagging system wouldn't cause any action to be taken; it would just alert a human admin about certain words and phrases. The admin then uses their judgement to decide whether any action needs to be taken.
There isn't anything automated here except telling the human admin there *might* be a problem.
For example, if someone uses words or phrases that are usually used as slurs, their posts would be flagged and the admin would look at the context to see if they're abusive.
-
@FediTips I mean an opportunity to suggest and vote for features and other improvements, based on their own platform and not on a Microsoft one.
@meissda @FediTips Someone else in this thread had a good point that "voting" for features may mean that the minorities who need the features most (e.g. non-white users) get out-voted by majorities that don't.
I think the model used by PMWiki for feature suggestions is a better one: rather than entertaining features which "might be nice", feature requests need a use case which can be demonstrated to already exist.