Some suggestions for fighting bigotry on the Fediverse:
- Challenge and report bigotry even if you're not the target; don't leave it to the victims
- Listen to victims; don't question their honesty or demand a solution from them
- Don't remain neutral; take a stand
- If you're an admin, defederate servers that knowingly tolerate bigotry
- If you're comfortable using Microsoft's GitHub, give a thumbs up for reply controls (https://github.com/mastodon/mastodon/issues/8565) and keyword flagging (https://github.com/mastodon/mastodon/issues/21306) on Mastodon

@FediTips I like the no-replies option, but GitHub is more than a simple hardware engineer can untangle.
-
@FediTips I agree with a pro-active approach. I have no idea technically, but let's consider that on too many huge instances, moderators experience stress and burnout. I think there is another possibility.
Let's assume an account receives 100 flags or dislikes, without marking it as bigoted, extremist, or whatever. Everyone has their own reasons to flag a post or user.
It happened to me a couple of days ago, when a left-wing extremist said "better 100 dead cops each day and a global change, than nothing"...
It's not about the cops, but saying "better dead than..." when talking about politics is a very terrifying sign.
And after the flags, you'd be warned, like a content warning: "message potentially disturbing: what do you want to do? Block the person? Block the instance?"...
Not like Facebook, where you can organize a big group of people to report a user and the algorithm shuts them down. Here, EVERY SINGLE USER should see a CW (or popup or whatever) that, when they click on the post, opens: "go on reading, reply, report, block user, block instance"...
It's up to the single user. But the post's author IMHO should _not_ see that they've been flagged or anything. It's like when I was the moderator of a Zoom room. We were all blind. One of them started to scream and insult people; we muted his mic, and he screamed at the moon for minutes!

Reports are never visible to the author; they just go to the admins and moderators, who then decide if there is a problem or not.
As far as I know, flags would work the same way as reports: the author of the post would never know about them, only the admins and moderators.
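The per-user warning flow proposed above could be sketched roughly like this. This is a minimal illustration only: the 100-flag threshold and the option labels come from the post, while the `Post` structure and `render_for_viewer` function are invented for the example and don't correspond to any real Mastodon code.

```python
# Sketch of the idea: once a post collects enough flags, every viewer
# sees it behind a content warning with per-user choices; the author
# is never notified, and nothing is removed automatically.
from dataclasses import dataclass

FLAG_THRESHOLD = 100  # assumed cutoff, taken from the example above

@dataclass
class Post:
    author: str
    text: str
    flags: int = 0  # visible only to admins/moderators, never the author

def render_for_viewer(post: Post) -> dict:
    """Return what a viewer's client would show for this post."""
    if post.flags >= FLAG_THRESHOLD:
        # Content hidden behind a warning; each viewer decides for themselves.
        return {
            "warning": "message potentially disturbing",
            "options": ["go on reading", "reply", "report",
                        "block user", "block instance"],
            "text": None,  # revealed only if the viewer clicks through
        }
    return {"warning": None, "options": [], "text": post.text}

post = Post(author="someone@example.social", text="...")
post.flags = 120
view = render_for_viewer(post)
print(view["warning"])  # heavily-flagged post is shown behind a warning
```

The key design point matching the thread: crossing the threshold changes only what *viewers* see, so a coordinated mass-report campaign can't silence anyone, and the author learns nothing.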
-
@FediTips @Kierkegaanks It's not like there is a clear line for such things, is there? How would an outsider know? Maybe it's an insider thing, parody, culture, etc.
I think it's dangerous to suggest "taking action" without being asked. You may think you're always able to judge correctly, which, by the way, I couldn't say about myself. Anyway, it's like the Sword of Damocles, isn't it?
The flagging system wouldn't cause any action to be taken, it would just alert a human admin about certain words and phrases. The admin then uses their judgement to decide whether any action needs to be taken.
There isn't anything automated here except telling the human admin there *might* be a problem.
For example if someone is using words or phrases that are usually used as slurs, they would be flagged up and the admin would look at the context to see if it's abusive.
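The keyword flagging described here could look something like the sketch below. It is only an illustration of the principle: the watchlist terms, queue, and function names are invented, and the one crucial property from the thread is preserved in the logic, namely that a match produces an alert for a human moderator and nothing else.

```python
# Minimal sketch of keyword flagging: a watchlist match only queues an
# alert for human review. No post is hidden, no account is actioned;
# the admin reads the full context and decides.
import re

WATCHLIST = {"slur1", "slur2"}  # placeholder terms an admin would configure

moderation_queue = []  # alerts awaiting human review

def check_post(post_id: int, text: str) -> None:
    words = set(re.findall(r"\w+", text.lower()))
    matched = words & WATCHLIST
    if matched:
        # Alert only: flagging up is not the same as taking action.
        moderation_queue.append({"post": post_id, "matched": sorted(matched)})

check_post(1, "nothing to see here")
check_post(2, "this contains slur1 in some context")
print(moderation_queue)  # only the second post is queued for review
```

A naive word match like this would over-trigger on reclaimed or quoted usage, which is exactly why the thread insists the final judgement stays with a human looking at context.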
-
@FediTips I mean an opportunity to suggest and vote for features and other improvements, based on their own platform and not on a Microsoft one.
@meissda @FediTips Someone else in this thread had a good point that "voting" for features may mean that minorities who need the features most (e.g. non-whites) may get out-voted by majorities that don't.
I think the model used by PMWiki for feature suggestions is a better one: rather than entertaining features which "might be nice", feature requests need a use-case which can be demonstrated as already existing.
-