Feedback on community moderation #3317
-
There are a lot of good suggestions here. Adding some of my own thoughts: personally, I think users should be able to block users with certain labels, instead of lists being made more prominent on a moderation profile. Labels are what moderation accounts use to provide data to the user, and the user should be able to act on that data alone, without the need for moderation lists.

My number one request for moderation is to let users threadgate using labels from moderation services, ideally setting a default value and being able to customize it per thread. If I'm posting about topic Y and there's a good labeler with a label for "Y haters", then even if I don't want to fully hide or block those users, I want to specify that in those specific threads their replies are not welcome and will not be shown to anyone, as if I had blocked the users (or at least hidden, as if I had hidden the reply). If many people did this for their threads, it would make the "community" aspect of "community moderation" a lot more prominent. I really think that consensually saying "I want this labeler to protect my thread" is what could make third-party moderation actually viable.

I know the developers have not wanted to make third-party moderation services too visible in this early stage, but I still think it would be nice to see which services a user has "liked" on their profile. This could also be visible in Ozone for a user, since for certain labelers that could be a signal of "I consent to be moderated by this service" or "I am in need of protection by this service."
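To make the threadgate idea concrete, here is a rough sketch of what a label-aware threadgate record might look like. The app.bsky.feed.threadgate record and its allow/hiddenReplies fields exist today; the hiddenByLabels field, the labeler DID, and the "y-hater" label value are hypothetical and only illustrate the proposal.

```typescript
// Sketch of a label-aware threadgate. The record type app.bsky.feed.threadgate
// and its allow/hiddenReplies fields are real; "hiddenByLabels" is hypothetical
// and only illustrates the proposal above.
const threadgate = {
  $type: 'app.bsky.feed.threadgate',
  post: 'at://did:plc:alice/app.bsky.feed.post/3kexamplepost', // example URI of the gated post
  allow: [{ $type: 'app.bsky.feed.threadgate#followingRule' }], // existing rule: people I follow
  // HYPOTHETICAL: hide replies from accounts a chosen labeler has labeled,
  // as if the author had hidden each reply by hand.
  hiddenByLabels: [
    { labeler: 'did:plc:examplelabeler', label: 'y-hater' },
  ],
  createdAt: new Date().toISOString(),
}
```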
-
These are good suggestions.
-
Signing onto this for Skywatch!
-
I'm working on a long reply to this, but it isn't ready quite yet.
-
There is a lot here! We will probably have some separate writing in the coming months about the overall stackable moderation situation, categories of problems, and proposals to address them. This post is more of a point-by-point reply, though I might not hit them all.

Report routing, meaning that users send (and mod services receive) reports which are relevant and in-scope for them, is definitely a top issue and something we are planning improvements to. This will probably never be perfect, and the initial improvements might be "coarse" rather than "granular". E.g., we might make things clearer by broad category of report to start, then come up with a separate solution to things like "no image posts" (or "only image posts") as a follow-on.

Governance of moderation services, and safety of moderators: these are definitely big concerns! I think governance should probably be independent, not contingent on Bluesky PBC, even if Bluesky helps support and provides resources. How to assess and meta-moderate services is important. Harassment of any individuals in the network isn't acceptable, and resources for at-risk folks (like moderators) would be good. Managing trust and responsiveness between moderators and their community is just hard in general. Mechanisms to clarify scope and "constituency", and mechanisms to improve communication and transparency, might all help, but at the end of the day managing trust is a challenge.

Conceptual grouping of functionality: yup, we are discussing ways to "re-bundle" things and clarify the intent, powers, and behaviors that services can have. For example, some projects using the "badging" functionality are lower-stakes and appropriate to run as a single developer with an arbitrary audience. Other, higher-context moderation work probably has a more specific relationship with a specific "constituency", and both clarifying expectations and even formalizing those relationships could help.

Moderation history (aka status of reports): we are working on that feature, as proposed. There have been a lot of other urgent tasks, but you can track progress on this in git across multiple repos if you go digging.

Comms: as things are today, labelers are full accounts and can make Bluesky posts. There might be ways to improve this functionality, like "strong follows" that come through as push notifications, or making it more obvious to "follow" labelers in addition to "subscribing".

Recent label actions from a service: I can see some upside to this, but it also seems like it could go wrong or frequently be unhelpful. In general, I personally think that assessment of moderation should move away from individual cases toward broader patterns. Context like "number of labels by type for each of the past 7 days" feels like it might be more helpful. For folks who really want to dig in, I think a separate web tool (instead of in-app) might be better.

Ozone is a large project, and we need to balance the needs of our own mod team with those of independent mod teams. The surges in late 2024 required a lot of urgent features and fixes for our team. In general, having the UI and backend shared has worked well to ensure basic bugs and usability issues get solved; imagine the contrast if we were building a tool which we didn't use ourselves. At the same time, it isn't clear whether that alignment will work well forever. One somewhat unexpected thing is how many independent Ozone instances have large teams which need to work queues concurrently; this is a sort of "at scale" feature we use ourselves.

The "snapshot" functionality in Ozone (to cache records at the time of report, to review in case of edits or deletions) has been needed for a long time, both for us and for others. One-click copying sounds good; please open an issue if there isn't one. For account-level comments, it is possible in the UI to see full-account actions (including reports), though it requires some clicking from the quick-action modal. We are working on some account-level overview statistics which should help with context. For CSV exports, would a CLI tool work? The API exposes a lot of lists and data which could be pulled down.

Report forwarding between instances: open to something like this, but it raises issues around user expectations and privacy. One option would be a way to send back a message to the user recommending that they report to a different service. Giving specific projects or individuals special attention when passing along reports will require careful thought and policies: if we treat any report coming from a labeler as special, that rapidly sets up an incentive for folks to create labelers just so they can get leverage.

Slack notifications: we use external tooling for this. Different teams use different tools (Discord, email, IRC, Telegram, Matrix, etc.), and I think keeping "integrations" external as plugins tends to work out better in the long run. Could be wrong, though.

Assigning tickets to individuals: this is a workflow pretty different from how we use the tool. It would be possible to approximate this with tagging, but if this overall modality is more important than things like tags, it might require a different architecture or alternative tooling.

Multiple people working queues concurrently: Ozone does have basic functionality for this, and we use it heavily. It isn't very ergonomic to set up, and it isn't documented; we might iterate on it.
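On the "separate web tool" and CSV angle, here is a minimal sketch of the kind of external script that could compute "number of labels by type for each of the past 7 days" and dump it as CSV. It assumes the labeler publicly serves the com.atproto.label.queryLabels XRPC endpoint at a host of your choosing (LABELER_HOST is a placeholder) and that a bare '*' URI pattern is accepted; adjust the host, patterns, and any auth to your deployment.

```typescript
// Rough sketch of an external stats/CSV tool, not an existing utility.
// Assumptions: the labeler publicly serves com.atproto.label.queryLabels at
// LABELER_HOST, and a bare '*' URI pattern is accepted.
const LABELER_HOST = 'https://ozone.example.com' // placeholder host

type Label = { src: string; uri: string; val: string; cts: string; neg?: boolean }

// Tally labels by "day + label value" over the last `days` days.
async function labelCountsByDay(days = 7): Promise<Map<string, number>> {
  const since = Date.now() - days * 24 * 60 * 60 * 1000
  const counts = new Map<string, number>() // key: "YYYY-MM-DD <val>"
  let cursor: string | undefined
  do {
    const params = new URLSearchParams({ limit: '250' })
    params.append('uriPatterns', '*')
    if (cursor) params.set('cursor', cursor)
    const res = await fetch(`${LABELER_HOST}/xrpc/com.atproto.label.queryLabels?${params}`)
    const body = (await res.json()) as { labels: Label[]; cursor?: string }
    for (const label of body.labels) {
      if (label.neg) continue // skip negation (label removal) events
      if (Date.parse(label.cts) < since) continue
      const key = `${label.cts.slice(0, 10)} ${label.val}`
      counts.set(key, (counts.get(key) ?? 0) + 1)
    }
    cursor = body.cursor
  } while (cursor) // naive full scan; a real tool would stop once past the time window
  return counts
}

// Print CSV: date,label,count
labelCountsByDay().then((counts) => {
  console.log('date,label,count')
  for (const [key, n] of counts) {
    const [date, val] = key.split(' ')
    console.log(`${date},${val},${n}`)
  }
})
```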
-
Here are a few things on our radar which I don't think you mentioned:
- Persistent confusion about "account-level" and "post-level" actions (and sometimes profile-record-level as a third option). This certainly applies to how badges are displayed in the app, but it also impacts Ozone usability.
- The out-of-box setup process isn't as smooth as it could be, particularly around debugging label-stream issues between Ozone and AppViews.
- Basic built-in statistics around report volume and action rate would be helpful for everybody.
- Queue filtering and sorting by priority, including leveraging the "tag" functionality in a generic way.
- More documentation and examples around how to use Ozone for infrastructure-level moderation, now that alternative PDS instances, Relays, and AppViews are becoming more common.
- Moderation lists are quite powerful, and there isn't a mechanism to "override" them for individuals today. Sometimes this is requested as "boolean combinations of lists" or "allow lists". One option would be weakening the "block" functionality of mod lists compared to regular blocks, and allowing "follow" relationships to override them (including for visibility); see the sketch after this list.
- Re-visiting label rate-limits. This might just mean justifying the current limits better, if we can't increase them.
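For the mod-list override idea above, here is a small sketch of the proposed precedence. The types and the function are illustrative only; they are not the existing @atproto/api moderation helpers.

```typescript
// Hypothetical precedence sketch for the "weaker modlist block" idea.
type Visibility = 'show' | 'warn' | 'hide'

interface ViewerContext {
  follows: boolean            // does the viewer follow the author?
  mutedByList: boolean        // author is on a subscribed mute list
  blockedByList: boolean      // author is on a subscribed block list
  blockedDirectly: boolean    // viewer blocked the author themselves
}

function resolveVisibility(ctx: ViewerContext): Visibility {
  // A personal block always wins: full strength, no overrides.
  if (ctx.blockedDirectly) return 'hide'
  // Proposed: list-based blocks are weaker, so an explicit follow
  // downgrades them to a warning instead of removing visibility.
  if (ctx.blockedByList) return ctx.follows ? 'warn' : 'hide'
  if (ctx.mutedByList) return ctx.follows ? 'show' : 'warn'
  return 'show'
}
```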
-
This post is consolidated feedback from the operators of multiple moderation accounts. We understand that the changes we are proposing would take a lot of careful work to design and implement. But the current state is not sustainable, and a change is sorely needed.
If you are running a moderation account and would like to have your signature added at the bottom, ping @imax in the #moderation channel on the Bluesky API Touchers Discord.
Number one request
Customizing report flows. Hardcoded categories don't make sense for almost every moderation account out there. This results in user confusion and operators receiving a lot of out-of-scope reports.
This would also necessitate removal of the ability to send a report to multiple moderation accounts at once, since selected categories wouldn't necessarily exist for some of them.
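To make this concrete, here is a sketch of what per-labeler report categories might look like if declared alongside the existing app.bsky.labeler.service policies. Only policies.labelValues exists today; the reportReasons field and its values are hypothetical.

```typescript
// Sketch of per-labeler report categories as a hypothetical extension of the
// app.bsky.labeler.service record. policies.labelValues is real; "reportReasons"
// is illustrative only.
const labelerService = {
  $type: 'app.bsky.labeler.service',
  policies: {
    labelValues: ['y-hater', 'spam-ring'], // labels this service actually emits
  },
  // HYPOTHETICAL: categories shown to users in the report flow for this
  // service only, replacing the hardcoded global reason list.
  reportReasons: [
    { id: 'harassment-campaign', name: 'Part of a harassment campaign' },
    { id: 'out-of-scope', name: 'Something else (may be declined)' },
  ],
  createdAt: new Date().toISOString(),
}
```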
Also add to this:
Organizational concerns & safety of operators
We need you to start cooperating with moderation account operators w.r.t. forwarding reports: have established communication channels and policies/guidelines. Obviously, nobody expects you to immediately trust anyone who starts a new moderation account out of the blue. But you're still responsible for platform-wide moderation, and we need clear communication on what you will and will not handle.
As for safety, community moderators are currently doing a lot of work for free while being at an increased risk of harassment. This needs to change: harassment of individual moderators must be dealt with swiftly. The worst thing moderation account operators should have to worry about is users no longer trusting them, not being attacked by angry mobs.
Conceptual changes
The abstraction of a "labeler" might have made sense for Bluesky internally, where it is part of the moderation infrastructure you're obliged to have. But releasing it publicly as-is feels rushed, and a poor fit for the capabilities and assumed responsibilities of community moderation accounts.
We propose introducing a new account type: "moderation account". By itself this flag probably shouldn't do anything aside from hinting to the UI that this is not a regular microblogging account. What would make this flag meaningful is the ability to add more building blocks on top:
Allowing the account to select which of these building blocks it wants to provide, and not use the rest, would make for a more coherent experience. Part of this would be showing the selected lists with a UI similar to label configuration, along with the ability to specify the default subscription option (none/mute/block) that takes effect automatically upon subscribing to the moderation account.
It would also make sense to break "labels" and "accepting in-app reports" apart into two independent pieces: an account that only maintains lists might still want to accept reports, and not every labeler might want to receive them.
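A sketch of what such a declaration could contain, purely illustrative; none of these fields or types exist today:

```typescript
// Illustrative sketch of the proposed "moderation account" declaration with
// independent building blocks. These fields are hypothetical.
interface ModerationAccountDeclaration {
  accountType: 'moderation'            // hint to clients: not a regular microblogging account
  capabilities: {
    emitsLabels: boolean               // publishes labels
    acceptsReports: boolean            // receives in-app reports
    curatesLists: boolean              // maintains moderation lists
  }
  advertisedLists?: {
    uri: string                        // at:// URI of the list
    defaultSubscription: 'none' | 'mute' | 'block' // applied automatically on subscribe
  }[]
}

// Example: a list-only account that still accepts reports but emits no labels.
const exampleDeclaration: ModerationAccountDeclaration = {
  accountType: 'moderation',
  capabilities: { emitsLabels: false, acceptsReports: true, curatesLists: true },
  advertisedLists: [
    { uri: 'at://did:plc:example/app.bsky.graph.list/3kexamplelist', defaultSubscription: 'mute' },
  ],
}
```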
Social contract between a moderation account and its users
Labelers have had the ability to emit the !hide label for so long that it is now an integral part of their toolkit. It is, however, still obscure and potentially misleading to users. We propose making it a more explicit part of the agreement between community moderators and their users. Keeping in mind that users always have the option to unsubscribe from the labeler altogether, allow moderators to disable some of the configuration options of their advertised labels and lists, effectively making subscription to them a condition for continued use of their services.
Once this is implemented, start completely ignoring any undeclared labels emitted by a labeler. This would make things a lot more transparent to users, and would support the use case that the !hide label currently serves.
It is important to note that there are still cases where a labeler might want a generic force-hide label for things that are harmful but don't necessarily fit any of its more specific labels (e.g., abuse of the labeler itself). A moderation account should be able to declare such a label; whether it can be named !hide or not is to be decided.
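For reference, labelers can already declare labels and their default settings in their service record via policies.labelValueDefinitions. Below is a sketch of declaring a generic force-hide label that way; the field shape follows app.bsky.labeler.defs as we understand it, so treat it as an approximation rather than a reference.

```typescript
// Minimal sketch: declaring a generic force-hide label through the existing
// policies.labelValueDefinitions mechanism of app.bsky.labeler.service.
// Field names follow app.bsky.labeler.defs as we understand them.
const policies = {
  labelValues: ['generic-harm'],
  labelValueDefinitions: [
    {
      identifier: 'generic-harm',      // declared stand-in for an undeclared !hide
      severity: 'alert',
      blurs: 'content',
      defaultSetting: 'hide',          // subscribers get hide-by-default
      adultOnly: false,
      locales: [
        {
          lang: 'en',
          name: 'Generic harm',
          description:
            'Harmful content that does not fit any of our more specific labels, including abuse of the labeler itself.',
        },
      ],
    },
  ],
}
```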
Communication and transparency
Add an API for users to see the list of their reports and the feedback on them. We have long been promised this, and there is even a published proposal, but it hasn't gone anywhere so far.
Important
The following paragraph is an over-arching concern that must influence the design of all other changes.
Another important thing on the communication front: from the very start, take into account that label definitions, the set of lists, and their default settings are mutable. Think through how to communicate changes in them to users. Currently, existing interfaces and functionality are geared towards a "fire and forget" approach: any changes made on the moderation account side are not brought to users' attention and often go unnoticed.
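As one possible mitigation, a client or companion tool could periodically diff a labeler's declared definitions and surface changes to subscribers. A sketch, assuming the public AppView at public.api.bsky.app serves app.bsky.labeler.getServices; the snapshot storage and notification step are left as placeholders.

```typescript
// Sketch: surface label-definition changes by diffing a labeler's declared
// policies against a stored snapshot. Assumes app.bsky.labeler.getServices is
// available on the public AppView; storage/notification are out of scope.
type Policies = { labelValues: string[] }

async function fetchDeclaredLabels(labelerDid: string): Promise<string[]> {
  const url =
    'https://public.api.bsky.app/xrpc/app.bsky.labeler.getServices' +
    `?dids=${encodeURIComponent(labelerDid)}&detailed=true`
  const body = (await (await fetch(url)).json()) as {
    views: { policies?: Policies }[]
  }
  return body.views[0]?.policies?.labelValues ?? []
}

async function diffDeclaredLabels(labelerDid: string, previous: string[]) {
  const current = await fetchDeclaredLabels(labelerDid)
  const added = current.filter((v) => !previous.includes(v))
  const removed = previous.filter((v) => !current.includes(v))
  // In a real client, this is where subscribers would be notified instead of
  // the change silently taking effect.
  return { added, removed }
}
```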
A few other things that are also needed:
Ozone improvements
Finally, some requests for Ozone specifically:
Signatures