A moderation proposal for Open Collective

Two weeks ago, we had one of those weekends… Twitter threads popping up everywhere about scammers and spammers on the platform. We got called out for a lack of moderation tools: tools that would allow our Collectives to have control over who contributes to them and interacts with them, tools to allow our community to report spam and other undesirable behavior, and more.

It's never fun to get called out, but it was totally fair. We had been getting feedback about some of these issues for a long time, and though we had some ideas of what to do about them, we had not made it a priority to implement them.

Moderation is an important feature of any platform, and Open Collective was not offering tools in line with the size the platform had grown to. We had a debt to pay - both to our current users and to anyone who might consider using Open Collective in the future - and needed to step up. Kate Beard from our Engineering Team took this on, reached out to community members, and put together a moderation proposal with concrete next steps to address the problem. It is now a priority for the remainder of Q3 and Q4.

Below is a summary of the moderation proposal (which you can read in full on GitHub). We want to keep receiving feedback on what we've planned to do right away, and to get input on what we'd like to do going forward. You can comment on GitHub or join our Discord for general discussion, feedback, and urgent queries.


Our proposal

Open Collective is at particular risk as both a financial platform and a platform built on transparency, making it a likely target for scammers, spammers, and others looking to take advantage. The Collectives we serve entrust us with both their money and their brands, and our success as a company depends on building and maintaining a reputation for safety, integrity, trustworthiness, and thoroughness.

Open Collective has had many ideas around moderation and has implemented a few features, but moderation is still largely manual, is causing frustration for Collective admins, and has drawn negative attention to the platform. It’s important to listen to this criticism and feedback; identify what we can do to improve the situation immediately and over the next few months; and recognize that this will be an ongoing effort.

We need a plan that comprises:

  • short-term solutions (‘quick wins’) that will alleviate the worst behaviours as soon as possible
  • medium-term solutions that will take a little more work but are also important to implement sooner rather than later
  • and long-term solutions, along with thinking about how we want to shape the platform as a whole as we move forward.

We want to keep our users’ trust, strengthen it, and grow through a reputation for having a strong, well-moderated community.

Guiding principles

  • We want Open Collective to be a platform that is trusted.
  • We want Open Collective to have high friction for spammers and bad actors.
  • We want Open Collective to have low friction for users to moderate their Collectives as much or as little as they want, with a variety of tools available to them.
  • We want Open Collective to anticipate and be proactive about moderation needs going forward, rather than reactive.

Short-term goals/Urgent priority

1. Rejecting individual contributions

The most requested feature is for Collectives to be able to reject or refund a contribution that has been made, whether because they see it as spam, because they don’t want a particular sponsor giving to their project for ethical reasons, or for other reasons. Currently, Hosts can refund transactions; we want to extend this ability to Collective admins as well, and include a way for admins to let a contributor know why the contribution is being rejected, if they choose to.
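
To make the shape of this concrete, here is a minimal sketch of what a rejection flow could look like. It is written in TypeScript with made-up helpers (`refundTransaction`, `notifyContributor`); it is not Open Collective's actual API, just an illustration of the two steps involved: refund the money, then optionally explain why.

```typescript
// Sketch only: a hypothetical admin-side rejection flow. The helpers below are
// stubs, not Open Collective's internal API.

interface RejectionRequest {
  orderId: string;   // the contribution being rejected
  adminId: string;   // the Collective admin performing the rejection
  reason?: string;   // optional message passed on to the contributor
}

async function rejectContribution(req: RejectionRequest): Promise<void> {
  // 1. Refund the money (today only Hosts can do this; the proposal extends it
  //    to Collective admins).
  await refundTransaction(req.orderId, { initiatedBy: req.adminId });

  // 2. Optionally tell the contributor why their contribution was rejected.
  if (req.reason) {
    await notifyContributor(req.orderId, req.reason);
  }
}

// Hypothetical stubs so the sketch is self-contained.
async function refundTransaction(orderId: string, opts: { initiatedBy: string }) {
  console.log(`refunding order ${orderId} (initiated by ${opts.initiatedBy})`);
}

async function notifyContributor(orderId: string, message: string) {
  console.log(`notifying contributor of order ${orderId}: ${message}`);
}
```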

2. A categorization system to give Collectives and/or Hosts batch control over which types of contributions they receive

As suggested by one Collective admin, we should work towards a system that reduces the burden on individual Collectives/admins of having to reject and filter contributions one by one. One way to do this is to offer Hosts and Collectives the ability to block contributions from certain categories of contributors (such as casinos or adult websites), so that there is less work involved than individually blocking each of these contributors.
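
As a rough illustration of the idea (not the real data model), the check could boil down to comparing a contributor's categories against blocklists kept by the Collective and its Host. All of the types and category names below are hypothetical.

```typescript
// Sketch of category-based blocking, assuming hypothetical types and category
// labels; the real data model would live in Open Collective's backend.

type ContributorCategory = "CASINO" | "ADULT" | "CRYPTO" | "OTHER";

interface IncomingContribution {
  contributorSlug: string;
  contributorCategories: ContributorCategory[];
  amountInCents: number;
}

interface ModerationSettings {
  // Categories this Collective (or its Host) has chosen not to accept money from.
  blockedCategories: Set<ContributorCategory>;
}

/** Returns true if the contribution should be automatically rejected. */
function shouldRejectByCategory(
  contribution: IncomingContribution,
  collectiveSettings: ModerationSettings,
  hostSettings: ModerationSettings
): boolean {
  return contribution.contributorCategories.some(
    (category) =>
      collectiveSettings.blockedCategories.has(category) ||
      hostSettings.blockedCategories.has(category)
  );
}

// Example: a Host that blocks casinos, and a Collective that also blocks adult sites.
const host: ModerationSettings = {
  blockedCategories: new Set<ContributorCategory>(["CASINO"]),
};
const collective: ModerationSettings = {
  blockedCategories: new Set<ContributorCategory>(["ADULT"]),
};

shouldRejectByCategory(
  { contributorSlug: "acme-casino", contributorCategories: ["CASINO"], amountInCents: 5000 },
  collective,
  host
); // => true
```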

3. Allow Collectives to filter the sponsors they show on their website

Some Collectives pull data from the Open Collective API or use our embedded widget to display sponsors on their website. While we want to remain transparent on the Open Collective platform about the contributions going into a project, we understand that people want finer control over what they display on their own platforms and websites. We want to add a way for Collective admins to easily flag contributors, or types of contributors, to exclude from display when they use our widget or pull data from our API.
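
For sites that already pull sponsor data themselves, the filtering could happen client-side. Here is a hedged TypeScript sketch: the GraphQL endpoint is Open Collective's public v2 API, but the query fields and the `isFlagged` property are illustrative assumptions and should be checked against the real schema (the flag itself is a proposed feature, not something that exists today).

```typescript
// Sketch: filter out flagged sponsors before rendering them on your own site.
// The query shape and the `isFlagged` field are assumptions for illustration.

type Sponsor = {
  name: string;
  slug: string;
  // Hypothetical flag an admin could set via the proposed moderation tools.
  isFlagged: boolean;
};

async function fetchVisibleSponsors(collectiveSlug: string): Promise<Sponsor[]> {
  // Field names are illustrative; verify against the GraphQL v2 schema.
  const query = `
    query Sponsors($slug: String) {
      collective(slug: $slug) {
        members(role: BACKER) {
          nodes {
            account { name slug }
          }
        }
      }
    }
  `;

  const res = await fetch("https://api.opencollective.com/graphql/v2", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { slug: collectiveSlug } }),
  });
  const json = await res.json();

  const sponsors: Sponsor[] = json.data.collective.members.nodes.map(
    (node: { account: { name: string; slug: string } }) => ({
      name: node.account.name,
      slug: node.account.slug,
      isFlagged: false, // would come from the API once the proposed flag exists
    })
  );

  // Until a server-side flag exists, a site could keep its own local blocklist.
  const localBlocklist = new Set(["example-spammy-sponsor"]); // hypothetical slugs
  return sponsors.filter((s) => !s.isFlagged && !localBlocklist.has(s.slug));
}
```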

4. Update ‘Moderation’ and ‘Community Guidelines’ documents to reflect our evolving approach to moderation

Our community guidelines are short, succinct, and warm and fuzzy. They might be fine for a smaller project, but Open Collective has grown much larger and evolved. In particular, the line about “We believe in self-managing systems, so we don’t police or moderate much” is ripe for updating.

We’ve seen how people use Open Collective, and unfortunately how they abuse it as well, so we should start setting down some more explicit community guidance and moderation rules based on what we’ve seen, what feedback we’ve been given, and what we might expect in the future.

Going forward, we should make it clearer what kind of behaviour we expect on Open Collective, what kind of behaviour we won’t accept, and what we will base our moderation decisions on.

Mid- and long-term goals

Following on from our short-term priorities, our mid-term and long-term priorities will either create additional moderation tools or build on the short-term work. Ideas for these include:

  • blocking accounts from interacting with your Collective at all (through contributions, expenses, comments, etc.)
  • allowing Collectives to pre-approve all contributions, or contributions from certain categories, before they are accepted
  • allowing Collectives to filter the members they show on their Collective page
  • implementing a reporting system for spam and for comments, contributions, or other activity that violates our guidelines.


Roadmap

Our short-term goals are being treated as 'urgent priority' to implement as soon as possible, in Q3 & Q4 of this year. They've been summarized above, but you can join the discussion about them here.

We hope to discuss, scope, and begin implementing the mid-term goals in Q4 of this year as well. Discussion issue here.

The long-term goals are the vaguest at this point. We have some ideas of the direction we would like the platform to move in with regards to a moderation system, but this is where we would most like input on what our community would like to see from us in the future that is not yet included in our proposal. Discussion issue here.

We encourage our community to be actively involved in all of this moderation planning so we can make sure we get it right.


Big thanks again to the community for caring and holding us accountable. We hope that you'll continue to let us know what we need to do to create the best version of Open Collective that we can.