MVP Proposal: User Trust and Reward System

While this is all true, I think you’re missing something significant. Site activity gives the background and opportunity for competent moderation, certainly. So it is necessary to have a good investment in the site’s actual normal operation, and ideally not just in the past, but ongoing (which no one has so far mentioned, I think).

However, site activity does not, in itself, motivate moderation (since many, even most active SE users don’t invest much time in moderation), nor does it guarantee that someone will learn sound judgment from their experiences. In part, this can be thought of like the difference between the ten thousand hours of a master, and the single hour repeated ten thousand times that marks the inveterate novice. You can spend a long time answering questions and still never build a generally useful mental model of what questions should and shouldn’t be answered. If you don’t care about moderating correctly, you won’t put in the effort to do so, and that’s not the same thing as answering correctly (much less asking skillfully).

So, ultimately, I believe it’s necessary to consider proven moderation ability as well as content contributions. Both are important: one shows the continuity with the primary community of asking and answering, the other shows the care invested in moderating well.

The simplest way to do this might be to have a set of standard post metrics for different levels (perhaps 15 posts with at least 80% positively Wilson scored, then 50, then 125), then match those to appropriately scaled requirements for accepted flags, edits, etc., such that each new tier of a specific privilege requires meeting both matching requirements. (In each case, it’s crucial to require that the user being credited for a successful moderation action be the one who initiated the action, not simply someone agreeing with suggestions they see. Independent flags on the same post can each count for their respective users, since users can’t see one another’s flags, but e.g. adding a close vote on top of existing close votes can never reliably signal sound independent judgment.)
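As a minimal sketch of what “positively Wilson scored” could mean in practice (the function name, the 95% confidence level, and the idea of comparing the lower bound to a threshold are my assumptions, not part of the proposal):

```python
import math

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for a post's true upvote fraction.

    z = 1.96 corresponds to ~95% confidence. A post could be treated as
    "positively Wilson scored" when this bound clears whatever threshold the
    tier specifies; the 80% figure above is only an example.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p_hat = upvotes / n
    denominator = 1 + z * z / n
    centre = p_hat + z * z / (2 * n)
    margin = z * math.sqrt((p_hat * (1 - p_hat) + z * z / (4 * n)) / n)
    return (centre - margin) / denominator
```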

I would also like a way to require ongoing participation, but I’m not sure how to do that well, since the granularity is a problem. Maybe some low number of (well-received) posts in the last 6/12 months, increasing slowly with each tier, perhaps even starting at 0 for the lowest tier.

4 Likes

There will always be problem cases for any rule we make up. Just because a system doesn’t work well in a tiny fraction of cases doesn’t mean the system is bad or that there is a better one.

On the whole, the SE user-moderation system works pretty well.

2 Likes

What is this a counterpoint to? We’re not using rep.

I don’t think this is a “tiny fraction” of cases. Even in my relatively limited experience, I have seen several cases where someone grinds out 3k rep on mediocre answers, casts very few flags (if any), and then decides to waltz through the close vote queue and vote to leave everything open.

Moderation capability should primarily be measured by actual moderation, not content creation. SE’s system focuses too much on rep, which leads to people with no moderation track record getting moderation privileges.

4 Likes

Given the many, many thousands of users on SE, the fact that we can only come up with a few individual cases of abuse actually shows how rare this problem is. No system is ever going to be perfect. No matter what we do, a year later someone will be able to point to a few problem cases. We need to keep in mind how statistically insignificant a few out of many thousands is.

Strong disagree. Another point against this, not previously mentioned, is that it ends up creating a “moderator class” of user. I think we want a system that is a “government of the people”. Your system encourages a class of users who get a lot of power just because they have wielded power. They will probably not see tradeoffs the same way the actual users of the system do. We want the moderating decisions reflecting the general will of the people using the system for its actual intent, not those that spend most of their time enforcing, and thereby deciding, the rules.

3 Likes

We certainly need something akin to SE’s rep to keep track of each user’s positive and negative contributions, a reflection of how much the community values that user, etc.

As I’ve elaborated in another thread, this is important to keep experts engaged.

I still think you’re underselling the scope of the problem – I have not evaluated the voting records of “thousands” of close voters, so the percentage of problem close voters in my experience is more like 10%, which to me is significant enough to evaluate ways to address it (of which this is just one possibility).

That said, this is a good point:

We want the moderating decisions reflecting the general will of the people using the system for its actual intent, not those that spend most of their time enforcing, and thereby deciding, the rules.

This is a great argument for having content creation be a significant part of the metric used to grant moderation privileges. And I agree, but I don’t think it should be the only part.

5 Likes

Then maybe the right answer is a sort of sliding scale trading off each kind of activity. To get moderator-type privileges, one requirement is that you have to have had, and maintain (not just do it once), some minimum general site content activity. On top of that, you can get the privilege by performing moderator actions or more general site content activity.

For example, the base threshold for editing others’ posts without review might be:

  1. At least 50 questions/answers written, 90% non-negative.
  2. And average at least 5 questions/answers per month for the last 3 months, at least 90% non-negative.

These criteria are to make sure you’re a legitimate user, not just trying to be a moderator without skin in the game.

After the minimum criteria above, you can follow different paths:

  1. Maintain 15 questions/answers per month for the last 3 months, at least 90% non-negative.
  2. Or have performed a total of 50 accepted edits, no more than 20% rejected overall.

The exact numeric thresholds aren’t the point, and as discussed elsewhere, they should scale automatically with site age and activity anyway. I’m trying to show the mechanism of having some minimum content activity to prove you’re a real user (that you didn’t just come here to lord it over the rest of us), but beyond that we recognize that both providing content and performing moderation activities qualify you for the privilege.
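To make the two-path mechanism concrete, here is a rough sketch assuming some per-user activity summary exists; the type, field names, and thresholds simply mirror the example numbers above and aren’t a real design:

```python
from dataclasses import dataclass

@dataclass
class UserActivity:
    # Hypothetical per-user summary; all fields are illustrative.
    posts_total: int                # questions + answers, all time
    posts_non_negative_pct: float   # fraction of all posts scored >= 0
    recent_posts: int               # questions + answers in the last 3 months
    recent_non_negative_pct: float  # same fraction, recent window only
    edits_accepted: int
    edits_rejected: int

def can_edit_without_review(u: UserActivity) -> bool:
    """Sliding-scale check using the example thresholds from this post."""
    # Base threshold: proof of being a legitimate, currently active contributor.
    base = (
        u.posts_total >= 50
        and u.posts_non_negative_pct >= 0.90
        and u.recent_posts >= 15               # ~5 per month over 3 months
        and u.recent_non_negative_pct >= 0.90
    )
    if not base:
        return False

    # Path 1: heavier ongoing content contribution.
    content_path = u.recent_posts >= 45        # ~15 per month over 3 months

    # Path 2: a demonstrated editing track record.
    total_edits = u.edits_accepted + u.edits_rejected
    edit_path = (
        u.edits_accepted >= 50
        and total_edits > 0
        and u.edits_rejected / total_edits <= 0.20
    )
    return content_path or edit_path
```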

3 Likes

Personally, under the base threshold for this example, I would also have something like:

  1. And have performed a total of 10 accepted edits, no more than 20% rejected overall.

But yes, I like this general principle a lot. Testing for recent contributions is a plus as well. I imagine we’d need to be prepared to deal with pushback from people who say the criteria are too confusing. But this feels like it would result in a better community moderation paradigm.

Is that a problem, though? I’d assume that someone who has had a lot of accepted edits can be considered “safe” and to know the community norms for editing.

The other case, that an active user is necessarily good at editing, is not so certain IMO. That’s something that can be seen at Stack Overflow all the time; I remember a lot of answers on Meta.SO explaining that.

It’s not a problem of knowing the site norms. However, it is a problem if we have users doing the governing who aren’t also participants. Ultimately, content is what it’s all about. Those not regularly creating content, at least at some minimum level, will likely get out of sync with the “real” users. As a political analogy, we don’t want someone becoming a king and forgetting what it was like having to actually earn a living.

3 Likes

Is it necessary to remove carefully elected moderators and downgrade them just because they have some problems at home or some other critical reason?

2 Likes

At that point, it’s on the moderators to give at least some form of notice. But the scale of inactivity we’re talking about here is in excess of a month without word. If someone cannot inform us as to why they’re no longer active within a month, it’s likely their inactivity will extend for several months beyond that, or they are in a very, very rare edge case.

I say “us” - this is probably going to be decided per community, and the community’s moderators are likely going to petition to remove inactives.

2 Likes

@Corsaka I’m failing to see how. If someone’s not active for a month, how is it likely that their inactivity will extend to another month? Also, their reason could be anything critical, maybe an unfortunate circumstance that made them inactive. Will that cost them their trust level? I don’t think so.

Also, if this is a tactic to keep moderators active, then it doesn’t sit well with me. Unfortunate events can happen in anyone’s life and leave them inactive, however much they want to participate.

2 Likes

I see it as something like:

  • Inactive for a few days, even a week or two, without any notice - not a problem.
  • Inactive for a month, with notice (e.g., “going to be inactive for a while due to work/personal/family/illness/etc.”), not a problem, but that gives the community the chance to discuss whether to add another moderator or make other changes to adapt.
  • Inactive for a month, without notice. That is when a community has reason to be concerned. Of course, hopefully within that time other moderators or people “in charge” would make some attempts to find out what is going on.

4 Likes

I agree with this, and still, if they are inactive for a month without notice, we should investigate for ourselves and wait some more time before taking any decision.

1 Like

This thread has turned into a policy discussion. The code will not set policy about inactivity. Each instance needs to decide how to monitor and adjust trust levels. For our instance, I support a manual process administered by the instance admins in collaboration with the affected community. Basically, if a mod goes quiet for a while – and I’m thinking months here, not weeks – someone will try to get in touch, and if there’s no response the team will discuss next steps. Removing mod privs during extended inactivity and restoring them upon return seems fine. We’ll need to formalize this (I think if it was a long absence, like years, it’s worth getting a thumbs-up from current mods), but we are a long way away from needing this.

Let’s keep the discussion here focused on the capability, not instance policies.

5 Likes

The idea of separation of moderation and Q/A rep / capabilities appeals to me - someone who writes good answers with poor grammar, whose answers are then edited by someone else, seems a good example of one who’d have high answer rep and low moderation rep. I don’t know how these would factor into trust levels, though.

From personal experience, I started reviewing the SE queues for those shiny gold badges, but as I went through them I saw issues that were common to posts and learned to avoid them in my own - moderation taught me things I couldn’t learn no matter how much more I lurked.

If the separation of moderation and Q/A reputation is adopted, maybe one of the requirements to advance to the next tier of one category would be a small amount of experience in the other? If everything above tier 2 is defined on a per-site basis, maybe you can’t get to tier 3 of moderation until you’re at tier 1 of Q/A, to make sure you’ve got a handle on how to ask and answer?
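A tiny sketch of what that cross-category gate could look like, assuming two numbered tier tracks; the table and function are hypothetical, and the encoded rule is just the example above (moderation tier 3 requires at least Q&A tier 1):

```python
# Hypothetical per-site table: to reach a given moderation tier, a user must
# already hold at least this Q&A tier. Unlisted tiers have no cross-requirement.
MODERATION_TIER_REQUIRES_QA_TIER = {3: 1, 4: 1}

def may_reach_moderation_tier(target_tier: int, qa_tier: int) -> bool:
    return qa_tier >= MODERATION_TIER_REQUIRES_QA_TIER.get(target_tier, 0)
```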

Users who have strong positive scores in Q/A likely know how to continue asking and answering well, but no matter how good a score I’ve got, I wouldn’t trust myself with new privileges without first seeing the other side - learning more from mistakes than from successes and all that, even if it’s a review of other people’s mistakes.

2 Likes

For example, the fact that Stack Overflow is IPv4-only causes rare circumstances where a new user needs to get 62 edits accepted before being allowed to post a first question or answer. This happens when the new user shares an address with a poorly received user, such as in an office, in a public library, in a school, in university on-campus housing, or in a country where all last-mile ISPs use carrier-grade NAT to put a whole neighborhood or a whole commercial district behind one IPv4 address because of underallocation.

Yes, that was exactly what I was trying to get across. I see I did it poorly. Thank you.