Proposed changes to the trust level system

I agree with Olin that a policy where a single validated/“helpful” offensive flag sends a long-time user back to square one could cause serious problems.

Say a user shows Jon Skeet-level dedication to a community over the course of several years. Then someone says “Your meta post about me used singular ‘they’ instead of the nonstandard pronoun in my profile. Therefore I’m flagging your post as offensive.” If another moderator agrees, what happened to @cellio on another site could happen all over again. Even “3 validated recent offensive posts” could cause a problem if a user goes on a flagging spree against another user, using the letter of the Code of Conduct to push a personal grudge. At some point a ratio works better: it retains a user with two strikes rather than watching them go rogue.

But under the current proposal, a new user isn’t allowed to suggest even one edit without having first written a question or answer. This will hurt once we start trying to curb abuse of alternate accounts by implementing rate limits based on IPv4 addresses. A user of an ISP that applies carrier-grade network address translation (CGNAT) to subscribers won’t be able to get a question or answer in edgewise. Compare SE’s policy, where a new user at a university, in Myanmar, or in another hotbed of CGNAT could theoretically get 62 edits accepted in order to become exempt from automatic throttling based on IPv4 address.

3 Likes

What do you think about not applying any automatic treatment to users with a lot of recent good content, as I proposed here?

I think this is a valid point, and it should be changed. Could you suggest some ratios for that (i.e. 1 good post = X good edits)? I think anonymous editing should be in our software, probably even within the MVP.

1 Like

That’s worth considering. I see it as similar to a ratio system.

For comparison, SE reputation considers five edits equivalent to receiving one upvote on a question or answer.

2 Likes

This might be irrelevant, but does the trust system have any way to prevent downvoting attacks driven by personal dislike?
Like making 50 accounts and downvoting a question with each of them because I don’t like the person who wrote it rather than the question itself?
Is there something else that prevents it?

All of those won’t be required for every good answer. Reading the question might only take a minute; if it takes much more, it’s probably not a well-written, to-the-point question. On-site research doesn’t happen if you’re reasonably familiar with the site and know it’s not a dup. External research doesn’t happen when you already know the answer. Sometimes good answers only need to be a couple of paragraphs and only take a minute or two to write.

I have sometimes spent an hour or more on a single answer, but many times, especially for newbie questions, the answer is immediately obvious after reading the first two sentences of the question. In my experience, the majority of answers don’t take more than a few minutes, well under 15 minutes.

The upside to rate limiting answers is slim, and the downside rather severe. There also isn’t much of a problem to solve here: spam can be dealt with by other means, and otherwise there isn’t much of a rush, and the voting system can sort good from bad.

Sorry, I misunderstood. Yes, something like that sounds workable. The exact numeric parameters should be tweakable per site.

What about my proposal to drastically reduce the rate limit for experienced users:

We have discussed some mechanism, although I’m not sure there was any consensus. Some points that were discussed to address this issue:

  1. 50 accounts would be creating at least 49 sock puppets. That's strictly against the rules. SE has some means to defend against this, and we need that too.
  2. Anyone can downvote anonymously, but that doesn't count for as much as downvotes that can be identified to you. All downvotes probably affect answer sort order, but only "signed" downvotes affect a user's rep or privilege level or whatever. Since users have to stand behind the downvotes that can actually harm other users, abuse will be minimal.
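
To make that second point concrete, here's a rough sketch of how anonymous downvotes could affect only sort order while signed ones also affect the author. The names and the -1/-2 weights are placeholder assumptions, not proposed values:

```python
# Illustrative sketch only: weights and names are assumptions, not agreed values.
from dataclasses import dataclass

@dataclass
class Post:
    sort_score: int = 0        # affected by anonymous and signed downvotes
    author_rep_delta: int = 0  # affected only by signed downvotes

def apply_downvote(post: Post, signed: bool) -> None:
    """Every downvote lowers the post's sort score, but only a downvote the
    voter is willing to sign affects the author's rep / trust level."""
    post.sort_score -= 1
    if signed:
        post.author_rep_delta -= 2  # assumed penalty, tweakable per site

# Ten anonymous drive-by downvotes hurt visibility but not the author:
post = Post()
for _ in range(10):
    apply_downvote(post, signed=False)
apply_downvote(post, signed=True)
print(post.sort_score, post.author_rep_delta)  # -11 -2
```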

This we can deal with similarly to SE - don’t apply rate limits to IPs by default (beyond “anti-bot” type measures, like “no human could post 1000 answers in a minute”). Only once we see a high volume of abuse coming from a particular network should the system automatically hobble the IP and eventually block it. Ultimately, if we do have to block a giant university CGNAT’d network… that’s their problem, not ours. SE has done it before now.
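
As a sketch of what “only once we see a high volume of abuse” could look like, something like the following per-network escalation would do. The thresholds and names here are made-up placeholders, not proposed numbers:

```python
# Escalation per network: allow by default, throttle after sustained abuse,
# block only as a last resort. Thresholds are placeholders.
from collections import defaultdict

THROTTLE_AFTER = 50    # confirmed abusive actions seen from one network
BLOCK_AFTER = 500

abuse_counts: dict[str, int] = defaultdict(int)

def record_abuse(network_prefix: str) -> str:
    """Record one confirmed abuse event and return the treatment to apply."""
    abuse_counts[network_prefix] += 1
    count = abuse_counts[network_prefix]
    if count >= BLOCK_AFTER:
        return "block"      # even if it's a whole CGNAT'd campus
    if count >= THROTTLE_AFTER:
        return "throttle"   # slow the network down; humans can still post
    return "allow"          # default: no IP-based limits beyond anti-bot checks
```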

3 Likes

A bit off topic, but after reading the whole thread from the start: I feel it would be easier to follow the various points if they were linked to their proposed change in a merge request on GitHub.

Maybe an integration between here and GitHub is possible to make things easier to parse? (Or maybe that’s just a silly idea.)

We don’t have a system yet: these discussions are about changes to the proposed system, not to a system that exists and can have pull requests made against it.

Correct me if I’m not getting it, but the link in the first post points to a file on GitHub describing this system.
The changes in the first post could have been an MR on that file, and the discussions about them could have been linked to specific lines/paragraphs of the proposed changes.

I find it easier to have the various discussions threaded with the change they’re about rather than interleaved in a single thread. Example of what I mean: https://github.com/chef/chef-oss-practices/pull/220

That file is part of the wiki on the docs repo. Wikis are collaboratively-editable documents that exist outside of the normal repository structure, so they can’t have pull requests made against them.

Aww, totally missed it was in the wiki part, I’m probably too used to markdown documents :confused:

@luap42, do you see enough consensus here (on parts at least) to propose an edit? If so, could you add a new answer here with the proposed new text (and separately call out key changes), and we’ll give it the yellow-highlighting treatment and make the wiki edit if no one objects? It looks like there are still issues to be resolved, but if there are improvements that have broad support, let’s at least make those ones so we all have a cleaner statement to base further discussions on, ok?

If you don’t think we’re there yet then that’s ok too, but since you started this thread you’re presumably tracking it more closely than I am.

I think most points are still contentious; however, after re-reading this conversation, I see the following points as possibly having consensus:

  • Trust level requirements should take (recent) history into account. This means that if a requirement calls for some number of successful actions, that number must also be at least 80% (configurable) of all your actions of that type. (Example: it wouldn’t be possible to meet a requirement of 5 accepted edits if you had 5 accepted edits but 20 rejected ones.) A rough sketch of this rule and the demotion rule below follows the list.

  • You should be demoted to level 0 and have to start earning trust again if any of these criteria apply:

    • Your post is flagged as spam by N users (where N is a definable threshold)
    • Your post is flagged as spam and the flag is confirmed by a moderator
    • Your post is flagged as spam by a moderator
    • Three (configurable) of your posts in the last 14 (configurable) days are deleted as offensive, and they make up more than 20% (configurable) of all your posts in this period

    If the last criterion would apply but the deleted posts are less than 20% of your posts, a priority moderator flag shall be raised instead.
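
To make the two rules above concrete, here's a rough sketch of how they could be checked. The numbers are just the configurable defaults named in the bullets, and the function names are invented for illustration:

```python
# Rough sketch of the ratio requirement and the offensive-post demotion rule.
# All numbers are the configurable defaults from the bullets above.

def meets_requirement(successes: int, failures: int,
                      required: int, min_ratio: float = 0.80) -> bool:
    """You need `required` successes AND they must be at least `min_ratio`
    of all your attempts of that type."""
    total = successes + failures
    if successes < required:
        return False
    return total == 0 or successes / total >= min_ratio

def offensive_post_outcome(deleted_offensive: int, posts_in_window: int,
                           threshold: int = 3, max_share: float = 0.20) -> str:
    """Demotion rule for posts deleted as offensive in the last 14 (configurable) days."""
    if deleted_offensive < threshold:
        return "ok"
    if posts_in_window and deleted_offensive / posts_in_window > max_share:
        return "demote_to_tl0"
    return "raise_priority_flag"  # enough deletions, but a small share of activity

# The example from the first bullet: 5 accepted edits out of 25 total fails.
print(meets_requirement(successes=5, failures=20, required=5))  # False
```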

I think it is also commonly agreed that editing should be decoupled from trust levels, and that its privileges should be given to users who have a lot of approved suggestions (and of course to moderators (=TL5)).

4 Likes

For this use case, the limit could be 10 comments in a 24-hour period, excluding these (a rough sketch follows the list):

  • Comments on your own questions or answers
  • Comments on answers to your questions
  • A single reply to a comment that pinged you
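
A rough sketch of that limit with the three exclusions could look like this; the attribute names are hypothetical stand-ins for whatever the data model ends up being:

```python
# Hypothetical check for "10 comments per 24 hours" with the exclusions above.
from datetime import datetime, timedelta

DAILY_COMMENT_LIMIT = 10  # configurable per site

def may_comment(recent_comments, user) -> bool:
    """Count only the last 24 hours' comments that no exclusion covers,
    then compare against the limit."""
    cutoff = datetime.utcnow() - timedelta(hours=24)
    ping_reply_used = False
    counted = 0
    for c in (c for c in recent_comments if c.created_at >= cutoff):
        if c.parent_post.author_id == user.id:
            continue  # comment on your own question or answer
        if c.parent_post.is_answer and c.parent_post.question_author_id == user.id:
            continue  # comment on an answer to your question
        if c.is_reply_to_ping and not ping_reply_used:
            ping_reply_used = True
            continue  # a single reply to a comment that pinged you
        counted += 1
    return counted < DAILY_COMMENT_LIMIT
```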

I’ve noticed that some SE users get off to a rough start in their first few posts, especially when their first experience with its voting involves trying to be the fastest gun in the west for their chosen language tag’s new question feed on Stack Overflow. The users we want to retain are those who realize that their early posts are irredeemable and learn from mistakes. This meshes with the “(recent)” in “take (recent) history into account.” So would it be better to count only live posts or both live and recently deleted posts in this TL calculation?

Some users have intermittent access to the Internet. For example, a transit passenger may live in a city whose transit system does not provide Wi-Fi to passengers and may not subscribe to a cellular data plan that allows tethering a laptop computer. These users download questions to answer, go offline, prepare answers, go back online, and paste them into a web form. To make Codidact offline-friendly, consider implementing rate limits on time scales shorter than a day using a token bucket or sliding window algorithm: 2 questions per 60 minutes, 4 posts (questions or answers) per 60 minutes, and 4 comments per 1 minute.
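
For illustration, a token bucket with those numbers would look roughly like this (a sketch of the idea, not a claim about how Codidact will implement it). Because each bucket starts full, a returning offline user can paste a short burst of prepared posts before the shorter-window limits kick in:

```python
# Token-bucket sketch for the short-window limits proposed above.
import time

class TokenBucket:
    def __init__(self, capacity: int, window_seconds: float):
        self.capacity = capacity
        self.refill_rate = capacity / window_seconds  # tokens per second
        self.tokens = float(capacity)                 # start full: allows a burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 2 questions / 60 min, 4 posts / 60 min, 4 comments / 1 min (per user)
limits = {
    "question": TokenBucket(2, 60 * 60),
    "post":     TokenBucket(4, 60 * 60),  # questions and answers combined
    "comment":  TokenBucket(4, 60),
}

def may_submit(kind: str) -> bool:
    return limits[kind].allow()
```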

3 Likes

I agree, limiting it to the last X days might be a good idea. I’d propose X to be 180 by default, but configurable. However, two bad posts only block you if you don’t fulfill the other requirement (five good posts) but have good edits. So it’s not a problem to get past this by learning from mistakes.

1 Like