I, too, don’t like the idea that bounties somehow make you less reputable on SE. One of our top users on one of my sites shows up much lower in the list than warranted because of great generosity with bounties.
I wonder whether we should allow anybody to award public, attributed kudos, with the catch that we’ll also show the number you’ve given out. There are people from whom getting a kudo at all is very nifty (ooh, Jon Skeet liked my answer!), and if somebody who rarely gives kudos gave you one, that’s pretty spiffy too (no names in this example; fill in your own). We’d probably want a low daily limit so they continue to mean something, and so people don’t feel social pressure to sprinkle them all over the place.
They shouldn’t be anonymous, because the whole point of the (e.g.) Gilles seal of approval is that we know it came from Gilles. Use it on stuff you stand behind.
By making them available to everybody, we include people who have “site-local” renown that isn’t concentrated in one or a few tags, and also people who are new to the site but are known from outside. It can sometimes be hard to get passing experts to stay and engage, but this would be an easy way to increase their visibility.
I like the idea of attributed kudos, but the downside of showing how many you’ve given out is that some people may try to game / maximise it, i.e. give the maximum number of kudos every day, even to posts they don’t feel deserve it.
Or people may judge you for not giving enough kudos (see also the whole acceptance rate SE fiasco) or for giving too many.
Hmm, yes. I’d forgotten about acceptance rate. So maybe we don’t show the number directly; site regulars will get to know who’s kudo-generous and whose are more rare, and visitors probably won’t care very much.
The biggest flaw in the SE system, by far, is that it mixes domain knowledge and moderator suitability into a single mess called “user trust”. We need to get rid of that system.
A good domain expert doesn’t necessarily make a good moderator, or vice versa. Votes on answers should count towards domain-knowledge reputation; tasks like edits, close votes, flagging, and reviews should count towards moderator reputation.
With this in mind, I think your proposal is quite sound, much better than what SE has currently. Maybe just three categories though?
Site experience. Everything you do counts for this - like the SE rep system.
Domain experience. Votes on answers count for this.
Moderator experience. Edits, reviews, flagging etc.
Maybe come up with some fancier names for the categories though.
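The three-category split above could be sketched as a small ledger that routes each action into its own bucket instead of one combined number. This is only an illustration: the category names and the action strings (`answer_vote`, `edit`, etc.) are made up, not a committed design.

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch: separate reputation categories instead of one
// combined "trust" number. All names here are illustrative.
class ReputationLedger {
    enum Category { SITE, DOMAIN, MODERATOR }

    private final Map<Category, Integer> totals = new EnumMap<>(Category.class);

    void record(String action, int amount) {
        // Everything you do counts toward site experience...
        add(Category.SITE, amount);
        // ...votes on answers also count toward domain experience...
        if (action.equals("answer_vote")) {
            add(Category.DOMAIN, amount);
        }
        // ...and edits/reviews/flags toward moderator experience.
        if (action.equals("edit") || action.equals("review") || action.equals("flag")) {
            add(Category.MODERATOR, amount);
        }
    }

    private void add(Category c, int amount) {
        totals.merge(c, amount, Integer::sum);
    }

    int total(Category c) {
        return totals.getOrDefault(c, 0);
    }
}
```

The point of the sketch is just that one event can feed more than one category, so we never have to collapse everything into a single score.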
The only real discussion about User Trust in this thread seems to be that it shouldn’t be linked to kudos/rep. The majority of these other replies would fit best in MVP: Reputation. While I understand the mention of a reward system, we seem to have strayed off of that topic also.
Has access to all moderation tools that we come up with in the future.
Trust Level 6 - Council
This is a temporary trust level granted to review council members when dealing with disciplinary actions. This could also serve for the developer-tier of trust, assuming developers don't just have console access to each site.
May adjust Trust Levels of all TL2-5s.
Please leave any feedback you may have. I’m happy to make changes based on people’s opinions; I’m not experienced in Q&A moderation, so I don’t know what the appropriate points are at which to promote/demote people.
This seems low to me, but I’m not against having a cap. Perhaps it could be 5 comments on a question/answer that is not theirs, but on their own question page they shouldn’t be limited, because that could be frustrating if the user is trying to clarify something or respond to questions other users asked in the comments.
I like the idea of a trust level system - it’s granular enough that it works well for MVP, and it leaves room for expanding to multiple “tracks” at a later date for different types of privilege (posting vs. reviewing vs. moderating vs. editing, etc), if we want to do that.
Most restrictions should be removed for OPs on their own posts, whatever trust level they’re at. For example, the 5 comment limit at TL1 should exclude comments on your own posts.
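As a sketch, that exemption is just a short-circuit before the limit check. The limit of 5 comes from the proposal; everything else here (method and field names, ID types) is illustrative.

```java
// Sketch: a TL1 daily comment limit that exempts the OP's own posts.
class CommentLimit {
    static final int DAILY_LIMIT = 5; // the proposed TL1 cap

    static boolean mayComment(long commenterId, long postAuthorId, int commentsToday) {
        // No limit on your own posts, so the OP can always respond.
        if (commenterId == postAuthorId) return true;
        return commentsToday < DAILY_LIMIT;
    }
}
```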
Tracking time spent on the site is… well, not difficult to do, but it’s unusual and I’d consider it unnecessary extra work, especially for MVP. Restrictions should be set so that, by their nature, they require having spent time on the site - e.g. the “5 well-received questions” at TL2 necessitates spending a while on the site, because asking 5 questions takes at least 3 days under the restrictions for previous levels.
This feels low. 10, maybe?
This is high, especially if you’re looking at these as AND requirements rather than OR. I’d suggest default values of 100-150, 50-75, and ~15. We need to strike a balance between ensuring people have the experience to use the site effectively, while also not making people work too hard to access extra tools, otherwise we don’t have enough people moderating to make the site work.
These values should probably all be made configurable items, rather than hard-coded, so that we and anyone else using Codidact can chop and change them as we want.
Temporary locks I can get behind, and we might also want to consider something akin to question protection on SE, perhaps at TL3.
Blocking or suspending users should be left to full moderators - they’re the people whom the community has voted for. Users at TL4 are, effectively, still “regular users”, and letting unvetted, unvoted-for users suspend others, even temporarily, doesn’t sit well with me.
Why? What’s the use-case for this? We shouldn’t be granting exceptions to the requirements for the next trust level (i.e. no promoting up), and if users are being a problem we should be using suspensions or disabling specific feature access rather than changing trust levels (i.e. no demoting down).
As @luap42 said, we should not be hard-deleting posts. The only people with the ability to hard-delete should be those with database access (or possibly through a “legal/compliance/maintenance tooling” UI separated from the site). Speaking of which…
We don’t need this one. Rather than having a 6th trust level, we just need an “admin” flag on user accounts which serves as a total override - if the admin flag is set on your account, you can access everything and all restrictions are removed, no matter what trust level you’re at. This can be set on staff and developer accounts (i.e. those running the site), and temporarily set on review board members when acting in their enforcement capacity.
That was the intention for TL3+ - customizable to the website you want.
I’ll add these to TL5 - could you expand on them slightly?
That was sort of the point of TL6. I think integrating it into the TL system would be much easier than creating a full new indication of administration.
This is great! We can quibble over some of the specific numbers, and I share @ArtOfCode’s concerns about raw “time on site” being a factor, but those are details. Let’s do this.
Constants should be adjustable per-site, not just per-instance. I’ve been on sites where it would take months to be able to raise a hundred or more (legitimate) flags. We’ll want communities to be able to scale flag and review requirements with the activity of the site.
The only tool we have to prevent problematic behaviours on SE is suspension. It’s a very blunt tool. We can message users, but ultimately if they don’t comply we can’t just disable their ability to comment or review; we have to suspend them. I’d like us to be able to disable specific abilities or groups of related abilities in Codidact.
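A minimal sketch of what that could look like, assuming each account carries a set of individually disabled abilities (the ability names are illustrative):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: instead of suspension as the only tool, moderators disable
// specific abilities on an account. Ability names are illustrative.
class AbilityRestrictions {
    private final Set<String> disabled = new HashSet<>();

    void disable(String ability) { disabled.add(ability); }

    void restore(String ability) { disabled.remove(ability); }

    boolean canUse(String ability) {
        // A user keeps every ability that hasn't been explicitly disabled.
        return !disabled.contains(ability);
    }
}
```

Grouping related abilities (say, all review queues) under one name would let moderators switch off a whole area at once without touching anything else.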
On the contrary - creating an extra trust level requires integrating into the requirements system, but the concept of a specific access list doesn’t fit very well into that system. Adding a single boolean field to the user table, on the other hand, is dead easy and can be integrated into permissions checks very easily:
public class User {
    public bool IsAdmin { get; set; }
    public List<Privilege> Privileges { get; set; }

    public bool HasPrivilege(string privilegeName) {
        // Admins bypass all checks; everyone else needs the named privilege.
        // (Any() requires a using System.Linq; directive.)
        return this.IsAdmin ||
               this.Privileges.Any(priv => priv.Name == privilegeName);
    }
}
This seems overly restrictive on answers. Most of the bad content we want to avoid is bad questions. Yes, bad answers definitely happen, but not as often, and they are easier to deal with because the community rates them relative to other answers.
Throttling back what you can ask of the site until you’ve shown you can ask without damage makes sense. But I’d be rather pissed if I came to the site, saw a bunch of questions I knew I could write good answers for, and then had the system prevent me. Maybe throttle answers only as long as none of them are “negative” (however that is rated), until the total number of answers is high enough for us to rate you otherwise.
Again, questions and answers don’t have the same quality issues and potential to damage the site.
Once the total number of posts gets high enough, you should be measuring the average or ratio, not absolute numbers. 5 well-received questions plus 10 crappy ones is very different from 5 total questions, all well received.
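As a sketch, the check could switch from an absolute count to a ratio once the sample is big enough. The 10-post cutoff and 50% bar below are assumptions for illustration, not proposed values:

```java
// Sketch: judge the ratio of well-received posts once the total is large
// enough, falling back to an absolute count for small samples.
// The cutoff (10) and the bar (50%) are illustrative assumptions.
class QuestionQuality {
    static boolean meetsBar(int wellReceived, int total) {
        if (total < 10) {
            // Small sample: an absolute count is all we can go on.
            return wellReceived >= 5;
        }
        // Larger sample: require a majority to be well received.
        return (double) wellReceived / total >= 0.5;
    }
}
```

Under this sketch, 5 well-received questions out of 5 passes, while 5 well-received out of 15 does not, which matches the distinction above.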
This makes sense from the perspective of someone who just wants to answer, but in an overall, by-the-numbers sense, bad answers are a dime a dozen, while questions so irredeemably bad that they can’t be fixed by a little editing and need to be deleted are much rarer.
I mean if you look at the deleted answers underneath a protected question, there’s a lot of junk down there.
If we are worried about overall quality, there are a lot more bad answers than bad questions.
Voting can push bad answers out of immediate view, though. Questions, on the other hand, are there for everybody. (We’ve been avoiding votes and scores for questions and thus can’t use that to affect visibility. Maybe we’ll decide to do that later, but we haven’t felt we needed it yet, especially for MVP.)
I agree with applying the limit just to questions – what you can ask the site to do for you – and not also to answers. We also need to talk about rate limits in general to counter aggressive spambots, but the rate limits we use shouldn’t affect normal, constructive use of the site by actual humans.