I’m going to take this opportunity to remind everyone to please (a) stay on-topic, and (b) be civil. These are both in our guidelines, specifically these three points. Please keep posts to the topic of figuring out what makes this better than SE, and remember to respond to posts not people.
Wow, this thread got gigantic. And I’m definitely hoping we’re approaching some sort of conclusion by now…
Some comments - most are short, unfortunately not all:
tag badges are a nice feature in that respect.
While the problem indeed exists, I find it rare - the exception rather than the rule, I’d say, at least on the technical sites where I participate(d) the most.
Here, I might agree with you in part. Meaning: I, personally, still believe a core “gamification” system to be necessary to the success of the platform; however, the mechanism of implementation can be improved in a number of ways. None of them major when taken in isolation, though. In essence, my opinion is that both a score and badges are required¹, but the precise way they are awarded can benefit from a revision.
¹: Otherwise, what you end up with is something that looks an awful lot like Quora. I’d hate to end up with something that looks like Quora.
Can’t that be achieved with less drastic changes to the core model, though? We can consider each shortcoming individually and evaluate simple, nondisruptive measures for each. If you go over some of my first proposals on Discord, you’ll find suggestions to discourage duplicate questions and answers, which included reverting all reputation gained (and we can go further: repeat “offenders” - those who display a pattern - could be penalized by actively losing reputation for engaging in duplicates). You’ll also find suggestions to facilitate question migration and to notify downvoters when a post is edited, so that downvotes can be used more freely, plus one suggestion to minimize the “fastest gun in the west” problem (I can already think of improved ideas in that respect). Some problems are not technical but social, and can be mitigated in the form of policies, such as those that require questions to show some research effort and to be useful. Usefulness is a relative concept, admittedly - a question can be useful for the OP but not so much for others - but on the other hand one can certainly identify when a user is asking something just for the sake of rep points.
There were at least a couple of other great ideas by other members, and I can’t seem to find them now. They should come up again at some point, though. I was hoping the @Contributor (/docs) folks would go through the archived channels and transfer the greatest suggestions over here, but it didn’t turn out that way. Not to mention totally different ideas that we have yet to contemplate, such as novel review queues for quality control (with some form of penalty for repeatedly receiving poor reviews). Of course, with score penalization comes site-privilege penalization as well. We use reputation as a currency of trust; you can gain trust in the platform by gaining reputation, and you can also lose it.
Well, just a friendly reminder, as you’re probably aware by now, that there is strong disagreement regarding both assertions (first two sentences)!
Remember 3-4 years ago? The landscape was vastly different. We had good-willed people doing maintenance work, and even with the platform leaving much to be desired - and it still does - the community managed to build a high-quality repository of knowledge. What happened since then? Not only did the platform not improve, but positive engagement began declining as well. The way I see it, this is due to a sentiment that goes more or less like this: “If the powers that be don’t care enough, why should I?”
It’s a big shame, really.
Maybe we can follow the strategy used by Math SE and MathOverflow for different branches of knowledge: one site aimed at highly skilled professionals, another for beginners. Or maybe we could separate things on the same site (I suggested a beginner “meta-tag” here; there are other possibilities).
In any case, “just do X for me” will never make for good-quality questions. Anyone who expects random people from the internet to dedicate their time to helping with their issues must always be willing to put in some genuine effort themselves, and must demonstrate that.
And as an addendum, we should respect the fact that everyone deserves to have their questions hosted on our site(s), no matter how localized or “newbie” a problem might be. That is, as I just said above, provided the question passes all quality filters (not a duplicate, shows research effort, is a genuine problem or one that a select subset of users will find interesting and useful, even if that subset is not the majority of users).
This specific brainstorm idea, in my opinion, is too complicated and potentially creates more problems than it solves. I advise against it.
(end pt. 1)
The docs people did not sift through all the Discord stuff because there is way too much and it’s scattered in way too many places. We discussed having people who are so inclined culling that and linking relevant stuff in one place (the doc-todo channel); a few things ended up there and we handled those, but I think at this point, anybody who wants to bring something here from there should just do it individually.
It had seemed to me that we were reaching consensus not to have a single reputation number, but your message here talks about rep. The older Discord ideas would have assumed rep, because they predate the “do we need rep?” discussion; could you clarify whether you are still advocating for a rep number?
I agree that gamification of some sort is important for engagement. That’s why I suggested breaking a raw rep number into various “tracks” and showing that information where relevant – # answers given, top tags, # edits, etc. Maybe (long-term) what’s shown on the user card is context-dependent – when an answer is from somebody holding a tag badge for a tag on that question, show that (for example). I don’t have a full proposal and it’s not MVP. I guess for MVP we’ll just have name and gravatar showing up on posts, though much more info can be shown in the profile. We should work toward showing more on posts. But if we compute and show a reputation number now, we’ll never be able to get rid of it later – so let’s not show it now.
Well, let us please continue this discussion on Discord (or open a new topic here and ping me!). If you go with the second route, please mention our existing conversation there to make it easier for me and everyone to follow up.
I can totally understand that, believe me. From my part I will be transferring some of my own personal proposals, whenever appropriate or necessary.
Didn’t notice; there’re 110 messages here and I’m currently at #82. Still catching up.
I don’t necessarily advocate for a single reputation number, not strongly anyway. Perhaps we could do with discrete stats, like “x answers total, y accepted answers, z most-voted answers, n helpful flags, r useful revisions”, etc. Perhaps. At least for starters. Should we end up choosing to have a rep number, we’ll have many months to come up with a clever, sensible and fair method to calculate it based on these metrics. This can even be done post-launch. And it can be revised.
One thing is for certain, though - when it comes to privileges (system trust to perform actions), a single number would surely make things so much easier.
Whatever we end up deciding, the way such privileges are to be implemented must be defined, and the specifications should be documented as soon as possible. It doesn’t have to be a complete, much less a final, specification - just something to get us started on dev work.
Hah, similar thinking. I have yet to see these suggestions by you, sorry.
In my opinion that is debatable. But I can’t argue that right now.
This is a real concern. Without a single number, we have to manage multiple paths for someone to get to privileges. There are likely to be:
- People who mostly ask questions, though I suspect after enough asking they will start answering if they stick around
- People who only answer. That’s me on some SE sites - I find answers to my questions because someone else already asked the questions, but I stick around and help answer others
- People who mostly edit, comment and in general “help” but only occasionally answer and rarely ask questions. These people work their way up in SE based on a bunch of +2 edits - on SO that’s “suggested edit is accepted: +2 (up to +1000 total per user)” - i.e., you could work up to a pretty high privilege level without asking or answering a single question.
So if we do something other than a single Reputation Number, we need to figure out how to let any active enough user get to reasonable privilege levels.
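To make that concrete, here is a minimal sketch of the multiple-tracks idea - all function names and thresholds are invented for illustration, not a proposal. It models the SO edit mechanic quoted above (+2 per accepted suggested edit, capped at +1000) alongside per-track thresholds, so askers, answerers and editors each have a path to the same privilege:

```python
# Hypothetical sketch: multiple tracks toward one privilege.
# All names and numbers are illustrative, not a concrete proposal.

EDIT_REP_PER_ACCEPT = 2   # SO: "suggested edit is accepted: +2"
EDIT_REP_CAP = 1000       # SO: "up to +1000 total per user"

def edit_reputation(accepted_edits: int) -> int:
    """Reputation earned purely from accepted suggested edits, with the cap."""
    return min(accepted_edits * EDIT_REP_PER_ACCEPT, EDIT_REP_CAP)

def can_vote_to_close(asking_rep: int, answering_rep: int, editing_rep: int) -> bool:
    """Grant the privilege if ANY track clears its (made-up) threshold."""
    return asking_rep >= 150 or answering_rep >= 150 or editing_rep >= 300

# A pure editor with 600 accepted edits: 600 * 2 = 1200, capped at 1000.
rep = edit_reputation(600)
print(rep, can_vote_to_close(0, 0, rep))
```

The point of the sketch is only that an “or across tracks” rule lets every kind of active user reach privileges without a single merged number.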
Joel had some regrettable ideas and this would be one of them IMO.
Again, I can’t help but be reminded of Stack Overflow Documentation. The awesome idea that endured months of poor quality contributions and still couldn’t manage to get rid of that culture after a few iterations - and ended up failing catastrophically. Not saying we can’t do better, but hey, we can also do better on top of something that was already better to begin with.
- The asker is the one contributing the question though, and supposedly has a genuine problem at hand and we should consider helping both OP and future readers.
- The recent changes were made for the sake of advertising traffic only. Terrible move. It was fine as it was before.
Yep, this is an extreme position. I hope we can aim for some middle ground!
Extremely good point!
Maybe not, but why differ for no good reason? For the sake of keeping things simple - and intuitive - I advise we display scores on questions and answers alike.
That isn’t a realistic scenario at all… this simply does not happen as described.
Like we already do on Stack Overflow (and many other SE sites)?
SURE! (it works, is all I’ve been saying).
(end pt. 2)
- vBulletin has a reputation system that doesn’t display a score number.
- Discourse has a trust level system that doesn’t display a score number.
- It is possible that both of these rely on algorithms that have internal trust score numbers, they are just never displayed. It is also possible that these algorithms simply take into account the various stats that would compose a trust score, instead.
It can be done. Easier or harder, but possible.
In this thread, we seem to have agreed on what we’re trying to build, albeit not so much on one specific aspect - the display of a public reputation score. It is probably time to declare consensus on the former. Regarding the latter, we can open a vote, or even another discussion thread. We can do that now or defer it to later months.
As for me, personally, I think I’d prefer the system to have public reputation, but this is something to keep in mind for the weeks ahead. We can definitely get some groundwork started without that.
We need to work all this out, of course. But one way to work on this is to build a schema that supports:
- Metrics of various “events” (Q/A/Comments, Upvotes Q/A/Comments, Downvotes Q/A/Comments, Edits of other users’ posts approved, update profile, take the Tour, whatever)
- Multiple Reputation Values based on various metrics (which if there is only one Reputation Value then this is like SE, but could be 2 or 3 or more)
- Privileges granted based on one or more Reputation Values. (and if there is only one Reputation Value, this is like SE - but with multiple could be that you get “vote to close privilege” if you have a Q-based Reputation 100 or a A-based Reputation 200 or whatever)
- Badges granted based on one or more Reputation Values or Metrics - e.g., Bronze/Silver/Gold “Q badge” at 500, 1000, 2000 Q-Reputation; “Editor badge” at 100 Edits; etc.
We build a flexible “Black Box” that does the calculations for each user whenever something changes. This would allow for pretty much any of the ideas that have been proposed and lets us move forward with the rest of the design while still working out the “best” reputation/privilege/badge system to actually use. It would allow total flexibility (since it is all database-driven) for changes system-wide or community-specific.
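As a sketch of what such a database-driven “Black Box” could look like (every event name, weight and threshold below is a placeholder, not a proposal): events are logged, a per-community weight table folds them into one or more Reputation Values, and privileges are just queries over those values, including the “X-rep or Y-rep” alternatives described above:

```python
from collections import defaultdict

# Per-community configuration (would live in the database, not in code).
# Event name -> {reputation value: points}. All numbers are placeholders.
WEIGHTS = {
    "question_upvoted": {"q_rep": 5},
    "answer_upvoted":   {"a_rep": 10},
    "edit_approved":    {"edit_rep": 2},
}

# Privilege -> list of alternative requirements; meeting any one suffices.
PRIVILEGES = {
    "vote_to_close": [{"q_rep": 100}, {"a_rep": 200}],
}

def compute_reputation(events):
    """Fold a user's event log into the configured reputation values."""
    rep = defaultdict(int)
    for event in events:
        for value, points in WEIGHTS.get(event, {}).items():
            rep[value] += points
    return rep

def has_privilege(rep, privilege):
    """True if any alternative set of thresholds is fully met."""
    return any(all(rep[v] >= need for v, need in alt.items())
               for alt in PRIVILEGES[privilege])

events = ["answer_upvoted"] * 25 + ["edit_approved"] * 3
rep = compute_reputation(events)            # a_rep = 250, edit_rep = 6
print(has_privilege(rep, "vote_to_close"))  # a_rep 250 meets the 200 alternative
```

Because the weights and thresholds are data rather than code, a single Reputation Value (SE-style) is just the special case of one entry per event, and each community can tune its own tables.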
A post was split to a new topic: Handling wrong/outdated content
This thread has become really, really long, and many posts here are also quite long. Please restrict replies to this post to responses that really answer the question:
What are we trying to build?
Please reply here with broad, general suggestions about what our site should be. Do not reply with specific, detailed suggestions for individual features. They belong in standalone posts with the tags #mvp (for features that should be required in MVP) or #non-mvp (for other features).
Is there any proof it never has in any situation on any site in the SE network? I agree the way I contrived this particular case is unlikely and may never have happened, but it is a possibility.
I went extreme as the quick, simple example, but does it work better if the +25 are from users with 10+ years of professional experience with the topic at hand, across multiple environments and multiple projects, versus -15 from users with less than two years of experience who have worked for a single company on a single project? That’s a much more plausible situation, but also far longer to type and read.
Nevertheless, the point I was making still stands: without a statistically significant number of votes to offset “odd” situations like those proposed, the raw vote count does not provide relevant data to the end user, as they cannot know the “value” of those votes. They are simply left with the potentially wrong conclusion that more votes is better, or that posts without negative votes are better than those with negative votes (if they can check).
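The arithmetic behind that point can be made concrete with a toy weighting (the scheme below - a vote counting for the voter’s years of topic experience - is invented purely for illustration, not something anyone has proposed): two posts with identical raw scores can carry opposite signals once the voters’ “value” is considered.

```python
# Illustrative only: weight each vote by a proxy for voter expertise.
# Each vote is (is_upvote, years_of_topic_experience).

def raw_score(votes):
    """What sites like SE display: +1 per upvote, -1 per downvote."""
    return sum(1 if up else -1 for up, _years in votes)

def weighted_score(votes):
    """Toy alternative: a vote counts for the voter's years of experience."""
    return sum((1 if up else -1) * years for up, years in votes)

post_a = [(True, 10)] * 25 + [(False, 2)] * 15   # +25 experts, -15 novices
post_b = [(True, 2)] * 25 + [(False, 10)] * 15   # +25 novices, -15 experts

print(raw_score(post_a), raw_score(post_b))            # both show +10
print(weighted_score(post_a), weighted_score(post_b))  # 220 vs -100
```

Both posts display “+10”, yet under the toy weighting one is strongly endorsed by experienced users and the other strongly rejected by them - exactly the information the raw count hides.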
- A free open-source customizable core of a Q&A platform which anyone can adapt to their own needs;
- A rival/friendly network, better than SE, based on a customized version of that core, under non-profit, quality-content-oriented governance.
In short: I would wish to see something built that allows better ways to cross-link questions and answers.
StackExchange has this by means of duplicates (which is more like closing the question) or by adding an overview of related and linked questions. But I feel that it must be possible to do this better.
I wouldn’t be against any brainstorm idea before the brainstorm is over. But maybe I missed whether it had already been decided ‘what we’re trying to build?’ My particular brainstorm idea was related to Bertieb’s “we should welcome useful content from anyone”. If that is desired then we need ideas about how to make that workable (mixing MathOverflow and math.SE ; is that gonna work without any technical changes?).
I do not believe that there needs to be a lot of technical changes for this particular brainstorm idea:
Already now, StackExchange links questions: one can mark a question as a ‘duplicate’, and then it becomes linked to the previous question. The downside currently is that those duplicate questions are also closed, any activity on them is stopped, and there is no link back from the older duplicated question.
Some softer way of linking questions together might be useful. Or at least, on the statistics page I encounter many questions which seem a lot the same (very often it is the same concept/topic but in a different setting and not different enough to become duplicate) and it would be nice to have more ways to bundle them (just some additional duplication/copy/linking option) rather than having them swimming around separately.
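A softer linking mechanism like that needn’t be technically heavy. One possible shape (all link-type names here are hypothetical): typed links kept in both directions, with nothing closed, so “duplicate-and-close” becomes just one link type among several rather than the only option:

```python
from collections import defaultdict

# Hypothetical link types; on SE only "duplicate_of" exists, and it closes.
LINK_TYPES = {"duplicate_of", "related_to", "special_case_of"}

outgoing = defaultdict(set)   # question id -> {(link type, target id)}
incoming = defaultdict(set)   # target id   -> {(link type, source id)}

def link(source, target, kind):
    """Record a typed link in both indexes so BOTH questions display it -
    unlike SE duplicates, which link one way and close the source."""
    assert kind in LINK_TYPES
    outgoing[source].add((kind, target))
    incoming[target].add((kind, source))

# Two applied questions linked to one general, more abstract question:
link(101, 42, "special_case_of")
link(102, 42, "special_case_of")

# The general question (42) now lists all its worked special cases,
# and each special case still accepts answers and activity.
print(incoming[42])
```

The point is that the “bundling” asked for above is mostly a data-model decision: a link table with a type column and a back-reference, not a new closure mechanism.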
What this would potentially achieve is that a lot more questions will get attention and will work synergistically together, instead of sort of ‘competing’ against each other. (On Wikipedia, when initially looking up a single question/topic, I often end up clicking several links, with a browser window full of tabs, and spending several late hours reading. On StackExchange this never happens.)
Another example from statistics.SE: there’s an option to tag questions as homework questions. This provides a way to treat different types of questions on the same site, but it could be organised a bit better. Maybe not necessarily hierarchically, but… you have to come up with something when you wish to make it work for a wider range of quality/interest levels.
So that is the hierarchy I was thinking of: to have the possibility to have homework questions and link them to some canonical question; this would be just coupling two already existing functionalities on StackExchange. So technically it might not need to be so difficult. I agree with you that it might be more difficult for the community to organize the posts/questions, but this also depends on the goals of the particular community (personally, I would have little interest to grow the number of questions, but instead I would wish to improve and organise the current questions and answers).
I would agree that the suggestion I made about hierarchy may not be the best. But is there something better? Or… is the solution to do nothing different at all?
To get back on topic: do we want to build a Q&A site that is a vanilla copy of Stack Exchange / Stack Overflow (which I believe is only suitable for mono-culture communities and does not allow a mixing of “content from anyone”)? Or do we want to build a more modern adaptation that improves on SE/SO’s flaws?
It is not so clear to me, in all these dozens of posts (also in other topics), in which directions we wish to deviate from Stack Exchange / Stack Overflow. Obviously there is the management issue and related things like licensing, and we need to discuss ways to import and export data/content. But are there other ways in which the site ought to be different? Those other ways are being discussed on a technical level, but it is not made very clear whether they are actually going to be necessary. When the Codidact community is not clear about this, a lot of the discussions end up long, repetitive and confusing.
Maybe we should quickly get some clarity about potential users, so that these discussions about features can be much more streamlined (goal/user-oriented rather than hypothetical)?
Don’t forget why SE has the closing mechanism in the first place. It is so that the resident users don’t have to keep answering the same thing over and over again. Closing similar questions and pointing them to one canonical question is very useful. I have written questions just for the purpose of them being the canonical question we point others to after closing theirs. It saves a lot of work. By the way, these canonical questions usually get highly upvoted, so I’m not the only one who appreciates them.
Again, folks, keep the resident users and experts in mind. In general, you seem to be too focused on the users asking questions. You have to make life easy and rewarding for the experts, else the site will devolve rapidly.
I’m not sure what you intended here, but I’m worried how this might be interpreted.
Yes, we don’t care who writes good content. Content stands by itself, regardless of author. However, that is not an invitation to allow any post, regardless of site rules for on-topicness, technical level, minimum research to be expected before asking, etc.
Each site needs to be able to set its own rules in this regard, and then enforce them ruthlessly. It doesn’t matter who posts something, but what is posted matters a lot.
Unfortunately, we often see the what and who blurred, especially when people want less stringent rules. At least on SE Electrical Engineering, we’ve had way way too many rants about how newbies are ill-treated. Bad questions get rough treatment, as they should. Most bad questions are written by newbies. However, that does not mean newbies are ill-treated. There are plenty of newbies that post good questions and get good responses. It’s not about the poster but the post.
So yes, we should welcome good content from anyone, but we also have to deal with bad content in the most expedient manner before it drags down the site and burns out the resident answerers. Let’s make sure that “welcome useful content from anyone” doesn’t turn into a “save the poor newbies” mentality or any other excuse to be less vigilant against low quality.
I agree with pointing to canonical questions. But I do not agree with closing the question (or at least a reasonably sized group on stats.se seems to think so).
A lot of questions on stats.se are applied mathematics questions. Such questions stand on their own, yet they may relate to each other. For instance, there are many questions about finding the maximum likelihood in some way or another. Sometimes several relate to a similar concept, and for this case you could often direct them to some more abstract or more general canonical question. However it is not entirely useless to have those more pragmatic worked out cases.
I do not believe that closing such questions would do justice to reasonable questions that may not be fully answered by the canonical example.
Otherwise, we might as well make do with only a couple thousand general questions like ‘how to fit a curve?’ or ‘how to detect and remove outliers?’. I do not believe that this is a good way. The power of the SE sites is that the questions are very applied and come from real-world problems rather than these idealised question types.
Or take math.se as example. Often questions boil down to using a particular technique (which is reoccurring in many questions). But the challenge is to find which technique to use. You can’t refer all those questions like ‘what is the integral of … ?’ to a canonical answer like ‘how to integrate?’.
Bear in mind, folks, that a lot of this sort of thing will be determined not by the software, but by site/community policy - in which case it’s not a question of what we’re trying to build, it’s a question of what community we’re trying to foster.
What we’re trying to build remains, as it always has been, an open, community-focused Q&A platform, and we’ll build the software to support that aim. How we handle different kinds of content from different people will be down to community policy.
Not entirely. The software has to include the mechanisms to carry out that community policy. The ability to close questions is one of those.
Experience has shown that with a large enough user base, there are always a few that give direct answers to homework questions, answer really bad questions, or otherwise can’t resist looking smart even at the expense of the site. Closing questions is necessary to prevent these people from hurting the site in the long run. Those that post bad questions must not get the desired result. Otherwise, there is no cost, and they’ll keep doing the same.
I intended to refer to useful content; content that is off-topic, blatantly duplicate, spam, or rude ranting is in my view not useful for Q&A.
As for the ‘anyone’ part; there had been some discussion about being expert-focused, expert-driven, for experts by experts. While such things can produce valuable output, and I can definitely see potential value in such a proposal in some areas, the software we build and the wider community we surround it with should not cater to that exclusively.
I think this distinction is a useful thing to bear in mind, in the same way that Codidact is the name of the software, not the (or an) eventual community site.
However, while fostering is a good word to describe gathering a community; ‘building’ is good as well. We’re setting in place limits on what the software will achieve, definitely for MVP and in some cases perhaps for the long term, if not for good. Some folks would be delighted with an identical clone of SE’s software, absent SE Inc. Others have a more radical view of how to shake up QA.
This topic has gathered a wide range of views which fall on the spectrum of community policy vs. functionality determined by software, and that’s very useful. It behoves us all to know the rough direction of travel on both - if nothing else, purely from the point of view of engagement and contributions. In the same way it would be unfair to expect someone with absolutely no knowledge of C# or similar to write our backend, people want to create the kind of Q&A site they feel is beneficial.
I’m curious about something.
…what’s actually being built? Are we building software for people to run on, a central site for people to work from like SE, or something else entirely?