On the machine learning vs. different topics part:
Taking the Mac vs. Windows issue as an example, let’s say you are generally a Mac user and therefore the AI optimizes your view for that. But then you have to use Windows at your job, and you happen to have a question about it. So you post on your catch-all site, which has learned that you value the answers from Mac supporters. Now you will prominently see the answers saying “don’t use Windows, Mac can do that much better, here’s how.” Which may match your sentiment as a Mac user, but certainly doesn’t help you with your problem, as you simply don’t have the choice to use Mac.
With separate sites for Windows and Mac, you’ll simply ask on the Windows site, and know you’ll get competent answers from people who like Windows and are interested in solving your problem, without interference from a machine that learned that you want Mac solutions.
In short, relying on machine learning takes away your autonomy: you no longer decide for yourself which content you see; the machine does it for you. Sure, it does so by predicting your preferences, but when it predicts wrong, you’ll have a hard time fighting those decisions (if you even notice that your bad experience is because of the machine learning, and not because the community on that site is generally anti-Windows).
You surely have heard the term “filter bubble” before. What you are proposing is basically the building of such filter bubbles. Filter bubbles are harmful, exactly because you don’t notice them. It’s not your decision to see or not see something, it’s the machine’s decision.
Saying the site should post your questions to a particular group just because that is the one you are most active in is just plain silly. Let’s extend your example and say you are now remodeling your bathroom and post a question regarding “how to replace a faucet.” Would you also expect that to get posted to Mac users? No, because that makes no sense.
Questions should be evaluated by the site for the content of the post when it is categorized. So your primarily-Windows question will get Windows-based tags and be seen by users who have been determined to want to see Windows content. You aren’t likely to get “don’t use Windows, Mac can do that much better, here’s how” from those users.
Hmm. I get it, you believe machine learning is bad. But machine learning is already all around you in some form or another. Most of it is built on older frameworks, but any site or service that provides you with recommendations or suggestions is already doing it.
The key to doing it well is to allow the users to tune their own “views” of what they want presented. Pick any of the major streaming music services and you will see this in action. Is it always perfect? Certainly not, and you just need to give feedback when it isn’t. And when it predicts right (if you even notice that your good experience is because of machine learning), you get the benefit of finding other topics of interest to you, meeting more people that are interested in the same things as you, etc.
Then again, even when I am a member of community X (let’s say Mac from your example), when I visit the “Mac Users” specific site there is still plenty of “noise” that is of no interest to me. The filtering there isn’t perfect either, and often less so, as the filtering is done purely by human knowledge and thought processes, which are far from perfect. And there is nothing (besides maybe the “Hot Topics” list) that brings me anything outside of the Mac “filter bubble” I have placed myself into.
You can make this argument about any sort of filtering you choose, even if it is filtering you choose for yourself. And clearly the wrong filters can be dangerous, but filtering is a daily part of human life. Humans take in far too much information for our brains to process every detail all the time.
Filtering happens and most of it isn’t your own conscious decision. Your brain does it. The media sources you pay attention to can do it. Your cultural group can do it. Where you live can do it. Your circle of friends, your workplace, your [insert whatever you want here] can do it as well.
The smaller you make your potential source(s) of information, the more likely you are to be putting yourself into a filter bubble. Did you not notice that topic-specific communities form their own filter bubbles as well? If you didn’t notice, maybe you should be concerned.
It occurs to me that one could build your ML-based approach on top of our community-based approach. The reverse, however, would not be true.
Personally, I see enough of what Twitter does to feeds (and what I understand Facebook does to them) to not want that kind of filtered view of the content I’m looking for. Plus I value the community aspect of the smaller communities. But there is no barrier to somebody building an uber-site with filtering on top of Codidact’s communities. For that matter, somebody could stand up an instance of Codidact with only one community and a large pile of topics, just as you’re proposing. Either way, somebody would need to inject the ML parts.
I’m new here, having only found out about this effort accidentally a few days ago. I’m trying to catch up with what you all are doing. Most here seem to be thinking from the point of view of a developer or an ordinary user that wants to get questions answered.
I have a different perspective. I’m the expert that you need to cater to, else you won’t have a site. See my reputation on SE’s Electrical Engineering. I got fed up with SE a year ago as it got more and more PC.
You can make a site where everyone’s all so nice to each other. New people will come because they get greeted by fluffy bunnies hopping about in the pretty forest. It’s a Newbies Paradise. They can write sloppy English, use text-speak, not bother with all that annoying punctuation, and if someone makes a slightly snide comment about how the answer is in bold type right on the first page of the datasheet, they can complain and get that person chastised. What a deal! No need to read those silly datasheets!
The problem is that those cute oh-so-polite bunnies don’t actually know anything. They’re nice and welcoming, but can’t answer many questions. Those that could have provided the content have been alienated by the drivel, the low signal-to-noise ratio, and the inability to keep the place clean and focused.
The “worker’s paradise” model of Q&A communities just doesn’t work.
You need a model that keeps the experts engaged and fulfilled. That means you need to understand and cater to their motivations. Several things follow from that:
Providing good content has to be rewarded. SE did this indirectly with rep. It's not the rep itself that anyone cares about, but what it gets you and what it means. It gets you the mechanical ability to do certain things on the system. It also gets you a certain status among the users. Don't underestimate that. Experts need to be recognized as such in some form or another.
The experts need to be given some freedom and leeway to keep the place clean. Not all users make a positive contribution. Any one site isn't for everyone. It's OK, in fact it's necessary, to make the posters of negative content feel unwelcome. Of course that must always be about the content, not the user, but telling someone their question is stupid, with proper explanation, should not only be allowed, but encouraged.
The experts value their intellectual property highly. Authorship matters, if only to them. They are there not only to teach, but to be known for the teaching.
Recognize that different user segments are there for different purposes. Newbies want their questions answered. Experts don't ask many questions, but want a good forum for teaching, new questions that make them think a bit and maybe explore some area they could use a little more knowledge in, and to be respected by the community, particularly the other experts. Others in the middle want to get occasional help, but also want to feel good about providing occasional help.
In the end, none of this works without a small but core base of experts that are “resident”. Therefore, start with meeting those needs, and the rest will follow.
This creates a strong trend of optimizing for the ego of the participants rather than for the benefit of readers. Witness the anti-edit trend. Joel Spolsky's vision for Stack Overflow was that once a question had multiple answers, someone would edit the best answer to add information from other answers.
Then why would anyone bother to volunteer their expert time to answer questions? You expect people to cough up answers, but won’t even give them proper credit!?
But in practice, try to do that and the author of the edited answer will complain that the edit denatures the answer.
Right. I’d be pissed too. Without proper credit, there is no reward. Think about why who is listed as an author of a research paper is a big deal. The information would be the same if the authors weren’t listed. But that does nothing for the other side of the equation, which is the information provider. Why should they bother? And please, don’t give me building the knowledge of humanity or some such fluffy bunny nonsense.
I answer questions because I like teaching, sometimes doing so makes me think about an issue in a different way, and to get some reputation (the real kind, not fake internet points) for knowing something about the topic. Note that actually solving some dweeb’s silly-ass problem isn’t on the list. That’s a by-product. It can yield some satisfaction, but it is far from the main motivator.
Voting does not always indicate correctness.
True, but it generally works well enough on SE. You need some mechanism for others to collectively agree that someone consistently writes good posts.
I would disagree, the reverse would be easier and more natural. Since it is almost Thanksgiving in the US, it should be easier to cut a piece out of a pie and only serve that piece than it is to take a bunch of pieces and turn them back into a whole pie.
Trying to retrofit good machine learning onto a platform that isn’t designed for it from the ground up will simply result in more of the type of examples that you don’t want to experience again. To do machine learning well, you need to design with that intent from the start.
I understand the general reluctance to consider machine learning. Most of the experience users are aware of having with it comes from older platforms that have utilized it (often poorly) to some degree or another while tied to their legacy data structures/code. Where it works well, most people don’t even notice it unless they are trying to be conscious of it.
Anyhow, it is clear based on the responses and likes that this group is unlikely to give much consideration to concepts not currently in the group vision. Since the trend appears to want to go to individual topic-based sites and this would only add to the list of sites I need to visit on a regular basis rather than simplifying my life, it doesn’t appear to be the project for me to commit much time personally.
So I will bow out of at least this discussion and possibly the project as it doesn’t look like it will meet my needs/ideal. I do wish you all well and hope your project does well. I will likely continue to lurk from time to time and may chime in if I feel a need.
I will leave you with one thought. Do not aspire to simply be better than SE; aspire to be a completely unique and better Q&A platform and community. I fear too many posters here are more focused on improving what they had at SE than creating something new. And a clone/community that is only somewhat better than SE doesn’t have much chance in the long run.
I believe you overestimate what machine learning can do. Your faucet example is beside the point because there are unlikely to be any questions that mention both a faucet and a Mac computer, but there should be plenty of questions that mention both Windows and Mac (“I liked this on Mac, can I get this on Windows?”, “Does an equivalent of this Windows software exist on a Mac?”).
You are aware that current machine learning is not strong AI? Heck, there are some humans I wouldn’t trust to get this right!
No, I don’t think machine learning is bad. Machine learning is a great way to automate complex tasks. But there are certain tasks I don’t want to be automated, and for those I also don’t want machine learning.
Now, a good use for machine learning on Codidact could be a tag suggestion mechanism. That is, the algorithm looks at what humans do with tags, and then when someone writes a new question, it figures out which tags most likely apply; the human asking the question then has the option to accept those tags, or to change them before posting. New users would likely take the tags as-is. If the tags are wrong, experienced users will notice and retag right on the spot, which gives the algorithm more data to learn from.
The point is, unlike in your scenario, the user is in control. The machine simplifies the task, but does not bypass the user.
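For concreteness, here is a minimal sketch of what such a suggestion-only tagger could look like, assuming a scikit-learn-style multi-label classifier (all data, names, and the threshold are made up for illustration):

```python
# Hypothetical sketch of a suggestion-only tagger: the model proposes tags,
# but the asker confirms or edits them before posting.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Training data: existing question titles and the tags humans gave them.
questions = [
    "How do I map a network drive in Windows 10?",
    "Why can't my Mac see the office file share?",
    "Keyboard shortcuts for window management on macOS",
]
tags = [["windows", "networking"], ["macos", "networking"], ["macos"]]

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(tags)
model = make_pipeline(TfidfVectorizer(),
                      OneVsRestClassifier(LogisticRegression()))
model.fit(questions, y)

def suggest_tags(draft, threshold=0.3):
    """Return likely tags for a draft question; the asker has the final say."""
    probabilities = model.predict_proba([draft])[0]
    return [tag for tag, p in zip(binarizer.classes_, probabilities)
            if p >= threshold]

# Shown to the asker as pre-filled, editable suggestions, not a decision.
print(suggest_tags("Sharing a printer between Windows and a Mac"))
```

The key design point is the last step: the model’s output is only a pre-filled suggestion that the asker can accept or override.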
Streaming services form your music taste just as much as they react to it, if not more. It is a known psychological effect that the more you listen to a certain type of music, the more you like it (the same is already true for music radio stations). You may consider that a good or a bad thing, but it certainly means that music streaming services are a particularly bad example. Also, music streaming services replace radio stations, where you have even less influence on the music you hear.
Also, music streaming services almost certainly tag the music style by hand. The machine algorithm doesn’t figure it out; it uses that hand-made tagging to inform its recommendations.
Anyway, I don’t usually use music streaming services, and while I do listen to music on YouTube, I rarely use their auto-generated playlists.
This is different because I know I’m on a Mac site, and therefore I know that everything I see will have a Mac bias. And I know that I can easily see a different view by going to the Windows site.
With your machine algorithm, how am I going to see the Windows side if I want to?
Filtering you choose is different in that you can always choose to change your filter. A machine-learned filter is fixed; you have no choice. Well, you can try to actively work against that filter, but that is hard work, much harder than just switching to a different site, or adding/removing a tag from your preferred tags.
That’s not an argument for making things worse; it’s an argument for making things better. It’s not an argument for letting the machine choose the community for you; it’s an argument for being a member of several communities, so you get several different viewpoints.
For the topic-specific site, I see the topic right in the title; I’m constantly reminded that I’m at a place that is biased in a certain way. When a machine learning algorithm determines what I see, it is opaque which selection I’m getting.
Also, the machine learning algorithm cannot adapt to the fact that at different times I am interested in different topics.
Earlier today I browsed math.SE, but not worldbuilding.SE. At other times I browse worldbuilding.SE but not math.SE. How is the algorithm supposed to know that now I’m interested in seeing math topics, two hours later I’ll be interested in worldbuilding topics, and tomorrow I’ll be interested in programming topics?
Well, I guess I could get different accounts, and use them specifically for different topics, but then I effectively get back to topic sites, except with significantly higher effort. And without the option to effortlessly start browsing a new topic, as I’ll first have to train the machine learning algorithm on a new account that I’m now interested in that other topic.
So am I. I am familiar with this point of view and I do take it into account.
I have no idea what you mean by that, but using provocative language won’t win you any friends.
We’re drifting towards some similar metric like “total number of upvotes on answers”, just not emphasized as much as on SE — you’d see it on a user’s profile, not on every one of the user’s posts.
The topic experts aren’t always the ones who do the most curation (editing, voting, closing, etc.). Some people just want to answer questions, and that’s fine.
Failed beta sites show that this is not the case. A small core of experts is necessary, but not sufficient.
To educate? To pay forward? To get a bigger number? And I do expect to give credit.
Do you think you’re so perfect that nobody can improve what you wrote? I don’t. Why are you bringing credit into this discussion? AFAIR nobody has proposed removing credit.
Purely emotional language with no attempt to convey any meaning. Ignored.
That paragraph I agree with. It has nothing in common with the rest of your post.
I’m going to take this opportunity to remind everyone to please (a) stay on-topic, and (b) be civil. These are both in our guidelines, specifically these three points. Please keep posts to the topic of figuring out what makes this better than SE, and remember to respond to posts not people.
Wow, this thread got gigantic. And I’m definitely hoping we’re approaching some sort of conclusion by now…
Some comments - most are short, unfortunately not all:
Tag badges are a nice feature in that respect.
While the problem indeed exists, I find that is rare. The exception rather than the rule, I’d say. At least on the technical sites where I participate(d) the most.
Here, I might agree with you in part. Meaning: I, personally, still believe a core “gamification” system to be necessary to the success of the platform; however, the mechanism of implementation can be improved in a number of ways. None of them major when taken in isolation, though. In essence, my opinion is that both a score and badges are required¹, but the precise way they are awarded can benefit from a revision. ¹: Otherwise, what you end up with is something that looks an awful lot like Quora. I’d hate to end up with something that looks like Quora.
Can’t that be achieved with less drastic changes to the core model, though? We can think of each shortcoming individually and evaluate simple, nondisruptive measures for each of them. If you go over some of my first proposals on Discord, you’ll find suggestions to discourage duplicate questions and answers, which included reverting all reputation gained (and we can go further: repeated “offenders” - those who display a pattern - could be penalized by actively losing reputation for engaging in duplicates). You’ll find suggestions to facilitate question migration and to notify downvoters when a post is edited, so that downvotes can be used more freely; and one suggestion to minimize the “fastest gun in the west” problem (I can already think of improved ideas in that respect). Some problems are not technical but social, and can be mitigated in the form of policies, such as those that require questions to show some research effort and to be useful - a relative concept, admittedly, particularly considering that a question can be useful for OP but not so much for others, but on the other hand one can certainly identify when a user is asking something just for the sake of rep points.
There were at least a couple other great ideas by other members, and I can’t seem to find them now. They should come up again at some point, though. I was wishing for the @Contributor (/docs) folks to have gone through the archived channels and transferred the greatest suggestions over here, but it didn’t turn out that way. Not to mention totally different ideas that we have yet to contemplate, such as novel review queues for quality control (with some form of penalty in case of repeatedly obtaining poor reviews). Of course, with score penalization comes site privilege penalization as well. We use reputation as a currency of trust; you can gain trust in the platform by gaining reputation, and you can also lose it.
Well, just a friendly reminder, as you’re probably aware by now, that there is strong disagreement regarding both assertions (first two sentences)!
Remember 3-4 years ago? The landscape was vastly different. We had good-willing people doing maintenance work, and even with the platform leaving much to be desired - and it still does - the community managed to have a high-quality repository of knowledge. What happened since then? Not only did the platform not improve, but positive engagement began declining as well. The way I see it, this is due to a sentiment that goes more or less like this: “If the powers that be don’t care enough, why should I?”
It’s a big shame, really.
Maybe we can follow the strategy used by Math SE and MathOverflow, for different branches of knowledge. One site aimed at highly skilled professionals, another one for beginners. Or maybe we could separate stuff on the same site (I suggested a beginner “meta-tag” here; there are other possibilities).
In any case, “just do X for me” will never make for good quality questions. Anyone who expects random people from the internet to dedicate their time to helping with their issues must always be willing to put up some genuine effort themselves and must demonstrate that.
And as an addendum, we should respect the fact that everyone deserves to have their questions hosted on our site(s), no matter how localized or “newbie” a problem might be. That is, as I just said above, provided the question passes all quality filters (not a duplicate, shows research effort, is a genuine problem or one that a select subset of users will find interesting and useful, even if that is not the majority of users).
This specific brainstorm idea, in my opinion, is too complicated and potentially creates more problems than it solves. I advise against it.
The docs people did not sift through all the Discord stuff because there is way too much and it’s scattered in way too many places. We discussed having people who are so inclined culling that and linking relevant stuff in one place (the doc-todo channel); a few things ended up there and we handled those, but I think at this point, anybody who wants to bring something here from there should just do it individually.
It had seemed to me that we were reaching consensus not to have a single reputation number, but your message here talks about rep. The older Discord ideas would have involved rep because that was before the “do we need rep?” discussion; could you clarify whether you are still advocating for a rep number?
I agree that gamification of some sort is important for engagement. That’s why I suggested breaking a raw rep number into various “tracks” and showing that information where relevant – # answers given, top tags, # edits, etc. Maybe (long-term) what’s shown on the user card is context-dependent – when an answer is from somebody holding a tag badge for a tag on that question, show that (for example). I don’t have a full proposal and it’s not MVP. I guess for MVP we’ll just have name and gravatar showing up on posts, though much more info can be shown in the profile. We should work toward showing more on posts. But if we compute and show a reputation number now, we’ll never be able to get rid of it later – so let’s not show it now.
Well, let us please continue this discussion on Discord (or open a new topic here and ping me!). If you go with the second route, please mention our existing conversation there to make things easier for me and everyone to follow up.
I can totally understand that, believe me. For my part, I will be transferring some of my own personal proposals, whenever appropriate or necessary.
Didn’t notice; there’re 110 messages here and I’m currently at #82. Still catching up.
I don’t necessarily advocate for a single reputation number, not strongly anyway. Perhaps we could do with discrete stats, like “x answers total, y accepted answers, z most voted answers, n helpful flags, r useful revisions” etc. Perhaps. At least for starters. Should we end up choosing to have a rep number, we’ll have many months to come up with a clever, sensible and fair method to calculate it based on these metrics. This can even be done post-launch. And it can be revised.
One thing is for certain, though - when it comes to privileges (system trust to perform actions), a single number would surely make things so much easier.
Whatever we end up deciding, the way such privileges are to be implemented must be defined and the specifications should be documented as soon as possible. It doesn’t have to be a complete, much less a final specification. Just something to get us started on dev work.
Hah, similar thinking. I have yet to see these suggestions by you, sorry.
(…) (emphasis mine):
In my opinion that is debatable. But I can’t argue that right now.
This is a real concern. Without a single number, we have to manage multiple paths for someone to get to privileges. There are likely to be:
People who mostly ask questions, though I suspect after enough asking they will start answering if they stick around
People who only answer. That’s me on some SE sites - I find answers to my questions because someone else already asked the questions, but I stick around and help answer others
People who mostly edit, comment and in general “help” but only occasionally answer and rarely ask questions. These people work their way up in SE based on a bunch of +2 edits - on SO that’s “suggested edit is accepted: +2 (up to +1000 total per user)” - i.e., you could work up to a pretty high privilege level without asking or answering a single question.
So if we do something other than a single Reputation Number, we need to figure out how to let any active enough user get to reasonable privilege levels.
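As a rough sketch of what multiple paths could look like (all names and thresholds here are hypothetical, not a concrete proposal):

```python
# Hypothetical sketch: a privilege is granted when ANY qualifying path is met,
# so askers, answerers, and curators can all earn it independently.
from dataclasses import dataclass

@dataclass
class UserStats:
    question_score: int  # net upvotes on this user's questions
    answer_score: int    # net upvotes on this user's answers
    accepted_edits: int  # edits to other users' posts that were approved

# Placeholder thresholds, purely for illustration.
CLOSE_VOTE_PATHS = (
    lambda s: s.question_score >= 100,  # the asker path
    lambda s: s.answer_score >= 200,    # the answerer path
    lambda s: s.accepted_edits >= 50,   # the curator path
)

def can_vote_to_close(stats: UserStats) -> bool:
    return any(path(stats) for path in CLOSE_VOTE_PATHS)

print(can_vote_to_close(UserStats(0, 0, 60)))  # True, via the curator path
```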
Joel had some regrettable ideas and this would be one of them IMO.
Again, I can’t help but be reminded of Stack Overflow Documentation. The awesome idea that endured months of poor quality contributions and still couldn’t manage to get rid of that culture after a few iterations - and ended up failing catastrophically. Not saying we can’t do better, but hey, we can also do better on top of something that was already better to begin with.
The asker is the one contributing the question, though, and supposedly has a genuine problem at hand; we should consider helping both OP and future readers.
The recent changes were made for the sake of advertising traffic only. Terrible move. It was fine as it was before.
Yep, this is an extreme position. I hope we can aim for some middle ground!
Extremely good point!
Maybe not, but why differ for no good reason? For the sake of keeping things simple and intuitive, I advise we display scores on questions and answers alike.
That isn’t a realistic scenario at all… this simply does not happen as described.
Like we already do on Stack Overflow (and many other SE sites)? :) SURE! (it works, is all I’ve been saying).
vBulletin has a reputation system that doesn’t display a score number.
Discourse has a trust level system that doesn’t display a score number.
It is possible that both of these rely on algorithms that have internal trust score numbers, they are just never displayed. It is also possible that these algorithms simply take into account the various stats that would compose a trust score, instead.
It can be done. Easier or harder, but possible.
In this thread, we seem to have agreed on what we’re trying to build, albeit not so much on one specific aspect - the displaying of a public reputation score. It is probably time to declare consensus on the former. Regarding the latter, we can open a vote, or even another discussion thread. We can do that now or defer it to later months.
As for me, personally, I think I’d prefer the system to have public reputation, but this is something to keep in mind for the weeks ahead. We can definitely get some ground work started without that.
We need to work all this out, of course. But one way to work on this is to build a schema that supports:
Metrics of various “events” (Q/A/Comments, Upvotes Q/A/Comments, Downvotes Q/A/Comments, Edits of other users’ posts approved, update profile, take the Tour, whatever)
Multiple Reputation Values based on various metrics (if there is only one Reputation Value then this is like SE, but there could be 2 or 3 or more)
Privileges granted based on one or more Reputation Values (if there is only one Reputation Value, this is like SE - but with multiple, it could be that you get the “vote to close” privilege if you have a Q-based Reputation of 100 or an A-based Reputation of 200 or whatever)
Badges granted based on one or more Reputation Values or Metrics - e.g., Bronze/Silver/Gold “Q badge” at 500, 1000, 2000 Q-Reputation; “Editor badge” at 100 Edits; etc.
We build a flexible “Black Box” that does the calculations for each user whenever something changes. This would allow for pretty much any of the ideas that have been proposed and lets us move forward with the rest of the design while still working out the “best” reputation/privilege/badge system to actually use. It would allow total flexibility (since it is all database-driven) for changes system-wide or community-specific.
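To illustrate what “database-driven” could mean in practice, here is a rough sketch; every metric, weight, and threshold below is a placeholder that a community could tune without code changes:

```python
# Hypothetical sketch of the "black box": metrics, reputation values,
# and privileges are all data, so communities can tune them without
# touching code. All names and numbers are illustrative only.

# Per-user raw event counts, as recorded by the site.
metrics = {"questions": 12, "answers": 40, "answer_upvotes": 310,
           "approved_edits": 25}

# Each reputation value is a weighted sum over metrics (stored in the DB).
reputation_formulas = {
    "q_rep": {"questions": 5},
    "a_rep": {"answers": 2, "answer_upvotes": 10},
    "curation_rep": {"approved_edits": 2},
}

# A privilege lists alternative requirements; meeting any one grants it.
privileges = {
    "vote_to_close": [{"q_rep": 100}, {"a_rep": 200}],
    "edit_freely": [{"curation_rep": 100}],
}

def compute_reputation(metrics):
    return {name: sum(weight * metrics.get(metric, 0)
                      for metric, weight in formula.items())
            for name, formula in reputation_formulas.items()}

def granted_privileges(metrics):
    rep = compute_reputation(metrics)
    return [p for p, alternatives in privileges.items()
            if any(all(rep[r] >= threshold for r, threshold in req.items())
                   for req in alternatives)]

print(compute_reputation(metrics))   # recomputed whenever an event arrives
print(granted_privileges(metrics))   # ['vote_to_close'] for this user
```

Because the formulas and thresholds live in tables rather than code, system-wide or community-specific changes stay cheap.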
This thread has become really, really long. Also, many posts here are quite long. Please restrict responses to this post to ones that really answer the question:
What are we trying to build?
Please reply here with broad, general suggestions about what our site should be. Do not reply with specific, detailed suggestions for individual features. They belong in standalone posts with the tags mvp (for features that should be required in MVP) or non-mvp (for other features).
Is there any proof it never has in any situation on any site in the SE network? I agree the way I contrived this particular case is unlikely and may never have happened, but it is a possibility.
I went extreme as the quick simple example, but does it work better if the +25 are from users with 10+ years of professional experience with the topic at hand, across multiple environments and multiple projects, versus -15 from users with less than two years of experience who have worked for a single company on a single project? A much more plausible situation, but also far longer to type and read.
Nevertheless, the point I was making still stands; without a statistically significant number of votes to offset “odd” situations like those proposed, the raw vote count does not provide relevant data to the end user, as they cannot know the “value” of those votes. They are simply left with the potentially wrong conclusion that more votes is better, or that posts without negative votes are better than those with negative votes (if they can even check).
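To make that concrete with made-up numbers, here is a toy weighting (the weights are purely illustrative, not a proposal for how to measure experience):

```python
# Illustrative only: identical raw vote totals can carry very different
# information once voter experience is weighed in.
def weighted_score(votes):
    """votes: list of (direction, voter_weight) tuples; direction is +1/-1."""
    return sum(direction * weight for direction, weight in votes)

# Both posts have the same raw score of +25 - 15 = +10.
post_a = [(+1, 2.0)] * 25 + [(-1, 0.5)] * 15  # experienced up, novices down
post_b = [(+1, 0.5)] * 25 + [(-1, 2.0)] * 15  # novices up, experienced down

print(weighted_score(post_a))  # 42.5
print(weighted_score(post_b))  # -17.5, despite the same raw +10
```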
The below, in short: I would wish to see something built that allows better ways to cross-link questions and answers.
StackExchange has this by means of duplicates (which is more like closing the question) or by adding an overview of related and linked questions. But I feel that it must be possible to do it better.
I wouldn’t be against any brainstorm idea before the brainstorm is over. But maybe I missed whether it had already been decided ‘what we’re trying to build?’ My particular brainstorm idea was related to Bertieb’s “we should welcome useful content from anyone”. If that is desired, then we need ideas about how to make that workable (mixing MathOverflow and math.SE; is that gonna work without any technical changes?).
I do not believe that there needs to be a lot of technical changes for this particular brainstorm idea:
StackExchange already links questions. One can mark a question as ‘duplicate’, and then the question becomes linked to the previous question. The downside is that currently those duplicate questions are also closed, any activity on them is stopped, and there is no link back from the older duplicated question.
Some softer way of linking questions together might be useful. Or at least, on the statistics site I encounter many questions which seem very similar (very often it is the same concept/topic in a different setting, not different enough to become a duplicate), and it would be nice to have more ways to bundle them (just some additional duplication/copy/linking option) rather than having them swimming around separately.
What this would potentially achieve is that a lot more questions will get attention and will work together synergetically instead of sort of ‘competing’ against each other. (On Wikipedia, when initially looking up a single question/topic, I often end up clicking on several links, filling a browser window with tabs, and spending several late hours reading. On StackExchange this never happens.)
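A rough sketch of what such a soft, typed linking structure could look like (the link types and question IDs are placeholders, not a proposal):

```python
# Hypothetical sketch: typed question links stored in both directions, so the
# older question also "sees" what has been linked to it (unlike SE's one-way
# duplicate link), and neither question gets closed.
from collections import defaultdict

outgoing = defaultdict(set)  # question id -> {(link_type, target id)}
incoming = defaultdict(set)  # question id -> {(link_type, source id)}

def link_questions(src, dst, link_type):
    """Link src to dst without closing either question."""
    outgoing[src].add((link_type, dst))
    incoming[dst].add((link_type, src))

link_questions(4321, 100, "variant-of")  # a homework-style variant...
link_questions(4322, 100, "variant-of")  # ...and another one
print(incoming[100])   # the canonical question lists all its variants
print(outgoing[4321])  # each variant points back to the canonical one
```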
Another example from statistics.SE: there’s an option to tag questions as homework questions. This provides a way to treat different types of questions on the same site. But it could be organised a bit better. Maybe not necessarily hierarchical, but… you have to come up with something when you wish to make it work for a wider range of quality/interest levels.
So that is the hierarchy I was thinking of: to have the possibility to have homework questions and link them to some canonical question; this would be just coupling two already existing functionalities on StackExchange. So technically it might not need to be so difficult. I agree with you that it might be more difficult for the community to organize the posts/questions, but this also depends on the goals of the particular community (personally, I would have little interest to grow the number of questions, but instead I would wish to improve and organise the current questions and answers).
I would agree that the suggestion I made about hierarchy may not be the best. But is there something better? Or… is the solution to do nothing different at all?
To get back on topic: do we want to build a Q&A site that is a vanilla copy of Stack Exchange / Stack Overflow (which I believe is only suitable for mono-culture communities and does not allow a mixing of “content from anyone”)? Or do we want to build a more modern adaptation that improves on SE/SO’s flaws?
It is not so clear to me, in all these dozens of posts (also in other topics), in which directions we wish to deviate from Stack Exchange / Stack Overflow. Obviously there is the management issue and related stuff like licensing, and we need to discuss ways to import and export data/content. But are there other ways in which the site ought to be different? Those other ways are being discussed on a technical level, but it is not made very clear whether they are actually going to be necessary. When the Codidact community is not clear about this, a lot of the discussions end up long, repetitive and confusing.
Maybe we should quickly get some clarity about potential users, such that these discussions about features can be much more streamlined (goal/user-oriented rather than hypothetical)?