Proposed Code of Conduct

Yes, I did. And I stand by it. But that has nothing to do with discriminating against certain “undesirable” demographics. It is, as Monica pointed out, all about discriminating against people who employ certain undesirable behaviors.

Yeah, we ended up with a fairly similar policy on Christianity.SE. It was the only reasonable way to keep order on the site, and as a member of a fairly marginalized group within the Christian world, I personally appreciate it. It’s an idea that’s stood the test of time and done well for our community. If the same idea works well on Buddhism.SE, that would seem to suggest that it’s a good idea in general.

3 Likes

Fair enough – I originally wrote something much stronger and then backed off. People can have all sorts of opinions, and if they express them respectfully then that’s fine even if they’re wrong (excuse me, if I disagree :slight_smile: ), but what’s not OK is weaponizing outrage to shut down people whose perspectives differ from yours.

2 Likes

Well, that “idea” was actually what the “community” said they wanted – i.e. the people who created the site off Area51 and were then active on Meta. Even quite early, they asked meta-questions about (and proposed solutions to) several kinds of problems, including sectarianism. They were concerned about that – I guess from their experience of previous sites. And now those are my marching orders (the policies which guide me and which I try to implement for them).

One of the original moderators told me that questions have a “slope” – some questions incline towards being easy to answer, and some incline towards flame-wars (including questions which ask for a naive comparison between schools) – so it’s better not to ask the latter.

So anyway, selecting a topic is a “first line of defence” IMO – if it’s a Q&A site like SE and not a chat site. A lot of the personal views or differences which people might get heated or defensive about are simply off-topic – I don’t even see it as bad behaviour, just off-topic content to excise or wipe.

As a site and community, we also – which is unusual/unorthodox for SE – chose to discourage users from posting questions in order to self-answer them, in case self-answering might be used for preaching or spamming. If you want to self-answer, you can post elsewhere (on a blog, on YouTube, etc.); we stick more strictly to the “Q&A” format and user scenario.

Well yes.

But even respectfully that’s still only within limits.

Because some (or many) personal opinions are about topics that are themselves off-topic, IMO.

Buddhist doctrine (forgive me again) about “Right Speech” implies that a statement’s being “true” (or even just arguable) isn’t a sufficient justification – that it must also be, “said at the proper time”, etc.

And users – people – are kind of off-topic, IMO, as I’ve mentioned.

There’s an English joke, an anecdote – apparently Oscar Wilde said he could, “talk about any subject”. Someone asked him to talk about “the Queen”: and he replied, “the Queen isn’t a subject”.

So as moderator I can tolerate conversation while it’s friendly, but there’s no requirement to permit it.

Yes.

I do see it as unskilful to participate in outrage, and ditto to “weaponize” anything – be it words or moderators’ tools.

The cessation of negative emotions (like anger) is a core topic of Buddhist doctrine, which includes a lot of analysis of what might “condition” them.

1 Like

Yes, sorry for not being clear. Off-topic is off-topic. I meant that if people are being respectful, then merely expressing a perspective that one disagrees with is not grounds for removal or disciplinary action. Diversity means sometimes encountering perspectives that are foreign to you or even (from your perspective) wrong, and we should not write a code of conduct that can be weaponized against the other person.

Judaism also has expectations about speech that go beyond the societal baseline. Lashon hara, evil speech, is a real thing to us, and encompasses way more than direct personal attacks. As in Buddhism (as I understand you), it’s a personal requirement, not something we think we can impose on the world at large (even if we think the world would be a better place if everyone followed these principles).

4 Likes

Hi, yes, I’m one of those too, on multiple counts.

To be honest, a CoC can only be weaponised if you let it. If instead you are clear about what the CoC says and means, and that it is to be followed in spirit rather than to the letter, then any troll trying to weaponise it has no leg to stand on.

You’re also making a whopping assumption that I’m not a member of any such marginalised groups, and thus that I’m white-knighting rather than representing my own interests.

A Code of Conduct is not law. This is not anything like real-world revolutions. These are words on the internet, not bullets in real life. Don’t over-dramatise. You still seem to be under the impression that I’m recommending we use a long-form, exhaustive-listing CoC like the Covenant: I’m not – we’re talking about a few common-sense rules that I wouldn’t expect anyone here to run afoul of in any case.

@cellio - the CoC that’s in the wiki was based on your last proposal, with a few small changes, so the two are fairly similar. What do you make of the changes?

For future context (should the wiki version change later), here’s the text we’re talking about:

This Code of Conduct applies to all online spaces run by the Codidact organisation, as well as official physical meetups of its community.

Be nice; be respectful

We’re deliberately not setting out everything that is and isn’t allowed - bring your common sense, and apply the spirit of this Code. The following are definitely not okay:

  • Rudeness
  • Hate speech
  • Ad hominem attacks, whether directed at individual people or groups
  • Any kind of harassment, for any reason

Always be constructive, especially when giving feedback. Always presume that others are acting with good intent.

If you see anything that appears to be a violation of this Code, flag or otherwise report it. We take reports seriously, and although we may not be able to follow up with you, we will take any necessary action. We’ll keep your identity private.

This seems like a good compromise to me. The intro paragraph about common sense is important, as is the part that says we actually follow up on flags (confidentially). (Still not sure about the part about physical meetups, but meh.)


But the more concrete details you specify, the easier it is for the code to be weaponized, and we have seen this happen. A code that’s more about principles, paired with moderators to make those human judgement calls and take action, is far more effective at supporting a community – which is why we’re all here.

People are complicated and multi-faceted, and that means well-intentioned people will sometimes say things that other well-intentioned people object to. When that happens, the best outcome is respectful mutual education, not a big stick. When a community is aware of an issue and can respectfully discuss it, we can collectively find ways to meet everyone’s needs. When some people get to instead threaten others, that fact alone makes it more difficult to even have that respectful discussion/problem-solving – as we have seen.

7 Likes

I’m very happy with the currently posted CoC, and I hope the spirit of it does not change… well, ever.

In that vein, what are your thoughts about making an accompanying statement saying that no revisions to this CoC will go into effect until the community (or at least the moderators) gets a chance to discuss and vote on them?

Sort of a “government of the people, by the people, for the people” concept.

1 Like

How will this be handled in relation to this MSE post?

Veterans shouldn’t be harassing new users, new users shouldn’t be harassing veterans, nobody should be harassing anybody. That’s clear. What’s less clear is what will and will not be considered harassment.

Is telling a user they did something wrong harassment?
Is complaining about another user’s comment harassment (and at what point does it become an ad hominem attack)?
On SE lately, constructive feedback is too often assumed to be negative, damaging, and unwelcoming. Some of it could indeed be better, but there are a lot of false accusations going on as well. How are we going to handle that?

We need to prevent toxicity without going down the rabbit hole of codifying everything we can think of. This site will eventually attract both trolls and overeager SJWs offended by our mere existence. I think this place wants to cater to the middle and reject the outliers, but how are we going to do that?

1 Like

It appears that you are referring to a prior version. Can you look at the final version and say whether the issue still exists?

Yes, I think it still exists. The current version states:

  • Any kind of harassment, for any reason

Always be constructive, especially when giving feedback. Always presume that others are acting with good intent.

So pointing out harassment in a negative manner (a case where whoever points it out has ill intent) is unacceptable. Good. But it’s not always that easy to see the difference: one can attempt to constructively point out a flaw and still be perceived as harassing, as we’ve found out. Hence my previous post.

1 Like

Ah, I see now what you’re referring to.

First of all, there are intentionally no hard rules, because they can lead to “rule-lawyering”, where people argue that their behavior is not caught by the rules even though it is obviously against their spirit:

We’re deliberately not setting out everything that is and isn’t allowed - bring your common sense and apply the spirit of this Code.

Whether something is acceptable or not is ultimately decided by the moderators/admins handling the reports. That decision can then be appealed in a meta category, which we plan to implement. Here and on Discord, private messaging is available and a meta channel/category already exists, too. We also plan to have a “Moderation Review Panel” to which one can appeal.

Of course, these people are also bound by the same rules: constructiveness, presuming good intent, and common sense.

If you see anything that appears to be a violation of this Code, flag or otherwise report it. We take reports seriously, and although we may not be able to follow up with you, we will take any necessary action. We’ll keep your identity private.

In most cases where some users complain but the matter isn’t clear, the content will either be “sanitized” (the problematic parts edited out) or deleted, without any further consequences. If a moderator sees a flag where the reason the content is considered rude isn’t clear, they’ll probably follow up with the flagger and ask them for an explanation.

3 Likes

Agreed. List policy interpretations that have come up, but keep them in a separate document, and make it clear that the list is not exhaustive.

Even with a minimalist CoC to the effect “be nice, do not harass, presume good faith, flag any serious violations and staff will deal with them,” there still needs to be agreement between staff and the broader user base on how staff would handle a report from an eager social justice advocate. Consider the following, for example: “(username) and (username) have consistently referred to me as they and them and refuse to use ze and hir after being corrected. I consider this behavior to be an ad hominem and hate speech against the group ‘nonbinary people who use neopronouns.’” Lack of agreement on how to handle this led to this project in the first place.

So a new user could look at the public moderation guidance meta posts linked below the CoC and choose to join or not to join a site based on the presence or absence of policy interpretations like these:

  • “Intentional misgendering is harassment.”
    I would accept this.
  • “Use of singular they and them as a substitute for a neopronoun is harassment.”
    I imagine many users following this project would not join.
3 Likes

This is the problem, rather than the issue of specific pronouns.

Once any user has apparently refused to abide by the “be nice” policy, a conversation should happen, hopefully first between the offended and the offender. That conversation attempt could result in various outcomes, including:

  • Offender didn’t realize what they were doing
  • Offender did know what they were doing, but will try to be nice in the future
  • Offender points out how they were misunderstood
  • Offender raises reasonable concerns about their own ability to comply with offended’s request
  • Offender refuses to communicate about the issue
  • Offender makes it clear that they don’t care about offended

All of these outcomes provide something for a moderator to work with further, hopefully bringing people together in the process.

4 Likes

The proposed and edited version at the top is significantly different from the posted one, specifically in its inclusion of undefined “hate speech.” This seems to have come out of left field. I’ve read through the comments, and even did a text search to make sure I’m not missing something. I posted this already in Discord, but that term seems very foreboding and subject to the same abuse in defining “hate speech” that SO is susceptible to. Why is the final version not more like the proposed and discussed version?

I’m currently wrestling with whether to start contributing to the project (I’m eager to use my skillset to help create an alternative to SO), but I’m looking to protect myself from another disappointing situation like the one SO is having, and the inclusion of undefined “hate speech” here is an immediate barrier to committing myself. The proposed version looks great.

1 Like

@ArtOfCode can you address this please? It’s a reply to me but it really belongs to you. Thanks.

That has been discussed. Have a read through the whole of this thread – an alternative wording to my original was proposed in the middle of the discussion, which had more support than the original, so that’s what we used instead after discussing and finalising it.

The enforcement of this will fall to moderators on the sites we host; here on the forum and other organisation spaces, it falls to the leadership team. Like the rest of the code, it’s to be applied in spirit, not in letter.

2 Likes

Hate speech is not an undefined term IMO. It’s speech (here: written text) that harms or threatens to harm a group or an individual. That’s the dictionary definition.

Anyone with malicious intent can try to weaponize or abuse any CoC. No amount of definition is going to prevent that when the person is really determined to do so. The consequences may differ, though, depending on whether that someone is a server user or an admin.

No person aiming to be reasonable, constructive, and friendly should end up being targeted by correct application of that rule.

2 Likes

Alright, thanks for the reply. I didn’t realize it was closed for discussion at this point.

I wish you guys and gals success in your endeavor.

I just realized this while reading the updates in this thread:
Everywhere else it’s “we provide the technology, communities govern themselves”.
Here, though, we want to impose certain rules on how communities have to do things.

Don’t get me wrong, I like the CoC, but this is inconsistent. Maybe we could offer this to communities as a sort of standard?

What if they don’t?
Are there going to be Codidact overlords who punish communities if their moderators don’t manage to implement it well enough?
Those are some muddy waters we need to clear first, plus we’ll need to work out another layer of management on the technical side (moderators of community moderators).

2 Likes

My understanding is that the CCC (Codidact Code of Conduct):

  • Is a policy, not software code, therefore…
  • It applies to the development of Codidact (this Forum, Discord, Github, etc.)
  • It applies to the primary instance which will be operated by Codidact, including all communities included in that instance
  • It does NOT apply to any other instances of Codidact hosted/run by any other group of people (or even the same people - e.g., if some people help develop Codidact but run their own separate not-part-of-the-Codidact-organization instance)

So yes, there will be some top level of Codidact management that will oversee the communities hosted on the primary Codidact instance. Should they determine that a community is violating the CCC, and the two groups (Codidact and the Community) are unable to work out a solution, then the Community would no longer be hosted on the primary Codidact instance, but they would be given full access to their community data to take it and host it elsewhere, either in their own Codidact instance (“easy” as far as the data goes) or to do whatever else they want with it.

4 Likes