This is the problem, rather than the issue of specific pronouns.
Once any user has apparently refused to abide by the “be nice” policy, a conversation should happen, hopefully first between the offended and the offender. That conversation attempt could result in various outcomes, including:
Offender didn’t realize what they were doing
Offender did know what they were doing, but will try to be nice in the future
Offender points out how they were misunderstood
Offender raises reasonable concerns about their own ability to comply with offended’s request
Offender refuses to communicate about the issue
Offender makes it clear that they don’t care about offended
All of these outcomes provide something for a moderator to work with further, hopefully bringing people together in the process.
The proposed and edited version at the top is significantly different from the posted one, specifically in its inclusion of undefined “hate speech.” This seems to have come out of left field: I’ve read through the comments, and even did a text search, to make sure I’m not missing something. I posted this already in Discord, but the term seems very foreboding and subject to the same abuse in defining “hate speech” that SO is susceptible to. Why is the final version not more like the proposed and discussed version?
I’m currently wrestling with whether to start contributing to the project (I’m eager to use my skillset to assist in creating an alternative to SO) but am looking to protect myself from another disappointing situation like SO is having, and the inclusion of undefined “hate speech” here is an immediate barrier to committing myself. The proposed version looks great.
That has been discussed. Have a read through the whole of this thread - an alternative wording to my original was proposed in the middle of the discussion, which had more support than the original, so that’s what we used instead after discussing and finalising it.
The enforcement of this will fall to moderators on the sites we host; here on the forum and other organisation spaces, it falls to the leadership team. Like the rest of the code, it’s to be applied in spirit, not in letter.
Hate speech is not an undefined term IMO. It’s speech (here: written text) that harms or threatens to harm a group or an individual. That’s the dictionary definition.
Anyone with malicious intent can try to weaponize or abuse any CoC. No amount of definition is going to prevent that, when that person is really determined to. The consequences may differ, though, whether that someone is a server user or admin.
No person aiming to be reasonable, constructive, and friendly should be able to be targeted by a correct application of that rule.
I just realized this while reading the updates in this thread:
Everywhere else it’s “we provide the technology, communities govern themselves”.
In this case here, we want to impose certain rules upon how communities have to do things.
Don’t get me wrong, I like the CoC, but this is inconsistent. Maybe we could offer this to communities as a sort of standard?
What if they don’t?
Are there gonna be Codidact overlords, who punish communities if their moderators don’t manage to implement it well enough?
That’s some muddy waters we need to clear first, PLUS we’ll need to work out another layer of management on the technical side (moderators of community moderators).
My understanding is that the CCC (Codidact Code of Conduct):
Is a policy, not software code, therefore…
It applies to the development of Codidact (this Forum, Discord, Github, etc.)
It applies to the primary instance which will be operated by Codidact, including all communities included in that instance
It does NOT apply to any other instances of Codidact hosted/run by any other group of people (or even the same people - e.g., if some people help develop Codidact but run their own separate not-part-of-the-Codidact-organization instance)
So yes, there will be some top level of Codidact management that will oversee the Codidact primary instance hosted communities. Should they determine that a community is violating the CCC and the two groups (Codidact and the Community) are unable to work out a solution, then the Community would no longer be hosted on the primary Codidact instance, but they would be given full access to their community data to take it and host it elsewhere, either in their own Codidact instance (“easy” as far as the data) or to do whatever else they want with it.
Exactly as @manassehkatz said. This CoC only applies to places that the organisation owns or runs - so, it applies to all communities on the instance we host, but not to anyone self hosting.
If the issue of moderators not enforcing it comes up, in the first instance our own moderation staff (yet to work out who that is) will simply do the enforcement themselves and communicate that to the community’s moderators. We’re not looking to “punish” anyone, just for consistency. If it becomes a persistent issue such that it’s clear that a community is at odds with our CoC, then we’ll work with that community to migrate them smoothly to self hosting, or to using another platform. No bad breakups.
Why should we look for consistency? If linking to “Let Me Google That For You” is deemed harmless joshing on Philology but unwelcoming rudeness on Cheese Rolling, what’s the problem? Who bar prigs would volunteer for the job of imposing consistent application of the Code of Conduct’s principles upon diverse communities willy-nilly? And if it’s not for each community to arrive at a consensus among themselves on how to apply them in particular kinds of cases, oughtn’t the rules to be written in detail, despite the cogent arguments already made against doing that?
It’d be better, & easier, for Codidact not to have meddling overlords at all; to allow communities to govern themselves on the understanding that if they really do put themselves beyond the pale they need to shape up or ship out.
Devil’s advocate: I’m rather tepid toward a project-wide CoC, but the intention and wording so far has been a common-denominator, generally-worded, spirit-of-the-law arrangement. I don’t believe the intention is to enforce consistency to this degree of detail.
I don’t think the pejorative “overlords” really belongs here, but besides that, it sounds like you’re in violent agreement. The point of the CoC is to define and explain the pale ahead of time, so we all know what to expect and when a community should be split off into its own independent organization.
I don’t think it’s constructive to call an efficient, working word bad without providing a better alternative.
I used the phrase “Managers of Community Managers” before, but I presume you don’t want everybody to have to write that out.
However, IMHO that’s only a symptom. The crux of the issue is that, rather suddenly (in my view), we established here that we’ll add another layer of management. And nothing about this highest level of management has been worked out, functionally, and based on that, technically.
Sure, ArtOfCode made a statement there about some part of how it will function, presumably to the best of his knowledge and conscience - but this is not something that has been established, even less so by the community. I have seen it mentioned neither anywhere in this forum nor in Monica’s specs.
Our instance is responsible to an extent for the content it hosts; therefore the instance has standing to impose rules on its communities (and those communities are free to go elsewhere, and we will help them with that if needed). Our instance is, frankly, the result of a provider imposing rules in ways the community doesn’t agree with; we do not want to be like that. We do not want to be abusive hosts, and we will strive for the lightest touch that gets the job done. We presume good intent and expect it of others; when there is a difference of opinion the first step should always be a cooperative conversation aimed at resolving the difference. We are deliberately leaving a lot to human judgement, checked by transparency – we don’t abide single people making secret decisions.
Who has authority to represent the codidact.org instance? Huge question, and it’s tied into the not-yet-started process of establishing a legal entity (which will require bylaws). Until we have something better, I think the trio of leads here – Art, Marc, and I – are the acting caretakers of the instance, strongly informed by the contributors here.
@tuggyne, I agree with your assessment of the tenor of the discussion so far; that’s why I was surprised, & a little alarmed, to learn that there’ll be “moderation staff” enforcing the CoC by their lights across the board & communicating to—not with—communities’ own moderators. “Overlords” is hyperbole, but it doesn’t sound as if they’ll be very congenial. (I’ve re-worded that bit, however, as I didn’t mean to be calling the current leaders of Codidact “overlords”—or the future leaders if they don’t act like overlords.)
As @cellio said, we are ultimately responsible for the content and the communities we host on our instance. Communities self-hosting a Codidact instance? Not our problem. Communities within our official instance? Definitely our problem. While our intent is to allow communities to self-govern to the maximum extent possible, we can and must be able to moderate anything that we host.
Imposing a simple, common-sense CoC like this on all communities on our instance is not unreasonable; it sets out the ground rules and ensures everyone is aware of what will be moderated and what will not. Again, in the vast majority of cases this moderation will be done by the community’s own moderators. It’s only in the extreme case where a community’s own moderators can’t or won’t enforce this CoC that the instance admins (aka “overlords”) will have to step in. At that point, we need to make clear that the CoC is non-negotiable - so yes, we do have to do some communicating “to”, but it’ll still be a discussion, and it’ll still be collaborative.
@ArtOfCode, that’s already reassuring. Is it then that the instance admins will concern themselves only in the case of a community’s tolerance of gross rudeness, hate speech, &c.; & that different communities will have the liberty of setting standards of behaviour that they feel appropriate for their sites within broad parameters? (What did you mean by “consistency”?)
Communities are free to add to the baseline expectations if they deem it appropriate. For example, certain sensitive topics on some communities require extra care, such as modesty concerns. (I’ve seen this on some religion sites.)
Correct. Instance admins will only step in if the community’s moderators fail to uphold this CoC - again, as @cellio says, if a community wants to augment that, that’s completely up to them. The CoC is a minimum, baseline standard - if it’s insufficient for any given community, part of that self-governance is that communities will be able to recognise that and add to it.
Have you thought about, or do you know about, technological counter-measures - how to deal with persistent trolls (re-posting after they’ve been banned) and/or spam? How much work is that? How can it be automated?
I think that SE’s own counter-measures existed, were multi-layered (i.e. there was more than one), and were a bit of a trade secret (for good reason, since knowing how they’re implemented might help evade them). Will it be a problem in real life? A bad problem?
This isn’t something we’re likely to do in MVP, because it’s not a problem we’ll have in volumes too large for humans to deal with until we’re a little bit more established. Definitely something that’s worth looking at as a feature for v1.1, though.
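To make the idea of automating this concrete: one of the simplest counter-measures of the kind discussed above is a re-post detector that remembers recently seen post bodies and flags near-verbatim repeats (e.g. a banned troll pasting the same text again). This is purely an illustrative sketch - the class and parameter names are hypothetical and nothing here reflects an actual Codidact or SE implementation:

```python
import hashlib
import time
from collections import deque

class RepostFilter:
    """Naive anti-troll/spam sketch: flags posts whose body matches a
    post already seen within a sliding time window."""

    def __init__(self, window_seconds=3600):
        self.window = window_seconds
        self.seen = {}        # body hash -> timestamp when first seen
        self.order = deque()  # (timestamp, hash) pairs, oldest first

    def _hash(self, body):
        # Normalise case and whitespace so trivial edits don't evade the check.
        canonical = " ".join(body.lower().split())
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    def is_repost(self, body, now=None):
        """Return True if an equivalent body was posted within the window."""
        now = time.time() if now is None else now
        # Expire entries that have fallen outside the window.
        while self.order and now - self.order[0][0] > self.window:
            _, old_hash = self.order.popleft()
            self.seen.pop(old_hash, None)
        h = self._hash(body)
        if h in self.seen:
            return True
        self.seen[h] = now
        self.order.append((now, h))
        return False

# Usage sketch:
f = RepostFilter(window_seconds=10)
f.is_repost("Buy pills now!", now=0)      # first sighting: not a repost
f.is_repost("buy   PILLS now!", now=1)    # normalised match: flagged
f.is_repost("Buy pills now!", now=100)    # window expired: not flagged
```

A real system would layer several such signals (rate limits, IP/fingerprint heuristics, content similarity beyond exact matching) and keep the details private, for the trade-secret reason mentioned above.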