Hate speech

Service: Narrative


We've had an interesting thread comparing Narrative to Minds here.

I just came across the following article, which revealed a strong far-right political undercurrent on Minds. 

https://web.archive.org/web/20...ork-extreme-content/

The article is a bit old, and it may be that their platform is now less gummed up with neo-nazi content, but it still raises the question of how Narrative would deal with things if the Daily Stormer decided it wanted to become active on the platform.  What are the thoughts of the @Narrative Network Team, I wonder?

There is a tendency for platforms that promise freedom from censorship to attract people who have trouble finding a home for their politics because those politics are imbued with hate and violence.

It seems to me we've been very fortunate so far not to have attracted any measurable interest in Narrative from such folks, but we should be prepared for that to potentially change once we launch the open Beta.

Our Acceptable Use policy mentions users should not engage in bullying, threatening, or incitement of violence.  But how about Holocaust revisionism?  Or racial slurs?

The Acceptable Use policy also doesn't really spell out the consequences if users do engage in the behaviours it prohibits.

This close to launch, I imagine the Team's bandwidth for tackling this in conversation with the community might be limited, but I am very keen to hear what other members think.  


I am on my way to work and cannot fully tuck into this conversation. Plus I need some time to think about all the ramifications. I know, @Malkazoid, you own the niche Racism and I own the niche Equality, so it is safe to say where we actually stand on the spectrums of hate speech, and online bullying for that matter.

But sadly, to have free speech is to have free speech. I am a bit more concerned with publications than niches, because I weirdly trust our initial base to set a positive tone for content quality through our voting system (and in this case I am referring directly to the depth of the content, not how it is written, because that is the bigger value in my opinion).

Hopefully, collectively we can vote down content that contains hate speech and things like Holocaust denial propaganda.

Anyways... I kind of need to ruminate on this one, and read the link you provided when I have a bit more time. That was just my instinctual response.

I am inclined to come down on the side of free speech.  I'm an American, and from what I've seen / read about more restrictive speech laws all over the world, my vote would certainly be to adopt an American approach to free speech.  In my opinion, the best way to combat things you don't agree with / find distasteful is more speech / voting, not censorship.  It drives me up a wall when I see platforms implementing their TOS selectively as well... a news story comes out every day about how Twitter or Facebook is banning another person or content because they don't agree with it politically or find it distasteful, or because enough 'squeaky people' come out and start kicking up dust, even though they are such a small minority.  I think Narrative should let the community take care of this through voting / quality ratings.  If people want to deny the Holocaust in their Niche, then I say let them, even if I find it horrible.  If people want to deny vaccines work, then I say let them.  If people want to say the earth is flat, then I say let them.  Unless there is a specific violation of terms, like pornography or violence directed at an individual, I say it should be allowed.

Also... this just came to mind... I believe Narrative is already doing this, but we shouldn't allow 'down-voting / disliking' on a post, in order to prevent brigading of any sort... if the post violates the TOS, then it should be reported... if you don't agree with the content / find it distasteful, simply don't up-vote it, much like the Medium 'clap' approach.

Banter posted:

I am inclined to come down on the side of free speech.  I'm an American, and from what I've seen / read about more restrictive speech laws all over the world, my vote would certainly be to adopt an American approach to free speech.  In my opinion, the best way to combat things you don't agree with / find distasteful is more speech / voting, not censorship.  It drives me up a wall when I see platforms implementing their TOS selectively as well... a news story comes out every day about how Twitter or Facebook is banning another person or content because they don't agree with it politically or find it distasteful, or because enough 'squeaky people' come out and start kicking up dust, even though they are such a small minority.  I think Narrative should let the community take care of this through voting / quality ratings.  If people want to deny the Holocaust in their Niche, then I say let them, even if I find it horrible.  If people want to deny vaccines work, then I say let them.  If people want to say the earth is flat, then I say let them.  Unless there is a specific violation of terms, like pornography or violence directed at an individual, I say it should be allowed.

I tend to agree.  Holocaust denial, as vile as it is - and going hand in hand with more dangerous tendencies though it does - is not in-and-of-itself a direct effort to foment hatred.

Where I think things become more problematic is when people start expressing hatred against a gender, a sexual orientation or an ethnicity/skin color.  People like to take a freedom of speech stance on those sorts of things too, but when it comes down to a more pragmatic approach, that sort of expression is objectively damaging, not only to the individuals receiving the hatred, but to the fabric of society.

Imagine if pragmatic laws against anti-semitic hate speech had cooled the insanity of the Nazis during their slow ascent to power in 1930s Germany.  When society allows that sort of wave to rise, it can reach a point where it irresistibly spills over into horrific violence.

We also have to give some practical thought to the idealism Narrative displays about Community ratings and the wisdom of the crowd taking care of everything and making everything OK.

These sorts of materials are so repulsive to ordinary people that most will not want to get close enough to them to vote them down.  The people who DO want to hang about this sort of expression will tend to be the people who vote it up.  This effect will intensify the more extremists show up.

Just take a look at youtube video comment sections, when the video is about something that gets fascists all excited - like the shooting of an African American father of three, in front of his wife and children, in the parking lot of a convenience store in Florida.  

I'm not necessarily coming down on either side of this debate yet - just providing you with some counter-arguments.  Right now, we are experiencing what may be a rather misleading period of freedom from this sort of problem.  Decentralized sites are inherently attractive to extremists, and if/when they discover Narrative, we may end up having so many of them here that they become an overly dominant flavor.  There is a reason the author of the article I linked to says he found Minds made him queasy.  Everywhere he turned, unhinged racist propaganda wasn't far away, it seems.  This sort of environment can dissuade people who are not obsessed with race from participating on a platform.  We could find ourselves missing out on good Narrators because we're so welcoming to vile ones.  I can tell you reading that article certainly made me decide not to hang out on Minds, and I do think that's their loss.

So I think there is perhaps more thought required than to fall back on the default 'freedom of speech' angle.  I love the ideal of pure freedom of speech, but I also want to look at where that might take us as a platform.

Banter posted:

Also... this just came to mind... I believe Narrative is already doing this, but we shouldn't allow 'down-voting / disliking' on a post, in order to prevent brigading of any sort... if the post violates the TOS, then it should be reported... if you don't agree with the content / find it distasteful, simply don't up-vote it, much like the Medium 'clap' approach.

Last I saw, I thought they were allowing it, but the weight of your vote was modified by your rep... Am I wrong?

My basic impression was they were using a similar system to the Black Mirror episode... https://en.wikipedia.org/wiki/Nosedive_(Black_Mirror)

Otherwise the only ones who can keep things clean are the mods... 

I find "freedom of speech" as a default to be inadequate, as is the term "hate speech." They're just too poorly defined. Not sure what's going on on other platforms, but I do hope that on Narrative the default will be based on what's legal and what isn't.

Slander, libel, defamation, harassment, and threats aren't legal. And there are a number of new-fangled INTERNET versions of abuse that, at the very least, walk all over privacy rights.  Doxing would be one example.

Most places have ramped up anti-discrimination laws, but might lack consistency. These days, I think it's pretty hard to host a large platform without some kind of access to competent human rights law advice to aid with figuring out TOS.

That said, I'll have to read the TOS again, but with it as a guide, I think that speech that violates the TOS must be reported so it can be removed and dealt with.

If I think it's illegal speech, but it isn't covered by the TOS, I'd be inclined to report it anyway... even if it turns out I'm wrong.

I think it's appropriate to refrain from voting down what I disagree with.

If there's considerable evidence contrary to what's claimed - i.e. the post is misinformation, but isn't illegal to say - then what? My inclination would be to down-vote. But perhaps if I have time to comment, I should do that instead... or both?  Or again, just ignore it?

What I've done on this last bit has been on a case-by-case basis - or perhaps, mood-by-mood... maybe now's the time to impose a firm policy on myself...

Glad this came up before launch!


Having trolled around Minds a bit, the far-right bias is definitely still there. Not sure if it used to be worse. 

No matter the regulations and policy, at the end of the day the tone is set by users and their expectations. Speak out. Speak up. It's the only way. And don't be afraid to get comments down voted by folks who need standing up to.

As for policy, less draconian is both better for the site overall and for real societal change-making IMHO.

@Ledeir's link is pretty funny.

Up voting is intended to get good content to the top, fair enough, but down voting seems to have some potential for abuse, plus it's completely vague.

Useful for burying spam and copy/paste. But as for disagreement I go for just not voting.

I agree with @MichelleG - when commenting to make a needed point, worrying about getting down voted shouldn't be a concern.

MichelleG posted:

Having trolled around Minds a bit, the far-right bias is definitely still there. Not sure if it used to be worse. 

No matter the regulations and policy, at the end of the day the tone is set by users and their expectations. Speak out. Speak up. It's the only way. And don't be afraid to get comments down voted by folks who need standing up to.

As for policy, less draconian is both better for the site overall and for real societal change-making IMHO.

Same here. The frontpage of Minds gives a welcoming impression, featuring high quality posts about unique art, photography, and decentralized social networking. However, clicking around on those posts, it doesn't take a lot of effort to enter neo-nazi territory, learn about the ballistic solution to trespassers, and find out how skin color apparently still matters.

It's disgusting. No wonder their member signup rates are dropping.

I've not visited Minds, but have noted it has a bad rep. I figure what I'm free to say is tied tightly to what I'm free to do - i.e. free speech is protected by rule of law and subject to legal boundaries in return for this protection. Minds is just one platform where some who can't see past the ends of their own noses view free speech as anything goes, and their rights as exclusive - they deny that anyone disagreeing with them has any rights at all. If people aren't signing up there, it isn't surprising.

Slaz posted:
MichelleG posted:

Having trolled around Minds a bit, the far-right bias is definitely still there. Not sure if it used to be worse. 

No matter the regulations and policy, at the end of the day the tone is set by users and their expectations. Speak out. Speak up. It's the only way. And don't be afraid to get comments down voted by folks who need standing up to.

As for policy, less draconian is both better for the site overall and for real societal change-making IMHO.

Same here. The frontpage of Minds gives a welcoming impression, featuring high quality posts about unique art, photography, and decentralized social networking. However, clicking around on those posts, it doesn't take a lot of effort to enter neo-nazi territory, learn about the ballistic solution to trespassers, and find out how skin color apparently still matters.

It's disgusting. No wonder their member signup rates are dropping.

Ok - sounds like I should really take the time to have a look around and see how bad it is there.  I'm really hoping we can take the steps to avoid leaving decent folks feeling nausea from pervasive far-right content when they come to Narrative.

This happens on decentralised platforms because so few mainstream platforms want to host their content - if we want to be the decentralised platform that goes mainstream and achieves mass adoption, we have to consider very carefully that neo-nazi content may not be compatible with that aspiration.

Yet another bullet we may well have to dodge in the coming months.  Let's keep good track of these issues so we can remind the Team of them when needed.

I'm very disheartened by this discussion, because I fear that the concerns are well founded.  I've seen one content platform after another become infested with bigotry, hatred, anti-science screeds, nationalism, and so on.  

I support the First Amendment - which says that the US government may not regulate speech.  But I am not a supporter of "free speech" on private platforms. This simply gives a license to extremists to spread their venomous beliefs.

I would strongly favor seeing the Narrative AUP extended to prohibit hate speech of any kind, with the penalty for violating the AUP being a permanent ban from the platform.  

Hi @Robert Nicholson,

I also don't want to see the platform devolve into what they are describing on Minds.  The problem is that it is such a slippery slope. You will never get full agreement on the definition of hate speech... quite often people think speech they disagree with or find distasteful is 'hateful'.  I feel like bigger systems can kind of absorb the good with the bad because of their size; it would be difficult to overwhelm them.  However, a new budding social network, like ours, or Minds, can easily be inundated with 'hateful content' and seem to be comprised mostly of that.

So the question becomes: how do you protect Narrative and help it grow to a size that truly reflects society, such that there will be some people spewing hate, but they will be drowned out by the rest of society that doesn't, and it will be easy to ignore them?  I'd like to rely on the 'wisdom of the crowd' to determine quality and get rid of some of this stuff.  I am also fearful of those same mechanisms being used to bury speech you don't agree with, especially religious / political speech.

I feel like if we are going to downvote something, whether it be a post or a comment, you should have to provide a reason, much like you do with Niche voting.  If you mark something as 'hateful', or whatever term we come up with, and it is determined by the mods to not be hateful, then the person doing the incorrect categorization should take a reputation hit to prevent them from just flagging everything under the sun they don't agree with.

Banter posted:

...

I feel like if we are going to downvote something, whether it be a post or a comment, you should have to provide a reason, much like you do with Niche voting.  If you mark something as 'hateful', or whatever term we come up with, and it is determined by the mods to not be hateful, then the person doing the incorrect categorization should take a reputation hit to prevent them from just flagging everything under the sun they don't agree with.

This makes sense to me, having to provide a reason for why you downvoted. 

If you have to provide a reason, which can be verified in most cases, then you are less likely to simply downvote because you disagree...


Ok - so from my quick jaunt around Minds - it turned out to not be quite as bad as I expected.  Extremist views are definitely over-represented, but not to the degree I had imagined.

It may be that Narrative will suffer a bit less from it because our initial members don't seem to be cut from that particular cloth, so we'll start with something of a balanced first impulse.  

I think I agree with @Robert Nicholson though - forbidding hate speech would be a good precaution.  Having that in the AUP from the outset doesn't mean enforcement has to take place at the first signs of extremist presence - but if things start to get out of hand, a pre-determined plan of action can kick in without anyone being able to say the rules were changed.  

@Banter's comments are important - but ultimately that slippery slope exists with all terms of the AUP, not just hate speech.  Where does bullying cross the threshold?  Where do threats cross the threshold?  Where does inciting violence cross the threshold?  The line is not necessarily well defined, and yet we do need to have language against these things.  Hate speech is no different in my view.

Judgement calls will have to be made, and I think the Team/Tribunal needs the AUP to allow them measures if they decide a threshold has been exceeded.

Ledeir posted:

This makes sense to me, having to provide a reason for why you downvoted. 

If you have to provide a reason, which can be verified in most cases, then you are less likely to simply downvote because you disagree...


Yes - this idea gained a lot of community support in earlier discussions.

Robert Nicholson posted:

I'm very disheartened by this discussion, because I fear that the concerns are well founded.  I've seen one content platform after another become infested with bigotry, hatred, anti-science screeds, nationalism, and so on.  

I support the First Amendment - which says that the US government may not regulate speech.  But I am not a supporter of "free speech" on private platforms. This simply gives a license to extremists to spread their venomous beliefs.

I would strongly favor seeing the Narrative AUP extended to prohibit hate speech of any kind, with the penalty for violating the AUP being a permanent ban from the platform.  

I agree with you wholeheartedly. Being part of a group that is constantly targeted by hate speech - and people get away with it because TPTB are so biased by ableism that they don't consider the slurs and the talk of eugenics to be hate speech - I'm on board for a full-on hate speech ban. Slippery slope discussions don't interest me. Although, because there is confusion about what constitutes hate speech, I'd be on board with a "three strikes" rule that provides a margin of error if the Tribunal goes too hard on someone one time.

I'm not convinced that downvoting something simply because you don't like it/don't agree is necessarily a bad thing. We won't be expected to justify liking content. Seems like a burden for interactions and moderating.

Gord posted:

I'm not convinced that downvoting something simply because you don't like it/don't agree is necessarily a bad thing. We won't be expected to justify liking content. Seems like a burden for interactions and moderating.

The reason is quite important to prevent brigading.  While a bit more cumbersome, it prevents people from downvoting stuff they don't agree with.  The quality score of a post / comment shouldn't be affected simply because a bunch of people from a different political / religious / <you name it> perspective don't like the content.  If someone doesn't agree with / like the content of a post / comment, they have several options: (1) do nothing, (2) add their own comment, or (3) downvote if it is truly violating some site standard.  Good quality content will naturally surface because people will vote for it.  Bad quality content / comments will just stay neutral near the bottom.

Gord posted:

I'm not convinced that downvoting something simply because you don't like it/don't agree is necessarily a bad thing. We won't be expected to justify liking content. Seems like a burden for interactions and moderating.

My personal thinking on this is that we tend to favour constructive criticism in real company.  When you are with real people face to face, it isn't generally acceptable to just say: "I don't like your work", with no further information.  Do that, and people will just think you're a jerk and shun your company.  

In order to improve in quality, online interactions need to become a bit more like face to face ones in some ways.

If something is worth downvoting, then it is also worth providing a minimum of words to say why.  Do we actually care that someone doesn't like something, without any additional information?  I don't.  I think it is pretty useless because if they downvoted because they disagree, that's a completely different consideration to downvoting because they found something confusing.  

In practice, I don't think people will be upset that they have to add a few words to a downvote.  On the contrary, if you dislike something enough to take action, chances are you will be happy to be able to vent more explicitly about it.

The thing I am a little concerned with is I don't want this sort of functionality to dissuade people from commenting.  If you've already written a short sentence on your downvote, you might not then comment to discuss what you didn't like in more detail and interactively with other people.  That would be a big loss.

So perhaps the downvote needs to be a multiple choice from things like:

"Low quality"

"I disagree"

"Annoying"

"Far-fetched"

"Absurd"

That way there is still much more to say in a comment...

The various responses could have different weights in the quality score.

"I disagree" should not affect the quality score at all.

"Annoying" should carry a lower weight than "Low quality".

Etc...
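To make the weighting idea concrete, here is a rough sketch of how reason-tagged downvotes could feed into a quality score. This is purely illustrative - the reason names, the weights, and the scoring formula are my own assumptions, not anything from the Narrative spec.

# Hypothetical weights per downvote reason: "I disagree" carries no
# penalty, while quality-related reasons count against the post more.
DOWNVOTE_WEIGHTS = {
    "I disagree": 0.0,
    "Annoying": 0.5,
    "Far-fetched": 0.5,
    "Absurd": 0.75,
    "Low quality": 1.0,
}

def quality_score(upvotes, downvotes_by_reason):
    """Return a 0-100 score in which only weighted downvotes hurt the post."""
    penalty = sum(DOWNVOTE_WEIGHTS.get(reason, 1.0) * count
                  for reason, count in downvotes_by_reason.items())
    total = upvotes + penalty
    if total == 0:
        return 50.0  # no votes yet: neutral score
    return 100.0 * upvotes / total

# Example: 40 upvotes, 30 "I disagree" downvotes, 5 "Low quality" downvotes.
# The disagreements are ignored, so the score stays high (about 89).
print(quality_score(40, {"I disagree": 30, "Low quality": 5}))

With a scheme along those lines, a brigade of "I disagree" votes leaves the score untouched, while genuine quality complaints still drag it down.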

Since it's up to the community to decide what quality is and promote it, it seems to me that real comments, and even discussion, are kind of expected.  I mean, on topics such as politics, it's obvious there'll be all kinds of differing/clashing opinions and claims - down voting here defeats the whole purpose of communicating - and just because there's disagreement doesn't mean there's a lack of quality.

But it is a tough nut to crack. Quality is such a vague, relative term, and the means to track it, particularly in terms of rewards,  are pretty limited. 

Upvotes, even ones with comments could be manipulative on the one hand, and no vote at all doesn't reveal if it's due to the opinion there's a lack of quality - or the post wasn't even seen!

So I'm trying to look at it like damage control - 

Maybe don't have a down vote button. Just report if content is breaking established rules, someone has gone too far in the comments, or content doesn't fit topic-wise. My take on the general intent is that moderators should be taking care of this stuff anyway, so reporting instead of down voting lets the niche owners know quickly and directly if it's not getting done.

My 2 cents, anyway.


Colleen Ryer posted:

Since it's up to the community to decide what quality is and promote it, it seems to me that real comments, and even discussion, are kind of expected.  I mean, on topics such as politics, it's obvious there'll be all kinds of differing/clashing opinions and claims - down voting here defeats the whole purpose of communicating - and just because there's disagreement doesn't mean there's a lack of quality.

But it is a tough nut to crack. Quality is such a vague, relative term, and the means to track it, particularly in terms of rewards,  are pretty limited. 

Upvotes, even ones with comments could be manipulative on the one hand, and no vote at all doesn't reveal if it's due to the opinion there's a lack of quality - or the post wasn't even seen!

So I'm trying to look at it like damage control - 

Maybe don't have a down vote button. Just report if content is breaking established rules, someone has gone too far in the comments, or content doesn't fit topic-wise. My take on the general intent is that moderators should be taking care of this stuff anyway, so reporting instead of down voting lets the niche owners know quickly and directly if it's not getting done.

My 2 cents, anyway.


In my experience on other sites, down voting is used mostly when people don't agree with the content.  It's not a measure of quality at all.  

I like the idea of not having down voting at all.

If the content violates site rules, report it.

Ok. I am back from work, rested, and have had a moment to collect my thoughts on this topic and to do some research, including reading the article that @Malkazoid posted.

From the article, I see two points that the author, Daniel Cooper, lays out as problematic with Minds, and that have led to the influx of hate speech on that platform.

Addressing the article, and how Minds isn't exactly like Narrative.

1. In the subtitle he states, "The lack of moderation is a blessing for some of its users." Well, Narrative does have moderators, and I hope, by implementing a few steps, this will have a somewhat diminishing effect on the amount of hate speech that gets approved on niche topics. So far I think Malkazoid's Racism, my Equality, and the niche Soapbox may receive the most hate speech submissions. A niche would need to exist that specifically covers hate speech, and I am currently unaware whether that type of topic exists. Without a niche to tag, within our current system, such content can only be published on personal journals.

2. The other concern he mentions is that these types of posts are being promoted financially through earned awards. There was very recently a topic where we discussed boosted posts - can't remember the title. Anyway, we don't know if this is what featured posts are about, but I suggested that these boosts may very well be subject to the advertising voting mechanism, which may combat the rise of hate speech on the platform.

Implementing Community Standards.

Okay, regarding research: in mid-April 2018, I first brought up the need for regulations on community conduct regarding hate speech, visually depicted hate speech (hate memes), trolling, and standards regarding nudity (I think we need to add doxxing as well). There was some good conversation that started and then fizzled out. I think it is well worth a read if you are new or newish to the platform, because it is quite relevant to this current thread and it demonstrates that this hasn't just been brought up now: Nudity Community Standards Age Restrictions Lets Get This Right The First Time

Also, just this morning, I read this article published by the Medium team that lays down the ground rules for their community. I think it is fair, plainly written, and quite brilliant in its comprehensive coverage. I think that if the @Narrative team has not already done this type of post for launch day, then they need to ASAP. The TOS is more about corporate liability and therefore, in terms of serving as community standards, feels way too loosey-goosey. Likewise, the White Paper and the new Specifications also do not adequately address community conduct standards, which is why we keep slamming into these issues on the community board over and over again - they have not yet been addressed.

My Take On It

In the end, I feel that we are all Content Entrepreneurs on this platform. We are here because we plan to make an income, or partial income, from Content Creation, Moderation, Niche Ownership, or in most cases a combination thereof. That means this is a business for all of us. That also means we can collectively decide to ask someone who is disrupting our growth as a business by spewing offensive hate speech to leave our premises. In other words, the claim that free speech is your right actually comes with nuances that are very much implemented in our society, both online and off. You cannot stop a person who is screaming hate speech on a street corner unless they are reported to the police for disturbing the peace, or for inciting harm or violence against another. But you most definitely can ask them to leave your business. I think it is important for this platform that we establish that free speech doesn't mean you have the right to say anything you want, anywhere, anytime - including on the internet. That is just a fallacy. Hate speech writers can purchase a domain and say whatever they want on their private blog, or in their own place of business. That is free speech.

Possible Solutions 

1. A plebiscite. If we are truly self-governing, as described by the team, then we need to be able to set some standards as a community - democratically, not just by who can argue the loudest or longest. AND if we do that, then both the Narrative team and the community need to accept these standards, as the crowd decided. If this is the route taken, I see no reason (but time) why it cannot be done during the alpha stage. This is the most informed group to date about what is best for the platform, and I suspect it represents a cross section of society just as much as it would six months from now. Then create a community standard from the results of the plebiscite.

2. The team accounts for what the community has stated on this board and creates a community standard that reflects the majority of the input. Publish it on day one, and be open to the fact that it will need to be a work in progress as new issues arise.

3. Put 'no hate speech' into the ToS (if it isn't already there), so that we can vote down niches, or report to the Tribunal niches that turn into vehicles of hate speech.

4. If the Narrative team does none of the first three suggestions, then Owners MUST be able to create individual niche community standards, with some accountability to the 'following' community. This will enable Moderators to reject submissions that do not conform to them.  If an owner does not want hate speech associated with their niche, on a business level, it is imperative that they be allowed to prevent it.

In conclusion.

Narrative as a platform must not take a laissez-faire approach to the community if we do not want our investments (and I mean at every level, not just ownership) to be associated with hate speech. The platform must not impose that upon us, or one of two things will happen: it will fail to grow with the general public, or it will become inundated with offensive content.

Robert Nicholson posted: 

In my experience on other sites, down voting is used mostly when people don't agree with the content.  It's not a measure of quality at all.  

I like the idea of not having down voting at all.

If the content violates site rules, report it.

I do like the simplicity of this approach.  We would still get enough quality data from the upvotes, and reported posts.  

If we do adopt a system of boosting content, that would add more quality data points.  There are no limits on upvotes, so they can be given very liberally.  If people get a limited number of free boosts per week, each boost would carry more weight than each upvote...

At this stage I think I'm leaning heavily in favour of there being no downvoting at launch, if only to keep things simple.

Thanks for laying out the concerns and adding the Medium link @Emily Barnett

How comfortable is the community with liability responsibilities? Where is the line between what's Narrative's (the corporation's) responsibility and what's the Community's? If lawsuits arise, who gets sued? Standards are needed - consistent ones ...

If @Narrative puts hate speech in the TOS, it could be done before launch.

But as a Community standard set by vote, I don't think it can be done until after launch - or at least it could be put together so all can examine it, with the voting left until after launch.

And leaving it to individual Niche owners would likely generate a lot of inconsistency, confusion and multi-level playing fields ...


Malkazoid posted:
Robert Nicholson posted: 

In my experience on other sites, down voting is used mostly when people don't agree with the content.  It's not a measure of quality at all.  

I like the idea of not having down voting at all.

If the content violates site rules, report it.

I do like the simplicity of this approach.  We would still get enough quality data from the upvotes, and reported posts.  

If we do adopt a system of boosting content, that would add more quality data points.  There are no limits on upvotes, so they can be given very liberally.  If people get a limited number of free boosts per week, each boost would carry more weight than each upvote...

At this stage I think I'm leaning heavily in favour of there being no downvoting at launch, if only to keep things simple.


Any votes, up or down, can be manipulated by brigading.

I definitely think a reason-justified downvote would make life easier for moderating, as it can bring issues to the attention of the moderators. I only report posts when I'm absolutely sure something is wrong. I would be more likely to downvote if I was unsure and let the mods/community decide.

The easiest solution is probably to automatically trigger an alert for a moderator any time a post sees too much action (like YouTube does when it temporarily locks a video's hit count). Of course, that would likely result in more work for the mods, as a simple threshold alert will get triggered more often than a downvote (racism) alert or similar.
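As a purely illustrative sketch (the one-hour window and the 50-vote threshold are made-up numbers, not anything Narrative has specified), a threshold alert along those lines could look roughly like this:

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600   # look at activity over the last hour (arbitrary choice)
ALERT_THRESHOLD = 50    # votes within that window before moderators are alerted (arbitrary)

# post_id -> timestamps of recent votes (up or down)
recent_votes = defaultdict(deque)

def record_vote(post_id, now=None):
    """Record a vote and return True if the post should be flagged for moderator review."""
    now = time.time() if now is None else now
    votes = recent_votes[post_id]
    votes.append(now)
    # Drop votes that have fallen outside the time window.
    while votes and now - votes[0] > WINDOW_SECONDS:
        votes.popleft()
    return len(votes) >= ALERT_THRESHOLD

# Example: simulate a burst of 50 votes within a few minutes on one post.
flagged = False
for i in range(50):
    flagged = record_vote("post-123", now=1_000_000 + i * 5)
print("alert moderators:", flagged)  # True once the burst crosses the threshold

The same counter could also be kept per downvote reason (e.g. only votes tagged as hateful), which might cut down on the extra work for mods mentioned above.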

And until I have anything new to add, since I just noticed I'm pretty much just repeating myself, I'll go back to lurking on the thread.

Malkazoid posted:

Ok - so from my quick jaunt around Minds - it turned out to not be quite as bad as I expected.  Extremist views are definitely over-represented, but not to the degree I had imagined.

It may be that Narrative will suffer a bit less from it because our initial members don't seem to be cut from that particular cloth, so we'll start with something of a balanced first impulse.  

I think I agree with @Robert Nicholson though - forbidding hate speech would be a good precaution.  Having that in the AUP from the outset doesn't mean enforcement has to take place at the first signs of extremist presence - but if things start to get out of hand, a pre-determined plan of action can kick in without anyone being able to say the rules were changed.  

@Banter's comments are important - but ultimately that slippery slope exists with all terms of the AUP, not just hate speech.  Where does bullying cross the threshold?  Where do threats cross the threshold?  Where does inciting violence cross the threshold?  The line is not necessarily well defined, and yet we do need to have language against these things.  Hate speech is no different in my view.

Judgement calls will have to be made, and I think the Team/Tribunal needs the AUP to allow them measures if they decide a threshold has been exceeded.

I think that threats or incitement to violence should be zero tolerance - I'm not up on US law, but this kind of thing in public venues in Canada is turned over to law enforcement.

Colleen Ryer posted:

Thanks for laying out the concerns and adding the Medium link @Emily Barnett

How comfortable is the community with liability responsibilities? Where is the line between what's Narrative's (the corporation's) responsibility and what's the Community's? If lawsuits arise, who gets sued? Standards are needed - consistent ones ...

I don't believe there is a liability issue here.  Under prevailing law in the US (Section 230 of the Communications Decency Act of 1996), website operators enjoy broad immunity from liability for third-party content:

"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider"

For more information, see https://www.eff.org/issues/cda230

Ledeir posted:
Malkazoid posted:
Robert Nicholson posted: 

In my experience on other sites, down voting is used mostly when people don't agree with the content.  It's not a measure of quality at all.  

I like the idea of not having down voting at all.

If the content violates site rules, report it.

I do like the simplicity of this approach.  We would still get enough quality data from the upvotes, and reported posts.  

If we do adopt a system of boosting content, that would add more quality data points.  There are no limits on upvotes, so they can be given very liberally.  If people get a limited number of free boosts per week, each boost would carry more weight than each upvote...

At this stage I think I'm leaning heavily in favour of there being no downvoting at launch, if only to keep things simple.


Any votes, up or down, can be manipulated by brigading.

I definitely think a reason-justified downvote would make life easier for moderating, as it can bring issues to the attention of the moderators. I only report posts when I'm absolutely sure something is wrong. I would be more likely to downvote if I was unsure and let the mods/community decide.

The easiest solution is probably to automatically trigger an alert for a moderator any time a post sees too much action (like YouTube does when it temporarily locks a video's hit count). Of course, that would likely result in more work for the mods, as a simple threshold alert will get triggered more often than a downvote (racism) alert or similar.

And until I have anything new to add, since I just noticed I'm pretty much just repeating myself, I'll go back to lurking on the thread.

If a threshold trigger on comments is possible, I agree this would be pretty useful for moderators.

Shouldn't be needed on already moderated posts. And asking the mods directly about a post if it seems iffy, rather than just down voting?

And, too, if unsure, asking a mod's opinion rather than just reporting would be an idea, since I think reporting carries a penalty if it turns out not to be justified?


Colleen Ryer posted:

I think that threats or incitement to violence should be zero tolerance - I'm not up on US law, but this kind of thing in public venues in Canada is turned over to law enforcement.

Yep. No idea about the USA, but this is how it is handled in Canada... That said, we still have a tremendous amount of racism in Canada too. The RCMP was actually started to safeguard Caucasian pioneers against First Nations people and to impart dominance over them through policing measures, so we still have some systemic and historical racism visible in our RCMP today, towards FNP. But that is another topic...

Robert Nicholson posted:
Colleen Ryer posted:

Thanks for laying out the concerns and adding the Medium link @Emily Barnett

How comfortable is the community with liability responsibilities? Where is the line between what's Narrative's (the corporation's) responsibility and what's the Community's? If lawsuits arise, who gets sued? Standards are needed - consistent ones ...

I don't believe there is a liability issue here.  Under prevailing law in the US (Section 230 of the Communications Decency Act of 1996), website operators enjoy broad immunity from liability for third-party content:

"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider"

For more information, see https://www.eff.org/issues/cda230

Thanks @Robert Nicholson - great link! So we only have to worry about getting sued over our own content ...

"Though there are important exceptions for certain criminal and intellectual property-based claims, CDA 230 creates a broad protection"

It's the criminal issues where hate speech crosses the line ... again, not familiar with US law - but I will have to get that way, obviously.

After reading through these discussions I'm really leaning heavily toward no downvoting, at least initially, and seeing how it goes.  If we need to, we can put in a 'report' link for TOS violations.

Colleen Ryer posted:

Oh, I've just read the reputation doc again and see that downvoting automatically triggers a list of choices:

  • Content Violates AUP (Porn, Copyright Infringement, Illegal)
  • Disagree with Viewpoint
  • Not Written in Proper Language
  • Low Quality Content

Here's the link FYI https://spec.narrative.org/v1.0/docs/reputation 

The part detailing how down voting is dealt with is a ways into the doc.

Well that was pretty darn helpful! Thanks @Colleen Ryer
