I posted this as a response to Molly O's post notifying us about content protections, but I think it got missed in there. I think that it is important to continue the discussion, so I'll repost it here.

This post is in response to the article @Ted wrote here.

Thanks @Ted for bringing clarity to the Access Protections question.  While I commend the idea behind restricting R content to KYC users to protect children...I think this is a big mistake.  As a content creator you would be INSANE to flag anything as R, even if it genuinely is, because it would drastically reduce your available audience right out of the gate.  I feel very confident that the vast majority of people visiting the site to read content will be anonymous rather than members, let alone KYC members.  Something this drastic should only be implemented once it becomes an issue.  You could lower the barrier to entry by forcing an age check within the signup process, absolving yourselves to some degree, the way liquor sites handle underage visitors.  If an anonymous visitor came in to see 'R' content, you could force the same age check and store the result in a cookie.
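
To make the cookie idea concrete, here's a rough sketch of how an anonymous age gate could work.  The cookie name, expiry, and rating value are placeholders I made up for illustration, not anything the team has described:

```typescript
// Hypothetical age-gate helpers for anonymous visitors. The cookie name,
// re-prompt interval, and rating strings are assumptions for illustration only.

const AGE_COOKIE = "age_attested";   // assumed cookie name
const COOKIE_MAX_AGE_DAYS = 30;      // assumed re-prompt interval

type AgeGateResult = "allow" | "prompt";

// Decide whether to serve the content or interpose the age prompt.
function checkAgeGate(cookies: Record<string, string>, contentRating: string): AgeGateResult {
  if (contentRating !== "R") {
    return "allow"; // unrestricted content passes straight through
  }
  return cookies[AGE_COOKIE] === "true" ? "allow" : "prompt";
}

// Build the Set-Cookie header value once the visitor attests they are 18+.
function ageAttestationCookie(): string {
  const maxAgeSeconds = COOKIE_MAX_AGE_DAYS * 24 * 60 * 60;
  return `${AGE_COOKIE}=true; Max-Age=${maxAgeSeconds}; Path=/; SameSite=Lax`;
}
```

A visitor who passes the prompt wouldn't be asked again until the cookie expires, so the friction for anonymous readers stays at a single click.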

    While content should certainly have ratings, rating is such a subjective thing.  For some content, the community is much more likely to agree, like porn, erotica, etc.  The flip side to that, of course, is 'adult' articles...how about an article about war? Crime? Violence?  Those are certainly adult topics, yet likely not as offensive as the 'porn / NSFW' flag.  When you have to lump everything Adult into the same category as 'erotica', and put a KYC restriction on it, I think that is a big problem.  A compromise would be to create a 4th rating, NSFW (not safe for work, or NC-17).  I would be fine with putting that content behind KYC, but to be honest, I don't really see people browsing that kind of stuff when they aren't anonymous anyhow...so I imagine that would gradually kill off that type of content.

    Quality Rating - I understand the purpose of the quality rating...I'm just worried it will potentially be abused.  If it functions essentially like a Reddit upvote/downvote, you could easily see people just downvoting the quality of stuff they don't like.  I bring up the obvious political climate: if someone writes a pro-Trump article and the site is full of mostly liberal users, they would likely hammer the article on Quality.  Conversely, the same could be said of any hot-button topic in our current culture.  You know...as I'm typing this it just occurred to me...a brilliant solution would be to just have upvotes for Quality.  This way, an article couldn't get buried by a bunch of haters.  The article would stand on its own merit: if people enjoyed it, they would vote to increase the Quality score of the content; otherwise, an unpopular article would languish in obscurity, which I think is the ideal scenario.

Original Post

Thank you @banter. I have to admit I saw some issues with the potential system too, so I am very happy you brought this up.

I am not crazy about the downvote. My concern, like yours, is that it will be used in a targeted fashion against people, with nothing to do with quality. In theory I like the idea of quality, but perhaps it should be an additional thing, like how Medium has claps...maybe the quality vote is an extra boost purely for quality. Of course this can also be abused based on subject matter, but it would be less harmful.

Here is a direct example of how I think the downvote for quality gets abused. Writergal comes along and posts an opinion piece on the #MeToo movement and the scandal around Louis C.K.  She posts it to Feminism, Equality, and Social Justice. Well, hypothetically there is a niche fanbase for Louis C.K., and they think the claims are "fake news", don't want to believe them, and all target Writergal's article to be downvoted for quality. Now she wrote an excellent article, fact-checked, grammar-checked and spell-checked, but her reputation has taken a hit, her article has taken a hit, and the three niches took a hit.

This doesn't seem too unrealistic to me.

Just gonna tackle one point of your post at a time.

What do we think about moderator involvement in the quality vote?

If folks have to quickly grade on a few criteria from many (say a minimum of 2 criteria), then moderators could be alerted by an algorithm to posts that are receiving uncharacteristic quality votes, and could penalise folks who are giving blatantly disingenuous scores.

For instance, on a piece which moderators approved for the niche specifically because it was so well researched and clearly written, they could give a rep penalty to people handing out abysmal scores in 'Clarity' and 'Research quality'?

The combination of a moderator possibly discovering a dishonest quality vote at any time, and the quality voting demanding a tiny bit of thought about criteria, would heavily dissuade abuse.  This would not have to overload moderators: even nabbing people only occasionally would be a strong dissuasion, in my view?
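
To sketch what such an alert might look like - with criteria names, sample size, and thresholds purely invented for illustration:

```typescript
// Illustrative outlier check: flag voters whose criterion score sits far
// below the current consensus on a post. Criteria names, the minimum sample
// size, and the deviation limit are all assumptions, not a described design.

interface CriterionVote {
  voterId: string;
  criterion: "clarity" | "researchQuality";
  score: number; // e.g. 1-5
}

const MIN_SAMPLE = 10;       // don't alert moderators until enough votes exist
const DEVIATION_LIMIT = 2.5; // how far below the mean counts as blatantly disingenuous

function flagOutlierVoters(votes: CriterionVote[], criterion: CriterionVote["criterion"]): string[] {
  const relevant = votes.filter(v => v.criterion === criterion);
  if (relevant.length < MIN_SAMPLE) {
    return [];
  }
  const mean = relevant.reduce((sum, v) => sum + v.score, 0) / relevant.length;
  return relevant
    .filter(v => mean - v.score > DEVIATION_LIMIT)
    .map(v => v.voterId);
}
```

Moderators would only see the flagged voter ids; whether to actually penalise anyone would still be a human call.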

I still like the idea of the 'upvote / clap' without the downvote when it comes to a quality score...if you don't get claps/upvotes, your quality score stays low.  The benefit of this is that it puts no overhead on moderators or anyone else to make sure people are behaving / voting in good faith.  The algorithm could be worked in such a way that content or comments that don't reach a minimum threshold of upvotes/likes over a given time period get the submitter's reputation penalized.  These submissions would be categorized as poor quality because people didn't vote for them enough.  An interesting side benefit is that it would encourage people to participate and be more interactive with the site, since voting would really matter.
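
Roughly, I'd imagine the threshold check working something like this (the window length, threshold, and penalty size are placeholder numbers, not anything proposed by the team):

```typescript
// Illustrative upvote-only threshold: submissions that fail to collect a
// minimum number of upvotes within a grace period earn their author a small
// reputation penalty. Window, threshold, and penalty are placeholder numbers.

interface Submission {
  authorId: string;
  upvotes: number;
  ageInDays: number;
}

const EVALUATION_WINDOW_DAYS = 14; // grace period before a submission is judged
const MIN_UPVOTES = 3;             // quality threshold
const REP_PENALTY = 1;             // per-submission penalty

function reputationAdjustments(submissions: Submission[]): Map<string, number> {
  const adjustments = new Map<string, number>();
  for (const s of submissions) {
    if (s.ageInDays >= EVALUATION_WINDOW_DAYS && s.upvotes < MIN_UPVOTES) {
      adjustments.set(s.authorId, (adjustments.get(s.authorId) ?? 0) - REP_PENALTY);
    }
  }
  return adjustments;
}
```

Nothing ever gets downvoted; the worst outcome for ignored content is a small, slow rep drain.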

@Emily Barnett and others, I look forward to your thoughts on my concern regarding the KYC barrier for R rated content.

So, @Banter - first off, thanks for making this thread.

I actually hadn't seen @Ted's article, and this is all important stuff.

I'm tied up with work all weekend so this first post will be brief and in response to something from the article.

"And if creators fail to set a proper age rating their personal reputations will be dinged."

This sentence raises again, for me, the need for there to be several facets of reputation.  These facets can be summed up to create a global reputation score where necessary, but in most cases, the individual facets will be more useful.

"Self-rating" or "Age-appropriateness self-rating" should be a facet of reputation, and only that facet should get dinged when someone fails to match within a reasonable threshold, the crowd's ultimate decision.

There is no reason why failing at this should affect aspects of a user's experience that have nothing to do with the ability to judge age-appropriateness.

I hope that the reputation system is being built with multiple reputation categories underpinning the general reputation computation.  It would be nice to hear from @Brian Lenz or @Michael Farris about this.

Re: downvotes--

Without a downvote option, it becomes very difficult to stop trolls, spammers, and other bad actors. Keep in mind that the impact of an up or down vote is based on your reputation.  Thus, the sheer number of up or down votes is not as important as the reputation of the people doing the voting.
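
Purely as an illustration - this is not the actual formula - reputation-weighted scoring could look something like this:

```typescript
// Illustrative reputation-weighted scoring: each vote is scaled by the voter's
// reputation, so raw vote counts matter less than who is doing the voting.
// The weighting function here is an example, not the real formula.

interface Vote {
  direction: 1 | -1;       // upvote or downvote
  voterReputation: number; // e.g. 0-100
}

function weightedScore(votes: Vote[]): number {
  return votes.reduce((total, v) => total + v.direction * (v.voterReputation / 100), 0);
}

// Example: ten downvotes from rep-5 accounts (-0.5 total) are outweighed by a
// single upvote from a rep-80 account (+0.8), for a net score of +0.3.
```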

@Banter Thanks for your post... and I hear you about the KYC requirement to access "R" content.  It's tough and definitely something we debated internally.

The issue is, if we don't require proof of age for content that targets adults, then we will definitely be exposing such content to children.  I don't even know what the community will consider R-rated content to be, but if the community puts an "R" label on something then, in its opinion, the content should not be available to children.

If we fail to require proof of age then honestly there is no point in even age-rating content at all because all content will always be accessible to everyone, regardless of age.  

There is no doubt that this point is the single most difficult part of the "content access protections" system described in my blog post.  I think most people will want to verify their accounts though, especially if there are monetary incentives to do so (verification leads to higher reputation which leads to more rewards). Ultimately, I think that, while it is definitely a barrier to require verification to access "adult" content, it is necessary if we want to protect children AND there should be sufficient incentives to encourage people to verify.

Appreciate everyone's feedback here!

Ted posted:

Re: downvotes--

Without a downvote option, it becomes very difficult to stop trolls, spammers, and other bad actors. Keep in mind that the impact of an up or down vote is based on your reputation.  Thus, the sheer number of up or down votes is not as important as the reputation of the people doing the voting.

I believe it was mentioned somewhere else that your comments wouldn't even be seen if you didn't have a minimum reputation score...so I think that would prevent the spammers / bad actors, unless they just wanted to waste all of their built-up reputation in a few hate-filled rants.  I still share @Emily Barnett's concern about targeting a community's or an individual's comments.  If you get enough community members with good reputation to simply downvote their adversaries' content, they wouldn't be penalized, because with enough of them doing it, it would look like a large group of people genuinely thinks the content is poor quality, even when it really isn't.

I guess one possibility that @Malkazoid brought up was involving the moderators.  I think there should be spot checks on comments and posts whose quality is being significantly downvoted, to ensure what I mentioned above isn't happening.

Another possibility, like we have done with Niches, is that if you are going to downvote a comment or piece of content, you have to supply a reason.  It shouldn't be as simple as clicking a thumbs down.  That would also raise the barrier beyond just clicking 'downvote' and require some thought behind it, which could certainly be verified.  If someone is downvoting something as spam, and it clearly isn't, then you could just penalize all of those downvotes.

This is so tricky...I am still sticking with the line of discourse just on downvotes for the moment.

Ok so Steemit has this, and it gets abused by some whales and even the owner, Ned. I just saw a whole article on Steemit about the abusive nature of the downvotes. Basically, for anyone who may not know: only people with a higher reputation can downvote...so the people with clout have all the power to demonetize a post. This happens on Steemit, for competitive reasons. The article I read was about a guy criticizing Steemit for not doing anything to improve the platform since it launched...the owner Ned downvoted him and the guy lost all his earnings.

I think, especially given that this already happens on existing platforms for censorship and monetary oppression, the team really needs to get this right. And it really needs to include your community. If the systems just come from the team without serious input from your early adopter members, I don't feel you will be getting a very good cross-section of information.

 

In terms of the downvoting I believe that no matter how many fail-safes you put in place it could still be abused to some extent. 

I like the idea that you cannot downvote something but only upvote - but I think the flag needs to exist for Spam and trolling. But again, even that can be abused. 

I would envision it working something like this though:

You have Niche Owners, and Moderators - So, if a particular piece of content gets flagged, it goes to the moderators/niche owners inbox, the moderator can see how many flags it gets and can decide to ghost/delete the content - the niche owner would get final say though I suppose?

However, this is still getting into very murky water, because then you can get into censorship that isn't warranted - for example, if a Niche owner/moderator really doesn't personally like a point of view in someone's writing, they could flag it and make it go away... But then that could be appealed to the upper appeals process, right?

As to rating content - I can't see any real way to do this... I'm guessing it's to limit liability, right? But really, if a 13-year-old kid wants to read content, he'll find a way around the system anyway. Also, people self-rating their content could inadvertently limit their audience.

Example: R ratings are very different in the US and say France. 

How would the KYC even work for content? As an outside user they may or may not want to sign up for the platform to read content, so they will just be blocked then? 

Most websites put up that age prompt, but any kid with standard math skills can bypass it.

@chrisabdey raises something I'd been silently thinking about: aren't most sites a little less worried about strict rating enforcement?

I'm not saying this because I think it is wrong to try to enforce things more than the average social network does... rather I want to know whether people agree we're really going far above and beyond.

In which case the question is, are we going too far above and too far beyond?

I get @Ted's reassurance that people will be motivated to verify their account with KYC, and that helps a lot.  People accept that to use a social network properly, and in some cases, at all, you have to register.  Once you're registered and you start frolicking in the sea of content, you'll pretty quickly come up against R rated materials and you'll think - hey, all I have to do to see this stuff is to verify my account.  Of course you're reminded in that moment that doing so will also boost your rep and earn activity points...  I think most people will do it.

But what if 'most' is only 65%...  How much do we care about the 35% who don't?  That's a significant amount of traffic being lost for rated materials.

How fraud resistant is KYC?  Honestly asking...

Bottom line

Here's my current thinking.  If you look at the internet as a whole, there's a huge amount of objectionable material easily available to kids.

Generally speaking, households know this and employ strategies to defend their kids.  Filtering software comes to mind.

Should Narrative try to go much further than other networks have, and in so doing, hurt its own growth?

I'm leaning towards no...  but am open to having missed factors that would make it important for us to do so.

I agree with @Banter's argument in his original post: stricter controls should probably only be put in play once something becomes an issue.  This means Narrative can enjoy optimal traffic when it needs it the most: its initial growth phase... but can be ready to step in and restrict things further if need be.

Emily Barnett posted:

This is so tricky...I am still sticking with the line of discourse just on downvotes for the moment.

Ok so Steemit has this, and it gets abused by some whales and even the owner, Ned. I just saw a whole article on Steemit about the abusive nature of the downvotes. Basically, for anyone who may not know: only people with a higher reputation can downvote...so the people with clout have all the power to demonetize a post. This happens on Steemit, for competitive reasons. The article I read was about a guy criticizing Steemit for not doing anything to improve the platform since it launched...the owner Ned downvoted him and the guy lost all his earnings.

I think, especially given that this already happens on existing platforms for censorship and monetary oppression, the team really needs to get this right. And it really needs to include your community. If the systems just come from the team without serious input from your early adopter members, I don't feel you will be getting a very good cross-section of information.

 

Keep in mind that this is not Steemit.  Reputation in Narrative is determined by your actual actions and has nothing to do with how many tokens you own/control.  Everyone is on an equal playing field.  Thus, you should not be concerned about a few people determining things or censoring content.

chrisabdey posted:

In terms of the downvoting I believe that no matter how many fail-safes you put in place it could still be abused to some extent. 

I like the idea that you cannot downvote something but only upvote - but I think the flag needs to exist for Spam and trolling. But again, even that can be abused. 

I would envision it working something like this though:

You have Niche Owners, and Moderators - So, if a particular piece of content gets flagged, it goes to the moderators/niche owners inbox, the moderator can see how many flags it gets and can decide to ghost/delete the content - the niche owner would get final say though I suppose?

However, this is still getting into very murky water, because then you can get into censorship that isn't warranted - for example, if a Niche owner/moderator really doesn't personally like a point of view in someone's writing, they could flag it and make it go away... But then that could be appealed to the upper appeals process, right?

As to rating content - I can't see any real way to do this... I'm guessing it's to limit liability, right? But really, if a 13-year-old kid wants to read content, he'll find a way around the system anyway. Also, people self-rating their content could inadvertently limit their audience.

Example: R ratings are very different in the US and say France. 

How would the KYC even work for content? As an outside user they may or may not want to sign up for the platform to read content, so they will just be blocked then? 

Most websites put up that age prompt, but any kid with standard math skills can bypass it.

As currently proposed, there would be no way for an underage person to access "R" rated content unless, of course, someone with a verified account (who is older than 18) gave them their credentials. Thus, while it is true that an underage user could "find a way", the system would be doing everything it could to limit access to that content.

The way things would work would be that all R rated content would be inaccessible for anyone who is not signed in with a verified (18+) account.  

It sounds like the KYC restriction for R+ content isn't currently up for debate.  If that is the case, then I think anyone who is the proud owner of a Niche devoted to R+ content (like erotica) should have the option of a refund, since this will have a large impact on their traffic.

If we decide to go this route as a community, I think it is imperative that we track, and share with the community, stats on people who arrive at a KYC-restricted article and abandon it rather than verifying their account, if they even have one.

I agree with @Malkazoid's suggestion that we should only put this KYC R+ restriction in place if it becomes an issue once the community is larger.  We have already explicitly said we won't allow porn.  I don't think we need any other safeguards at this point.

I've been a little preoccupied as of late and not able to dole out elegant responses that solve everyone's problems as often as everyone probably wishes I would be...

As suggested above, we probably need a couple layers of "voting" to provide some better control over its potential abuse. There's evaluating content. Then, there's evaluating how people vote on said content. And, the more meta the speculative future-proofing gets, the more micro-managerial the Moderator, or Niche Owner, roles become. 

Perhaps, for egregious trolling, votes can be suspended... or maybe there are two "states" for content: approved by moderators means downvotes aren't an option; submitted to the Niche, but not (yet?) approved, can be downvoted?

Or... is this just over-complicating something that, regardless of the measures put in place, will be gamed in some manner, shape, or form? I mean, when a rat wants in, it finds a way.

I think we need to do due diligence to not deploy something with obvious weaknesses, but all boats take on some water... 

/quick-rambling-thoughts

Ted posted:
Emily Barnett posted:

This is so tricky...I am still sticking with the line of discourse just on downvotes for the moment.

Ok so Steemit has this, and it gets abused by some whales and even the owner, Ned. I just saw a whole article on Steemit about the abusive nature of the downvotes. Basically, for anyone who may not know: only people with a higher reputation can downvote...so the people with clout have all the power to demonetize a post. This happens on Steemit, for competitive reasons. The article I read was about a guy criticizing Steemit for not doing anything to improve the platform since it launched...the owner Ned downvoted him and the guy lost all his earnings.

I think, especially given that this already happens on existing platforms for censorship and monetary oppression, the team really needs to get this right. And it really needs to include your community. If the systems just come from the team without serious input from your early adopter members, I don't feel you will be getting a very good cross-section of information.

 

Keep in mind that this is not Steemit.  Reputation in Narrative is determined by your actual actions and has nothing to do with how many tokens you own/control.  Everyone is on an equal playing field.  Thus, you should not be concerned about a few people determining things or censoring content.

With all due respect @Ted, saying that Narrative is not Steemit means nothing. We are still dealing with, and I am specifically discussing, human nature, which has a strong tendency among a percentage of the population to "beat the system". So unless your reputation system plans to change human nature, I don't see how your position that Narrative is different holds a lot of water. There will be abuse. My suggestions are about mitigating the abuse if possible, and that means looking at patterns that already exist.

 

That said, @Ted, are you suggesting that every downvote is equal no matter what the reputation score of the downvoter is? I was under the impression that higher reputation gave more impact to an upvote or downvote...which is why I brought up the comparison to Steemit. Because then it is a similar situation: Narrative would have reputation whales.

The reputation engine is a black box to us in the community - but I think what I'm hearing from @Ted is that the reputation system will address some, most, or all of these concerns, and that is what sets Narrative apart from Steemit, in a substantive manner rather than a mere statement of difference?

Yes, @Malkazoid  -- reputation will be the linchpin of the system.  I'm confident that our reputation system will be unique and innovative, not in relation to SteemIt, but compared to any other network that I've ever seen.  Reputation is really an afterthought for most systems; it's going to be a defining, core element in Narrative. 
