Congratulations on the speedy implementation of reputation in the alpha. The secret sauce has been poured!
I'll probably have more thoughts as the mechanism of Rep unfolds (not by disclosure but by observation - I understand that the Team won't reveal any secrets, and rightly so).
But here are my initial thoughts:
We are but children starting our journey on Narrative, and it is fitting that our reps should be low at this stage! Relish the sensation of being puny, and look forward to the journey of growth ahead!
Whilst I don't want to jump to conclusions about how the algorithms stand, I will share a suspicion: they don't yet take outliers into account.
Here's an example. Imagine Tony.
Tony isn't your average user. He isn't good at content.
He just isn't.
Let's blame poor spelling and grammar, hurdles he has struggled with unsuccessfully all his life. Let's blame a lack of sensitivity to story structure and rhythm, coupled with a complete lack of humor and a very regrettable penchant for abusing alliterations. All. Over. The shop.
Most people perceive his pieces as peerlessly pathetic, pitiable portions of pumpkin pie (and not from palatable pumpkins - think putrid pumpkins).
But here's the thing. Tony is the most committed voter on the platform. He takes his civic duty very seriously, and his voting record, both in number AND quality of votes, is stellar. He performs about 15% higher than the person in second place.
How do we value Tony?
If an algorithm hard caps how many points a person can earn for any given category of reputation without considering the role of outliers in that category, Narrative as a whole will lose out.
If the maximum points someone can earn for voting, for instance, is 20, the fact that Tony is a full 15% more valuable to the network than the second most diligent voter - and perhaps 4 times more valuable than the average voter - is completely lost in the equation.
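To make that information loss concrete, here is a toy sketch. The 20-point cap comes from the example above; the scoring function and the raw diligence numbers are my own hypothetical stand-ins, not Narrative's actual mechanism.

```python
# Toy sketch of a hard-capped category score (hypothetical function and
# numbers, not Narrative's algorithm). Raw diligence is scaled so the
# second-place voter lands exactly at the cap.

VOTING_CAP = 20  # assumed maximum points for the voting category

def capped_voting_score(raw_diligence: float, runner_up_raw: float) -> float:
    """Scale raw voting diligence onto the 0..cap range, then hard-cap it."""
    scaled = VOTING_CAP * raw_diligence / runner_up_raw
    return min(VOTING_CAP, scaled)

# Tony votes ~15% harder than the runner-up, and ~4x the average voter.
tony, runner_up, average = 4.6, 4.0, 1.15
print(capped_voting_score(tony, runner_up))       # hits the cap
print(capped_voting_score(runner_up, runner_up))  # also hits the cap
```

Tony and the runner-up end up with identical scores: his 15% edge is discarded at the cap, while the average voter's distance below him is still measured in full.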
Let's imagine that a further 40 points can be earned for content creation and quality, and that Tony scores 2 out of 40. Betty, on the other hand, makes fairly good content fairly frequently, and scores 23 out of 40. She is an average voter and scores 10.
Results so far, barring other aspects of reputation? Tony: 22/60. Betty: 33/60.
These results could dangerously underestimate the value of outliers. In an election for the Tribunal, where diligence matters most, for instance... Betty might seem to be the better candidate, despite her forte being in content creation, and not in civic duties.
Also, crucially, a system functioning in this way will not encourage people to play to their strengths. Tony would see his reputation breakdown, and once he has climbed out of his deep, soul-wrecking depression over it, might decide to cut his voting efforts in half, instead diverting half his time towards meager improvements to his writing. This shift drops his voting performance to good, but not stellar levels (score dropping from 20 to 16), while his content score jumps from 2 to 9.
Tony smiles: he now has 25/60 (up 3 from his previous score).
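The arithmetic of that trade, in toy form. The point values are the hypothetical ones from the example above:

```python
# Specialist Tony vs. "balanced" Tony, using the example's hypothetical
# point values (20-point voting cap, 40-point content cap).

specialist = {"voting": 20, "content": 2}   # Tony playing to his strength
balanced   = {"voting": 16, "content": 9}   # Tony splitting his efforts

print(sum(specialist.values()))  # 22
print(sum(balanced.values()))    # 25 -- Tony personally gains 3 points,
# while the network trades stellar voting for merely mediocre content.
```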
But Narrative should be frowning. Is upgrading terrible content to merely mediocre actually worth ANY loss in Tony's strong suit: voting more than four average users combined, and consistently delivering better decisions?
The network was better off with him as an outlier, doing what he does best.
We wouldn't hire Einstein to be a cook, right? And we'd laugh at the notion of a studio asking James Cameron to split his energies between directing, and working in the studio's legal department.
The mathematical implications of this are simple and powerful: we must allow people to be specialists, with a higher ceiling of points achievable within their own strong points.
Exactly how this is achieved in the algorithm is none of my business. Precisely which balance the architects of this system seek is entirely their prerogative, but it is undeniable that some such balance needs to be struck.
I can say that the tweaking of this balance will best be achieved by systematically graphing outlier data and determining how much Narrative relies on a smaller group of overperformers to achieve excellence in its various metrics.
The more a metric depends on a smaller group of outliers to achieve critical quality, the more that metric must reward the outliers... within reason. The reward would grow with that reliance, but not in direct proportion. And depending on how strongly we want to dampen tendencies towards over-specialization, the algorithms can shave a bit off this influence as a final adjustment in the same function, or another function can work to offset its effects somewhat.
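One way to read that "growing, but not directly proportional" relationship is a category ceiling that rises with outlier reliance, shaved by a dampening exponent. Everything here is my own hypothetical choice: the function names, the top-3 cut-off, the dampening exponent, and the sample figures. It is a sketch of the idea, not Narrative's algorithm.

```python
# Sketch of an outlier-aware category ceiling (all names, numbers, and the
# dampening exponent are my own assumptions, not Narrative's algorithm).

def outlier_reliance(contributions: list[float], top_n: int = 3) -> float:
    """Fraction of a metric's total quality contributed by its top_n performers."""
    ranked = sorted(contributions, reverse=True)
    return sum(ranked[:top_n]) / sum(ranked)

def adjusted_ceiling(base_cap: float, reliance: float, dampening: float = 2.0) -> float:
    """Raise the category cap with outlier reliance, shaved sub-proportionally.

    dampening = 1.0 is directly proportional; larger values shave more off
    the boost, discouraging over-specialization.
    """
    return base_cap * (1 + reliance ** dampening)

# Hypothetical voting-diligence figures: two heavy outliers, then the pack.
votes = [4.6, 4.0, 1.2, 1.1, 1.0, 0.9]
reliance = outlier_reliance(votes)     # top three carry most of the quality
print(adjusted_ceiling(20, reliance))  # ceiling rises above the flat 20
```

The design choice to test empirically would be the dampening exponent: it is exactly the knob described above for deciding how strongly to discourage over-specialization while still letting a Tony be rewarded as a Tony.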
The consequence of not modelling for this is very predictable: overall network quality loss. Our specialists will divert too much energy away from what they are best at, and all areas of Narrative will lose their edge, making much smaller gains at the bottom end than what has been lost at the top.