In response to my post from a week ago, Greg Harmeyer responds:
While an interesting hypothesis, I’m not sure I buy the idea that people don’t collaborate because it reveals the weakest links. By that logic, the strongest links would collaborate and a sort of Prisoner’s Dilemma would cause all to collaborate as much as possible: even though I’d rather not collaborate and hold my knowledge to myself, that makes me appear to be a weak link and thus I’ll share my knowledge to the extent possible. The strongest links have the most to share and therefore are very willing to share it because of the impression it leaves. In fact, I would suggest it’s this kind of rationale that has acted as the basis of incentives in environments where KM is successful. [via Greg Harmeyer’s weblog]
I’m not sure Greg and I are really on opposite sides of this argument. On the one hand, I was talking about reasons why many opt out of traditional collaborative environments: they avoid participating for fear that their lack of contributions will become obvious. Greg is suggesting that the strongest links have the most to share. No argument here.
But we’ve got an interesting dilemma: the people who have the most to contribute have the least to gain, and the people with the least to contribute have the most to gain. The weak links (insofar as they’re contributing very little to the organizational knowledge base) will benefit disproportionately.
The question is ultimately whether the visibility and recognition that come with contribution are enough to encourage future participation. I’m not sure they are, unless there’s some way to measure and/or quantify this visibility. Look at Amazon.com’s reviewer system – one of the keys to Amazon’s business model was its creation of a community of book buyers. Adding the ability to review books and share those reviews with others was a critical component of that community. Yet it was only after visitors could rank the usefulness of reviews that the number of reviews increased dramatically. Why? My guess: the value of reviews was now quantifiable, and contributors were rewarded with measurable recognition.
Maybe the difference between recognition and measurable recognition is a semantic one. But I like the broad notion of aligning compensation and rewards with the overall business objectives. If you want me to contribute to a KM system, make it worth my while. Show me that increased recognition has some tangible benefit. (Note that this doesn’t necessarily mean that I get paid more, but certainly most will take that route.) This has a number of very positive effects: it demonstrates executive-level commitment (if management isn’t willing to put its money where its mouth is, why should the grunts?), it erases any question of what the goals are, and it gives individuals a tangible reason for working towards the larger organizational goals (without sacrificing any individual objectives).
If you can’t somehow align a reward system with the success of the KM system, I’m willing to bet that the KM initiative will fall far short of its goal. I think measurability is the key. In this sense, the RCS ranking system (whose sites are most popular) and Blogdex (how many sites link to you, how many sites you link to) start to show what kind of metrics could be added to a Radio k-log environment. Visits indicate overall popularity, but that needs to be tempered with how many sites in your environment are actually linking to you. So the formula for determining a site’s overall “rank” would look something like:
(visits) * (links in / links out)
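As a rough sketch of what that formula might look like in practice (the function and field names here are hypothetical, not part of RCS or Blogdex):

```python
def site_rank(visits, links_in, links_out):
    """Toy site-ranking score: raw popularity (visits) tempered by
    the ratio of inbound links to outbound links.

    visits     -- visit count, as RCS-style rankings would measure
    links_in   -- number of sites linking to you (Blogdex-style)
    links_out  -- number of sites you link to (Blogdex-style)
    """
    if links_out == 0:
        # Avoid division by zero; with no outbound links,
        # fall back to raw popularity.
        return float(visits)
    return visits * (links_in / links_out)

# A site with 500 visits, 10 inbound links, and 20 outbound links
# scores lower than one with 500 visits and the ratio reversed.
print(site_rank(500, 10, 20))  # 250.0
print(site_rank(500, 20, 10))  # 1000.0
```

One obvious design question this exposes: a site with zero outbound links breaks the ratio entirely, which is one of the holes worth patching.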
RCS measures the first, Blogdex the second. Even as I write this, I can see there are holes. But I’ll leave those for another day. It’s at least a useful exercise to think about…