Cancer and DNA (requires Real Audio player) ::: Driving home last night, I caught this report on NPR about a couple of researchers who are trying to identify genetic causes for various forms of cancer and other diseases. According to Michael Myerson, the researcher interviewed for the story, “the human gene sequence database contained a lot of things that weren’t gene sequences.” Myerson and the other researchers used the human gene sequence database to help find new microbes that cause diseases. They looked at a total of 7,000 sequences and found 22 that didn’t match known human gene sequences, then narrowed those down to two that matched the human papilloma virus. With that, they were able to identify the microbial cause of cervical cancer. (There’s more lab work to be done to confirm the findings, but you get the idea.) An abstract of the article is available here; the full text is available but requires a paid subscription.
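The filtering process they describe can be sketched in a few lines of code. This is purely illustrative: the data is made up, real sequence matching is far more involved than set membership, and none of the names below come from the actual study.

```python
# Toy sketch of the subtraction idea: start with sequences from diseased
# tissue, discard anything matching known human gene sequences, and see
# whether what's left matches a known pathogen. All data is invented.

def subtract_known(candidates, human_db, pathogen_db):
    """Return candidate sequences that match a pathogen but not the human DB."""
    non_human = [seq for seq in candidates if seq not in human_db]
    return [seq for seq in non_human if seq in pathogen_db]

# Short stand-in strings, not real DNA.
human_db = {"AAGT", "CCTA", "GGTC"}
pathogen_db = {"TTGA"}  # e.g. a known viral sequence
candidates = ["AAGT", "TTGA", "CCTA"]

print(subtract_known(candidates, human_db, pathogen_db))  # ['TTGA']
```

The point survives the simplification: the human database wasn't built for this, but because the raw sequences were kept, they could be reused as the "known" set to subtract against.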
What intrigues me about this report isn’t the medical breakthrough these researchers have made. Interesting, sure. But it’s a bit beyond the scope of my interests. What is really interesting to me is the notion that the breakthrough was a completely unanticipated result of the collection of information in the human genome database. The purpose of collecting the gene sequences in the database was to map the human genome – and the info that made up the core of this research project was an afterthought of that goal.
This is a great example of the premise that you just don’t know what you’ll want to know down the road. It seems that one of the things any knowledge management initiative must do in order to succeed is to ensure that as much information is captured as possible – because there’s no way you’ll be able to predict which pieces of information will be useful to you a year from now. If that information is collected, then you may be able to use it. If it’s not, you won’t.
John Robb, President and COO of Userland Software, maintains a Yahoo! Group called K-Logs. The idea is that weblogs (like this one) can serve a KM purpose. As weblogs are used throughout an organization, the collection of observations, links, and articles will form a collective body of work that people can use to share knowledge, search knowledge, and learn. (As an aside, I use blogger.com as my tool of choice for updating this web site. Userland just released Radio 8, their new weblogging software package. Blogger.com is geared to periodic writing; Radio 8 seems more cleanly geared to the quick collection and sharing of observations. Also, there’s technology packaged in the software that makes it easier for others to monitor the sources of information they’re interested in. The notions of information, editors, and finding what you want are all intriguing… but will be covered later.)
In any event, a number of KM systems have focused on the synthesis of information – the attempt to make machines capable of distilling information and getting it to the right people. Robb’s approach (which he calls k-logging) is to shift the synthesis back to the people, and make the systems simply a mechanism for collection of the information. Can’t say that I disagree with him on that point.
However, in order for an organization to benefit from this setup, it has to have a culture that encourages and rewards learning (this may sound obvious, but how many organizations are truly wired this way?) and a leadership structure that recognizes the advantages of capturing and sharing this information. Without those two, no effort at capitalizing on a KM strategy will work. I think weblogs provide an attractive mechanism for collecting the information. The next step is to make sure that the organization can do something with it.
As I read the Business 2.0 article on KM that I briefly mentioned last week, I started thinking that the author actually proves my point. (Thanks also to Sean Roche, who e-mailed several thought-provoking points on the subject.) The author points out that both CasePoint at a customer call center and the intranet at PWC were less than successful – and attributes the failure to a lack of interest on the part of individuals in investing time with the computer. “Most people go down the hall,” he reported.
But this doesn’t necessarily indict KM per se – it just indicts those examples. Isn’t it possible that the failure was attributable to the highly structured nature of the systems that were built? If the information were stored, and there was a Google-like interface that allowed you to intelligently sift through the data, wouldn’t that be far easier to embrace? (Come to think of it, that’s the connection to yesterday’s post about Google’s ability to render domain names moot…)
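To make the "Google-like interface" idea concrete, here's a minimal sketch of searching stored notes by keyword instead of navigating a rigid structure. The documents and queries are invented for illustration; a real search engine adds ranking, stemming, and much more.

```python
# Minimal keyword search over stored notes: build an inverted index
# (word -> which documents contain it), then answer queries by
# intersecting the sets for each query word. All data is invented.
from collections import defaultdict

def build_index(docs):
    """Map each word to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())
    return results

docs = {
    1: "customer call escalation procedure",
    2: "weekly call center metrics",
    3: "intranet publishing guidelines",
}
index = build_index(docs)
print(sorted(search(index, "call center")))  # [2]
```

The design choice mirrors the argument above: the system only collects and indexes; the person typing the query does the synthesis.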