Some of the comments on my first blog post raised questions about expertise. Patrick Lambe wondered if it was possible for people in the KM community to develop expertise. I responded in a comment to Patrick, but I think it may be useful to examine what criteria we use to assess experts.
Also, Wayne Zambergen asked directly: "What is expertise?"
I believe we can reduce the confusion by matching expertise to the complexity of the domain. We can’t expect the level of mastery from a weather forecaster that we do from a chess grandmaster. And we can expect even less from a Knowledge Management specialist.
According to Boisot and Snowden, one way to describe domains is to ask whether they are ordered, complex, or chaotic.
The weather system is a complex domain. Chess is an ordered domain. We expect different levels of expertise from forecasters and grandmasters. Stock markets are complex and perhaps chaotic. The world facing knowledge managers is more complex than the world facing weather forecasters. Therefore, the level and kind of expertise will be different from what we would find in chess or weather forecasting.
Compared to these other domains, for knowledge managers the ease of forming and discriminating categories will be much lower, and the number of categories proposed to make sense of a phenomenon will be much higher (and those categories less abstract). I invite readers to study Patrick's wonderful book "Organising Knowledge: Taxonomies, Knowledge and Organisational Effectiveness."
My assertion is that we should match the expected level and kind of expertise to the nature of the domain. Yes, it does make sense to talk about expert knowledge managers, but what we would expect from an expert KM differs from what we would expect from a chess grandmaster.
Factors that will determine the level of expertise that can be achieved in a domain:
We should also consider the kinds of knowledge experts would have. This is a fairly standard list we have all been using over the years. All except the first involve tacit knowledge.
How to evaluate experts? There are several kinds of criteria we can use.
For KMs, the relevant criteria are the comparative standard, the social standard, and the organizational standard. The normative and analytical standards don’t seem to apply. The outcome standard is too flimsy to be reliable but might be used if you can gather reasonable data on the KM systems developed.
The organizational contribution may be very important for KMs. In a complex/chaotic world we can't expect KMs to predict the future very accurately. But they can raise issues and identify discriminations that help policymakers make decisions. They can warn people against applying practices that have worked in similar projects in the past but would fail here because a key requirement differs markedly from the analog cases.
What could we expect from expert KMs? Here are some potential metrics.
Finally, how can KMs develop expertise?