James Dellow hopes that this paper will cause a bit of a stir in the knowledge management community. The following extract from the paper's summary may explain why:
we find that sharing codified knowledge in the form of electronic documents saved time during the task but did not improve work quality or signal competence to clients. In contrast, sharing personal advice improved work quality and signaled competence but did not save time. Beyond the content of the knowledge, process costs in the form of document rework and lack of advisor effort negatively affected task outcomes
Now for many of us that is not a revolutionary idea, especially those members of the community who have consistently challenged the codification strategies that grew out of Nonaka's SECI model. So it's nice to have a paper with the authority of Wharton and INSEAD authors to make the point. That said, I think we really need to take this paper with a pinch of salt.
I have several reasons for saying this, and to be honest similar points could be made, in whole or in part, about many academic papers in this field, or about hypothesis-based research in general. So, what are my concerns?
- Like many of the early KM case studies, this work was done in a professional services firm, where codification in various forms and the peer relationships arising from team work are very strong, so we need to be careful in drawing more general conclusions.
- The survey was done on sales teams, which come together for specific periods of time with a task focus, deadlines, and the various pressures of win/loss. In a professional services firm, making partner (and becoming very rich) tends to depend on sales, so this is a special environment.
- Understanding win/loss team differences is problematic. Back in the 1990s I was doing research within IBM sales teams (outsourcing, so a similar context) which demonstrated huge recall differences between teams who had won bids and teams who had lost. The authors tested the responses of win/loss/in-progress teams and found some interesting differences that, to my mind, are not explored enough.
- They compare document use with personal advice. In that context, if those were the only options, then the conclusion carries no surprise. However, their conclusion does not necessarily follow, as there are more options available. For example, if the material is held in a more fragmented form (see my recent keynote), then the codified material is both cheaper to produce and the recall mechanism is closer to the nature of verbal or personal enquiry.
- They rely on surveys, which in effect have to contain the hypothesis and thus indicate to the subjects of the research what is being sought. I am pretty sure that if they had carried out ethnographic observation of team behaviour they would have seen radical differences from what was reported in their surveys. In fact, the frequency with which observed behaviour does not match reported behaviour should really kill off surveys in this field, but without surveys where would the average academic get their publications from?
- They only surveyed team leaders, rather than team members. Now this could be a major issue, as the experience would be very different. This statement from the paper says it all: "We surveyed the team leaders only both to reduce the cost and effort involved and because our interviews indicated that it would not be preferable to use aggregated team member responses, as this would require weighting the responses of marginal and central team members equally."
- They fail to consider the scenario of advisors using personal document stores (or corporate ones) to support their advice; in effect, the use of the librarian function to synthesise written material with contextual human need.
Overall it's a good and useful paper. I think my real problem with this (and most other papers) is threefold:
- In any knowledge-sharing environment you are dealing with a complex system, so the creation of survey instruments with dependent and independent variables is inappropriate. It gives a happy appearance of order and structure, but in reality there are too many factors (or, better, modulators) in play. Five minutes' reflection produced those listed above, and it would not be difficult to create more.
- Academics seem wedded to the tacit/explicit and advice/documentation dichotomies rather than realising that tools are a part of human intelligence.
- My generic issues with hypothesis-based research and the difference between observed and reported behaviour.
So: worth reading, and I will cite it, but be careful with the claims.