8: Issues of evidence and judgement & a false dichotomy
The phrase "evidence-based decision making" is commonly touted, and commonly misconstrued, in government and industry alike. In part this comes from an incomplete understanding of causality, which seems to stem from a desire for certainty of outcome. The concept of evidence here seems to mean the reliability of a statement of the form "if X then Y", applied to evidence of past practice. The practice may be general, or it may come from specific trials where a hypothesis of some type has been tested. So when someone asks if there is evidence for a particular approach, they seem in the main to be asking: has this worked before? In fact they should be asking a different question, namely: is this the right direction to take in the future? Many a problem has come from assuming that a positive answer to the first leads to a positive answer to the second.
In industry this produces a tendency towards recipes based on past case studies, something I addressed in the second of these posts when referencing the confusion of correlation with causation. The difficulty with such an approach under conditions of extreme uncertainty is that a case-based approach is a priori wrong, and this is self-evident to even the most consultancy-entrained leader. I say extreme advisedly, as there is a strong tendency in most organisations to try and pretend that ordinary day-to-day uncertainty results from inadequate expertise or investigation, rather than being an innate aspect of the dispositional state of the system at the time. This results in the myth of the great leader, an all too common form of fatalism which falls back on the great leader making the right call when there is no evidence, or sometimes in the face of the evidence. So we end up with a dichotomy between past evidence and future inspiration. Now that can work, and there are cases of great leaders making the right calls, most of which are regurgitated ad nauseam in the popular management literature to support various collections of platitudes masquerading as leadership models.
But it doesn’t have to be this way. The assumption that past practice can lead to future deterministic solutions is valid within an ordered system with a common context and stable, known or knowable constraints. The need for inspired leadership making quick, decisive calls is uncontested in a chaotic environment, where all the old certainties have been lost and we need some pathway out of crisis. Yes, the leader should focus on constraints, not solutions (something we teach on day two of the full four-day programme), but it does need a call. So if we accept a dichotomy between order and chaos, something that is also common, then the evidence-leadership dichotomy applies.
However, most of the time we are dealing with complex systems, not ordered or chaotic ones. The most common characteristic of a complex system is that the evidence available is always incomplete and will support competing and inconsistent (there is a difference) hypotheses. In such cases we cannot make a call for a single solution, nor should we try to get one approach to work in linear iterations. Instead, any coherent hypothesis should be resourced as a safe-to-fail experiment, which in turn results in a portfolio of such experiments from which a coherent direction of action will emerge and will be evident (if not self-evident) to the various actors in play. Now a lot of those who are using complexity language to justify old methods (my sixth post) don’t get this, using complexity to justify either the crypto-anarchist extreme of unconstrained self-organisation or the new-age fluffy-bunny morass of highly facilitated engagement in platitudinous conversations, based on goals that make people feel good but change little other than to reinforce self-righteousness. Complexity management is a disciplined process, not an indulgence.
Trials, then, are an example of a one-time call where parallelism is needed. The problem with a trial is that you are paying attention to something which is novel in a restricted community. Such trials often work; in fact they mostly work, because of the investment of time, attention and sheer novelty, which always gains some response. But the context of a trial is not an operational roll-out. There is also a snake-oil variation of this which manifests frequently, and which I have most recently seen in SAFe. Here a new technique is used in a moribund bureaucratic organisation (it’s a lot easier to experiment in such groups, as I know from my IBM days). Lo and behold, a positive result is reported. The consultant who brought the technique celebrates the success, and the sponsoring executive joins them on the conference circuit and shortly afterwards becomes a consultant selling the recipe. Now most people who do this are really well intentioned but terribly naive, as are those who buy their story. I’m also pretty cynical about executives who claim a success but then leave, rather than demonstrating their ability to repeat that success, or gaining the benefit of the success they claim through promotion. Cynicism aside, a single intervention using a novel approach with a lot of attention is never going to scale, and it does not constitute evidence.
I have seen the same problem in abundance in government, which seems to think that all communities share a common context. From that it assumes that a trial in one community will immediately produce a similar success when scaled to multiple other communities with, by definition, less attention and less novelty.
Trials that are reported to have worked, in the main, have suffered insufficient tribulation to be resilient.
The picture, by the way, is William Blake’s “Job’s Evil Dreams”.