There is a trick I pull in workshops that deal with models. I start by putting a toy elephant on the table in front of me and ask people what it is. Most people will say that it’s an elephant. So I ask what an elephant does; the answers come easily – it provides transportation, moves logs, performs tricks in circuses and so on. I then ask, “does this elephant do any of that, in any conceivable way?”. No … it’s a model of an elephant. The model is not a simplified form of reality (e.g. it doesn’t move tiny logs small distances); it merely represents reality. It’s a metaphor for an elephant – at best the model reminds us of what we’ve heard elephants might do, maybe even seen them doing, but it doesn’t actually reflect that behaviour.

It’s a cheap, flawed but effective trick, since it reflects one of the greatest mistakes people make when constructing and using models: confusing models that are simplified versions of reality with models that are representations of reality. In very general terms, according to the modelling literature, models that are simplified versions of reality can be used for prediction (or explanation), whereas models that merely represent reality can be used for gaining insights but not for prediction.

Evaluators love models, and managers love (indeed often demand) the models evaluators construct. In particular, evaluators and project managers love “logic models”, as Irene and I have indicated in our separate blogs. For me “logic models” represent the logic of an intervention and the possible or intended consequences of that intervention. They are not a simplified form of either the intervention or its consequences. Additionally, they represent a particular framing of that intervention. There will always be additional framings that lead to different logical representations. The nature and consequences of yesterday’s Rugby World Cup match will be different depending on whether we frame it as sport, as entertainment, as marketing of sponsor products or as an expression of tribal or national identity.

Many evaluators have real problems sorting this out and believe the models they create are simplified versions of reality. The consequence, at least in my view, is that it locks evaluator thinking into a rather mechanistic deterministic frame. Now let me be clear, as I stated in an earlier blog, I do not have significant problems with mechanistic, deterministic thinking that treats everything as if it were simple. A food recipe really helps me bake a cake without a double major in physics and biochemistry. The aircraft cockpit checklist is essential to my bimonthly commute between New Zealand and the USA, and the operating room checklist is increasingly important should I need brain surgery.

So what has all this to do with Cynefin and Sensemaker? And indeed what has it got to do with our recent trial of Sensemaker?

The Cynefin framework usefully distinguishes between simple, complicated and complex aspects of a situation. Without going into unnecessary detail, the kinds of models evaluators use tend to work best at describing and gaining insight into those aspects of an intervention and its interaction with a situation that would be categorised by the Cynefin framework as falling into the complicated and simple zones. Typically they describe a sequence of cascading changes in a situation that are linked in some way. You have situation A, stuff happens and you have situation B.

The evaluation field has very good ways of identifying and determining the significance of the cascading changes to a situation, but often gets into considerable trouble with assessing the links between those changes; the “stuff happens” part.

Enter Cynefin, Sensemaker and the little experiment we conducted that I mentioned in my last blog. Recall we had one person – an expert in the field – take the narratives we collected and provide a series of explanations of the “stuff happens” and its significance. The analysis was insightful, detailed and convincing. It was the kind of approach that people are familiar with, in a format that is also familiar: an expert’s-eye view of events and opinions as described in the narratives.

Now consider the analysis of the narratives via Sensemaker. Compared with the expert analysis, it was a much tougher and less definitive analytical task, and above all it raised more questions about the emerging situation and the relationships between the cascading chain of changes in the situation. It was often difficult to explain how the position of the narratives on the triads and dyads was reflected in the narratives themselves. Rightly or wrongly, evaluators tend to raise validity concerns when these kinds of things happen. In general evaluators are expected to come up with answers in terms of “worth” rather than add further to the initial questions that underpinned the evaluation.
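For readers unfamiliar with triads: a triad response places a narrative inside a triangle whose corners are three signifiers, with the narrator’s relative emphasis on each signifier determining the position. The following is a minimal, hypothetical Python sketch of that geometry (barycentric coordinates) – it illustrates the idea only and is not SenseMaker’s actual code or data format:

```python
import math

# Corners of an equilateral triangle, one per signifier.
TRIAD_VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]

def triad_position(weights):
    """Map three signifier weights (summing to ~1) to a point
    inside the triangle using barycentric coordinates."""
    total = sum(weights)
    if not math.isclose(total, 1.0, rel_tol=1e-6):
        weights = [w / total for w in weights]  # normalise defensively
    x = sum(w * vx for w, (vx, _) in zip(weights, TRIAD_VERTICES))
    y = sum(w * vy for w, (_, vy) in zip(weights, TRIAD_VERTICES))
    return (x, y)

# A narrative weighted entirely to the first signifier sits at that corner:
print(triad_position([1.0, 0.0, 0.0]))   # (0.0, 0.0)
# An evenly balanced narrative sits at the centroid:
print(triad_position([1/3, 1/3, 1/3]))
```

Plotting many such points reveals the clusters and gaps the blog refers to; the analytical difficulty is in explaining why a given narrative sits where it does.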

So what’s the possible role of Sensemaker under these conditions? Recall that in our work we constructed the triads and dyads on the basis of various “stages” of the cascade. As it happens, when we asked people to put these “stages” in order they came up with different orders – a nice example of a model being just a representation of one framing of reality; different framings and different assumptions about how the elements of the cascade fit together ended up producing different patterns of cascade.

However, when looking at all the possible combinations, you can usually spot some “stages” of the cascade that are critical in all framings and sequences. These critical points often contain an entangled mess of simple, complicated and complex aspects. With the simple and complicated aspects we can potentially understand what is going on – but the complex aspects may be beyond our comprehension. You may remember that our trial of Sensemaker sought to highlight this by running data rehearsals of what we thought might happen and what the Sensemaker patterns would be if it did. In the actual results some of the key patterns were as hoped for, and some were quite radically different. Where the patterns were radically different, something unknown is going on that needs to be watched and analysed carefully; it falls into the complex category rather than the complicated one. It would be unwise to rely on a single source of information – we may gain insights from the narratives, we may gain insights from asking experts, we may even gain insights by going back to those supplying the narratives and seeking further explanation.

I suspect that these critical areas, those that fail to behave as anticipated, are where the potential uses of Sensemaker will be strongest for evaluators, but as an addition to rather than a replacement for other approaches. Sensemaker may not be the only route through complexity.

And a final coda. Sensemaker is generally conceived as a way of gathering large amounts of narrative data and displaying the collective meaning of those narratives as conceived by the narrative providers. In purely complexity terms it enables the location of small emerging patterns within larger, more stable patterns. Evaluators are interested in both the large stable patterns and the small emerging patterns. In our case we were using relatively small samples – the initial analysis was with just over 50 narratives. Thus our ability to spot emerging patterns was perhaps limited, yet some of the more stable patterns were quite clear. In which case big samples may not be necessary – Sensemaker can help pose good questions at an early stage of an inquiry, long before we are expected to come up with good answers. The question is whether it can spot stable patterns and provide explanations for them at least as well as, and preferably better than, more established methods of inquiry.

This could be the last in this series of blogs and I don’t wish to tread on the next person’s toes. However, tomorrow I may be trying out some of these ideas with an agency that is seeking to resolve the tension between models as simplifications and models as representations, especially in situations that contain complex aspects. They may have some further insights into the issues I’ve covered – and if they do I’ll let you know.
