My last blog was sent from the Air New Zealand transfer lounge in LA. This one is sent from Geneva. This physical movement reflects my shift in focus from Cynefin to Sensemaker. But my general theme remains the same. Both Cynefin and Sensemaker appeal at the intellectual level, yet at the specific level of practice there are some tough questions to be asked about what additional insights they deliver when compared with existing methods of situational analysis, complex or otherwise. So far in this blog I’ve used Cynefin examples that people found useful, but in other cases the response to Cynefin ideas has been “interesting, but so what? This has not told me anything I didn’t already know or discover in a more familiar manner”. As an evaluator, interested in issues of intrinsic merit and contextual worth, my ears prick up whenever someone poses the “so what” challenge. As a methodologist these challenges are important because investment in new methods and approaches is expensive in time, cost and intellectual effort. There has to be a good reason for me – for anyone – to learn something new; and one of those reasons is to enable me to do things I’ve never done before. So let me, in a friendly and hopefully supportive way, pose a few challenges to the Sensemaker community.
The documentation around the ideas that surround both Cynefin and Sensemaker is exciting and convincing. The energy with which proponents promote the benefits is undeniable. Yet the documentation around the specific use of these approaches is relatively scarce, and the value of the material to the new practitioner rather questionable. There’s quite a lot of “we had a problem, we used Cynefin/Sensemaker, a miracle happened, and we are now enlightened”. But to a methodologist and evaluation practitioner this does not provide sufficient evidence of either merit or worth. I need to see the handle cranking; I need to have some facts as well as the hyperbole.
This might seem an unduly negative introduction to my reflections on working with the Sensemaker ideas. But my reality is that I work with a bunch of highly experienced applied social researchers, organizational consultants and policy makers, who have on hand all kinds of clever ways of wrestling with messy issues. They have seen analytic fads and fashions come and go over the years. And they don’t have access to budgets with multiple zeros, even though the projects they are involved in reach into the billions. So they tend to ask very tough pragmatic questions of new approaches. And the publicly available material surrounding Sensemaker and Cynefin frequently fails in their eyes.
So over the past year we’ve staged our entry into the Sensemaker universe pretty carefully. A small experiment here and a lot of preparatory work there. Many questions to people in the Cognitive Edge networks about the practicalities of moving towards a Sensemaker-based piece of research. It steadily became clear to us that the Sensemaker process, as advertised, is a very different way of conceptualising and undertaking social data collection and analysis. We found we had to learn entirely new ways of conceptualising the kinds of questions you ask in a questionnaire. We went over and over again what the triads may mean. Each triad had a documented purpose with a documented use in decision making. The triads and dyads were developed specifically from the stated (and implicit) goals of the process we were assessing. We piloted the whole thing – and tried to make sense of the results (and thanks to David, Mark, Irene and others for helping us in the process). We learned nuances and idiosyncrasies of the Sensemaker software that lay way beyond the public workshops.
We even conducted a data rehearsal, where we invented different kinds of narrative patterns and assessed how we might understand and use them. Despite all this, some of my colleagues continued to question both the validity and the usefulness of what we were doing. Finally we took a deep breath and sent the material out into the world.
When the results came back we conducted one final experiment. We analysed the material using the Sensemaker software, and we got one of our team members to make sense of the narratives directly: a traditional “expert” analysis, with all the risks that Sensemaker seeks to overcome. The intent was to compare the usefulness of the two analyses in terms of informing future decision-making.
The results were interesting, and I’ll discuss the implications for the wider adoption of Sensemaker in my next blog.
There is a strong tendency in the evaluation field right now to treat everything as if ...