I’ve been developing a theory of constraints, both as a method for strategy mapping and as a way of better understanding domains and sub-domains in Cynefin. That thinking was in part a stimulus for the liminal version of Cynefin. I’ve used a base distinction of Fixed/Governing constraints for order and Enabling constraints for complex. The notion of Enabling constraints has been extensively elaborated by Alicia (and you still have a chance to make the Cynefin Retreat, where she is part of the faculty). That set is fine for explaining basic Cynefin, but I was looking for a greater range of meaning that would also handle different aspects, or degrees, within each domain (in particular the complex domain). At the same time I’ve been working on the full workflow for a complex systems approach to strategy, and that requires some form of indirect mapping to avoid premature convergence; avoiding any hypothesis is key to sense-making, and part of that is getting people to look at the situation tangentially or abstractly before they form conclusions.
The typology of constraints I posted last December has survived the test of multiple presentations and discussions, and has the advantage of combining the distinction between resilience and robustness, which is critical to any understanding of strategy. I’m assuming familiarity with that earlier post, so if you haven’t read it use the link. A search of this site on constraints will also give you over a hundred other posts on the subject, but they need to be read in the context of the dates they were written. There are a lot of posts on resilience and robustness as well, but to save time I’ll quickly summarise how I use them:
- A robust system is one that survives as is, or with only minor modifications (Shoring it up until Christmas might resonate with older British readers). It can be known and defined, and it provides a clear boundary state or type of linkage that is explicit in nature.
- A resilient system is one that survives with continuity of identity over time, but it survives by changing, and that change may not be explicit or easily understood. Taleb’s anti-fragility fits here, and I don’t buy his argument that it is something different. Self-healing systems, those that become more resilient under stress, have been known for a long time.
Now having a double typology (containers or connectors; robust or resilient) will allow me to create more subtle definitions within the liminal version of Cynefin, possibly replacing the domain models (the three-by-three matrices, for those familiar with the idea). I’ll build on that later in the week and also show what I think is the near-final version of the liminal model, which has changed a little since last week. For the moment I want to talk about a key starting point in situational assessment: the more accurately I can describe the current situation, the better positioned I am to come up with a strategic focus that has some chance of success. The danger is that opinions about the future or the past fundamentally influence how people see the present, and no exhortation to openness or objectivity will change that on a sustainable basis. As with most of my work, I want to create systems that do not require high levels of sainthood to work. We need to work with what we have evolved to be, not what an idealistic facilitator would like us to be. That means working with cognitive bias rather than trying to pretend it doesn’t exist.
One key method for that is to start indirectly rather than directly, shifting to a level of abstraction in the complex–chaotic liminal domain of Cynefin. One way to do that is to start with a full mapping of the constraints in play, and a related mapping of the degree of controllability of those constraints. I’ve been using that approach in several sessions over the last few months, a majority of which have been at C-level, and with some success. I start by presenting the typology of constraints and then get groups to identify those in play at different levels. Ideally I’d do that with SenseMaker® on a distributed basis, but it will work within workshops. Then we go through the following steps (this isn’t necessarily complete; it’s a work in progress and I’m developing new control forms):
- Identify those which we can change
- Identify those which can be changed by other actors
- Complete a risk assessment of the changes we and/or other actors could make
- Identify constraint changes that will minimise risk
- Commence parallel safe-to-fail experiments based on the above
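For readers who think in structures, the steps above can be sketched as a minimal data model. This is my own illustrative framing, not part of the method itself; the class names, the 0-to-1 risk score, and the threshold are all assumptions for the sake of the sketch:

```python
from dataclasses import dataclass
from enum import Enum

# Labels drawn from the double typology: containers or connectors,
# robust or resilient. Names here are illustrative only.
class Kind(Enum):
    CONTAINER = "container"
    CONNECTOR = "connector"

class Behaviour(Enum):
    ROBUST = "robust"        # survives as is; explicit and definable
    RESILIENT = "resilient"  # survives by changing; identity persists

class Controller(Enum):
    US = "us"                # constraints we can change
    OTHER_ACTORS = "others"  # constraints other actors can change
    NOBODY = "nobody"        # not controllable by anyone in play

@dataclass
class Constraint:
    name: str
    kind: Kind
    behaviour: Behaviour
    controller: Controller
    risk: float  # hypothetical 0..1 score from the risk-assessment step

def plan(constraints, risk_threshold=0.5):
    """Mirror the workflow: split constraints by who can change them,
    then keep low-risk candidates for parallel safe-to-fail experiments."""
    ours = [c for c in constraints if c.controller is Controller.US]
    others = [c for c in constraints if c.controller is Controller.OTHER_ACTORS]
    candidates = [c for c in ours + others if c.risk <= risk_threshold]
    return ours, others, candidates
```

The point of the sketch is only that the mapping precedes the experiments: you would populate the list of constraints in the workshop, and `plan` simply filters for the changes worth trying safe-to-fail.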
Ideally all such changes should take place within a fractal relationship, appropriate to context. By experimenting with such constraint changes we allow an emergent approach to strategy, reducing the risk of major shifts. As the experiments start to stabilise, the liminal complex–complicated aspect of Cynefin comes into play before we scale.
Apologies if this is still a little cryptic; I’m making and sharing notes rather than codifying a comprehensive set of methods. That is coming.
So why the knots? More on that in my next post. The images are from Magda Indigo.