Safe-fail probes

November 16, 2007

One of the main (if not the main) strategies for dealing with a complex system is to create a range of safe-fail experiments or probes that allow the nature of emergent possibilities to become more visible. In an ordered system fail-safe design is not only possible but probably mandatory; the only issue is who does the design, and with what validation processes. A complex system has no repeating relationships between cause and effect, is highly sensitive to small interventions, and cannot be managed through outcome-based targets; hence the need for experimentation. Note: if none of this makes sense, then read the HBR article, or at no cost this summary from Tom Stewart, or a more elaborate version here from myself.

Now the issue which arises is how to construct such interventions, and whether there are any rules or principles that would help. I have a reasonably well-established approach here, and recently Raymond over at the narrative lab came up with 9 principles with which I have some, but not complete, agreement.

Firstly, to my normal mode of operation. Taking the Cynefin framework in a workshop environment (ideally around half a day, but an hour is enough if that is all you have), I get the group to break a strategic issue into the four main domains of the Cynefin framework, in effect treating it as a categorisation framework. More elaborate processes involve full sense-making, with the framework emerging from the data, but let's keep it simple for now.

One of the rules for this is to break the issue or problem up into discrete issues which sit clearly in one domain or another. This helps the process a bit, as ordered systems are capable of being reduced to parts; if there is a problem in reduction, it's a good indicator that the problem is complex. It is rare here to find things in the chaotic domain, but it has been known. Either way, that task complete, I break the group up and let them work in parallel (if time) or on separate areas (if little time) with the following decision rules; a toy sketch of them in code follows the list.

  • Simple: who should do what by when, and how will success be measured? What are the costs, and can they be justified?
  • Complicated: which group of experts, or what exploratory process, needs to be engaged to work out the correct answer or approach to the situation? What will this cost, over what time period, and can it be justified? If it can't, i.e. there is an answer but it is going to take too long to find it, shift the problem to the complex domain.
  • Chaotic: what actions do we take now to stabilise the situation? Are we going to impose order (shift the problem to simple) or create enough stability that we have time for experiments (shift the problem to complex)?
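
If it helps to see the logic laid out, here is a toy sketch of those decision rules in Python. The domain names are Cynefin's; the function, parameters and return strings are purely illustrative assumptions, not part of the method itself.

```python
from enum import Enum, auto

class Domain(Enum):
    SIMPLE = auto()
    COMPLICATED = auto()
    COMPLEX = auto()
    CHAOTIC = auto()

def triage(issue: str, domain: Domain, answer_affordable: bool = True,
           impose_order: bool = False) -> str:
    """Apply the decision rules above to one discrete issue.

    Issues can migrate between domains: a complicated issue whose
    answer would take too long or cost too much to find is treated as
    complex, and a chaotic issue is either forced into the simple
    domain or stabilised long enough to buy time for experiments in
    the complex domain.
    """
    if domain is Domain.SIMPLE:
        return f"{issue}: agree who does what by when, success measures and costs"
    if domain is Domain.COMPLICATED:
        if answer_affordable:
            return f"{issue}: commission experts to find the correct answer"
        return triage(issue, Domain.COMPLEX)  # answer too slow or costly to find
    if domain is Domain.CHAOTIC:
        target = Domain.SIMPLE if impose_order else Domain.COMPLEX
        return f"{issue}: act now to stabilise, then treat as {target.name}"
    return f"{issue}: design parallel safe-fail probes (see below)"
```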

If the groups are working in parallel, then we compare and synthesise results. If the problem has been broken up, we do a quick validation and then proceed to the interesting bit: how to handle the residual complex issues. This is often the largest space, especially if you are dealing with conflict resolution, for which the process is well suited.

Now at this point there will be lots of views, often conflicting, as to how to deal with the issue. This is where conflict is most frequent, because the nature of the domain means there is contradictory evidence, and disputes are likely to escalate as participants become locked into a particular perspective while defending it. So here are the stages, with another toy sketch in code after the list:

  • Before opinions harden you create a very simple decision rule: everyone with an idea that has even the remotest possibility of being true or useful creates a safe-fail experiment based on that idea. Critically, this does not have to be an experiment that would prove the position, just one consistent with it.
  • Next, each proposal is fleshed out, costed and subjected to challenge and review, but nothing is ruled out unless rationing of resources is required. This is rarely the case, by the way, as you keep the experiments small and designed for fast feedback and evolution.
  • For each experiment to be valid, its outcome must be observable: not necessarily to measure, but to allow the simple rule of amplifying good patterns and dampening bad ones to be put into operation. There is no point in an experiment where you cannot observe what is happening.
  • The experiments are then reviewed for common elements and resourced, along with the set-up of monitoring and review processes.
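
A minimal sketch of that probe lifecycle, again with every name and structure my own illustrative assumption rather than anything canonical:

```python
from dataclasses import dataclass

@dataclass
class Probe:
    """One safe-fail probe: small, cheap and observable by design."""
    idea: str
    observables: list[str]  # what we will watch, not necessarily measure
    amplify: str            # planned response if good patterns emerge
    dampen: str             # planned response if bad patterns emerge
    resourced: bool = False

def is_valid(probe: Probe) -> bool:
    # The one hard rule: no observables, no probe.
    return bool(probe.observables)

def review(proposals: list[Probe]) -> list[Probe]:
    """Flesh out, challenge and resource every observable probe.

    Nothing is ruled out unless resources must be rationed, which is
    rare because probes are kept small for fast feedback.
    """
    kept = [p for p in proposals if is_valid(p)]
    for p in kept:
        p.resourced = True
    return kept
```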

All neat and tidy, and it works well. So let's have a look at Raymond's 9 rules and provide some commentary. I am not doing this to be critical, by the way; this is good stuff, and we need more exchange in the network to create more robust methods. Raymond's words in italics, my commentary following.

Don't be afraid to experiment – some will fail.
I would go further than this and say that experiments should be designed with failure in mind. We often learn more from failure than success anyway, and this is research as well as intervention. We want to create an environment in which success is not privileged over failure in the early stages of dealing with complex issues.

Every experiment will be different – don't use the cookie-cutter approach when designing interventions.
Yes and no. You might want the same experiment run by different people or in different areas. Cookie-cutter approaches tend to be larger scale than true safe-fail probes, so this may or may not be appropriate.

Don't learn the same lesson twice – or maybe I should say, don't make the same mistake twice.
Disagree: you can never be sure of the influence of context. Often an experiment which failed in the past may now succeed, and your competitor may well learn how to make your failures work. Obviously you don't want to be stupid here, but many a good initiative has been destroyed by the "we have tried that" argument.

Start with a low-risk area when you begin to experiment with a system.
Again, yes and no. If you are talking about the whole system, yes, but normally complex issues are immediate and failure is high risk. The experiment is low risk (the nature of safe-fail is such that you can afford to fail), but the problem area may well be high risk. In my experience, complexity-based strategies work much better in these situations.

Design an experiment that can be measured. That is, know what the success and failure indicators of each experiment are.
Change "measure" to "monitor" and I can agree with it. The second sentence I would delete.

Don't be afraid – did I mention that already?
Cool, fully endorse.

Try doing multiple experiments on the same system – even at the same time. Some will work, some will fail – good. Shut down the ones that fail and create variations on the ones that work.

Introduce dissent. Maximize diversity in the experiment design process by getting as many inputs as possible.

In the main I agree with both of these, but see the above process. I generally don't like the words failure and success, as they seem inappropriate to probes; amplifying and dampening, as above, fit better. A toy sketch of the shut-down-and-vary loop follows.
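
Purely for illustration, that dynamic reads like a simple evolutionary loop. Everything here is a hypothetical sketch, with dampening modelled as dropping a probe and amplification as creating variations on it:

```python
from typing import Callable

def evolve(probes: list[str], observe: Callable[[str], bool],
           generations: int = 3) -> list[str]:
    """Run probes in parallel; dampen poor patterns, amplify and vary good ones."""
    for _ in range(generations):
        good = [p for p in probes if observe(p)]            # watch what each probe does
        probes = [vary(p) for p in good for _ in range(2)]  # two variants of what works
    return probes

def vary(probe: str) -> str:
    # Placeholder: in practice, variation is a human redesign step.
    return probe + " (variant)"
```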

Learn from the results of other people's experiments.
Yep, but remember your context is different.

Teach other people the results of your experiments.
Yep, but remember your context is different.
