I sort of knew this before but only really articulated it at the seminar today here in Wellington. The essence of interventions in a complex space is that you get the system to model itself; you don't (unlike systems thinking) attempt to model it. This resonates with Gell-Mann's idea that the only valid model of a complex system is the system itself. It's one of the key differences between human complexity thinking on the one hand, and systems thinking and aspects of computational complexity (agent-based modeling) on the other. The dangers of partial or incomplete representation are too high if the system is complex and involves humans, so the focus is on whole-system interventions.
Another great quote from Gell-Mann while I am at it: "Just because things get a little dingy at the subatomic level doesn't mean all bets are off." It supports my proposition that messy coherence is a pre-condition for human sense-making.