The blog on Anonymity generated some really great responses. Thanks to all who contributed. In particular, Michael Cheveldave sparked my thinking about the relationship between trust and complexity. In my last book, Managing Interactively, I talked about the relationship between trust and speed in organizations. My informal observations over the years have shown me that trust levels have a significant impact on the strength or weakness of connections between agents in complex organizations.
In my observations of complexity-based approaches to group interactions, trust affects levels of interpersonal disclosure as well as the strength and maintenance of connections between participants before, during, and after a meeting.
Now, back to Michael’s comment: “…perhaps disclosure of identity acts as a constraint or boundary which is in contrast to anonymity which, as stated earlier, might contribute to the stimulation of an attractor.” Can the manipulation of factors that affect trust levels serve as constraints or stimulate attractors? If so, what would those factors be and how might they be manipulated?
Let’s take identity disclosure as an example. If we held a complex facilitation exercise online with everyone masking their true identity, what do you think would be the difference between that exercise and a face-to-face one? Michael, I think you’re right that the virtual can be a reflection of the real in complex workshop settings, but those virtual exercises are often designed or selected to resemble existing environments in principle while differing from them in appearance.
To date, I have focused exclusively on promoting high levels of trust in organizations. Now I’m rethinking the role of trust in interactions between agents in a complex context. Could high trust levels actually be detrimental to decision making in complex contexts? I found an article that explores some intriguing trust issues between humans and complex computerized systems.