How Complexity Theory is Actually, but Only Limitedly, Helpful for Guiding Action in a Complex World

Beware, straw men and generalizations ahead

 

Those at the leading edge of applying complexity sciences to behavior are in a performative contradiction. They often point out that the behavior of complex adaptive systems (or complex responsive processes) cannot be predicted ahead of time: there are too many small variables, any one of which could have an outsized effect on the future behavior of the system; and/or a system or process cannot be completely modeled by an agent who is part of that system; and/or the calculation to predict the system's future behavior would take as much time or energy as the entire system takes to get to the future itself. They then critique more 'linear' thinkers ('linear' here often used as a pejorative) who attempt to 'control' the outcomes of complex systems, describing their arrogance as the source of all large-scale problems in the world today.
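The "small variables with outsized effects" claim has a standard toy illustration: the logistic map in its chaotic regime, where two starting points that differ by an amount far below any plausible measurement precision end up on completely unrelated trajectories. A minimal sketch (the map, parameter, and perturbation size here are just the textbook choices, not anything from a particular complexity author):

```python
# Sensitive dependence on initial conditions, shown with the logistic map
# x_{n+1} = r * x_n * (1 - x_n) at r = 4.0, a classic chaotic regime.
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # perturbation: one part in ten billion

# Early on the two trajectories are indistinguishable...
print(abs(a[1] - b[1]))
# ...but the gap roughly doubles each step, so within a few dozen
# iterations the trajectories bear no resemblance to each other.
print(max(abs(x - y) for x, y in zip(a, b)))
```

The point is not that prediction is impossible in principle here (the map is fully deterministic), but that any uncertainty in the starting state, however tiny, is amplified until the forecast is worthless, which is the practical sense in which such systems "cannot be predicted."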

 

The contradiction is that if it’s the case that the behavior of complex systems cannot be predicted, then they should have no idea whether applying complexity theory to individual or organizational behavior will have any better results at all. But if it can be predicted, if you can treat a complex system as an object upon which you can act, even from the inside, then you are back doing “linear” thinking, which you’re not allowed to do. As far as I can tell, there is no way out of this contradiction without admitting it.

 

Behold, steel men and appreciation ahead

 

The major practical insight for human-scale interactions is that complexity humbles us into admitting a much deeper level of uncertainty than we admitted before. While previous knowledge quests supposed a complete model of everything waiting at the end of their yellow brick roads, the various complexity sciences have destroyed the very hope from which they arose. Left in their wake is the recognition that uncertainty will always be with us and, even better, that our 'mental models' are poor reflections of the thing which is happening to us, and which we are creating, right now in our perception and through our actions. First the map could be the territory, then the map could never be the territory, then behold! the territory!

 

Truthful, honest, practical syntheses

 

Obviously there is something valuable in a deeply mathematical description of why uncertainty will always be with us. At the very least it adds to what we know is true about the world. But what we know is true about the world is itself a model. Complexity sciences do not destroy our models; they make them better. They make them better by pointing us back to our experience of what is happening right now: openness to new possibilities and big leaps, collectively emergent decision-making rather than individual, authoritatively delivered decision-making, and the information we can gather from our senses. Yet they do not make strategy, modeling, evaluation, or prediction unnecessary or impossible. These are still possible and still helpful; we simply now know that there will likely always be a need for improvisation and that there will likely always be unintended consequences of our actions. There will always be some uncertainty, yes, but in differing amounts, and never total.

 

Complexity theorists like to make a distinction between complex systems and linear systems. If such a distinction holds in the real world, it is not the kind of distinction they think it is. First of all, nothing is purely either. The universe as a whole, and the smaller-scale interactions within it, are the kind of thing that will always elude formal description, whether as complex or otherwise. Within this, some complex adaptive systems (or complex responsive processes) will arise. Within these, some linear processes will arise. Each of our interactions may satisfy the definitions of all of these types of systems (and/or others) depending on the scale or depth at which we look.

 

The goal has not changed

 

If you have a goal, if you hope for something in particular that is not already present while you are hoping for it, even something as vague as "I hope that things will be better if I open myself to what's arising in my experience right now through my senses, without applying rational interpretation to it," then you are still acting from within a 'linear' way of thinking. That isn't bad; it's how we do things. It's how all organisms do things. Agents have intentions, and to have an intention is to intend something before it happens. To not intend something and have something good happen is not agency; it's luck. We all hope luck is on our side, but we should probably never depend on it as long as we have real preferences in a real world.

 

The root of agency is the ability to accomplish your intentions. Let us use complexity theory to open us to more effective ways of accomplishing our intentions, not to mystify us with ambiguity about our supposed inability to do so.