
Sorting out complexity is, by nature, difficult. When we talk about complexity we tend to mean something beyond normal, regular, or average. Even agreeing on definitions is problematic.
A search of the “systemspedia” in the online library of the Institute for the Study of Coherence and Emergence found 362 entries related to complexity. (Note that this is different from the results of a general online search engine, such as Google, which may have little if any actual relevance.) The first reference for the first result cites Fernando Sáez Vacas’ 1990 definition, which provides this explanation for complexity:
The name we are giving to the condition of human beings, objects, phenomena, processes, concepts and feelings because:
- a) They are difficult to understand or explain;
- b) Their causes, effects or structure are unknown;
- c) They require either a great deal of information, time or energy to be described or managed or a huge coordinated effort on the part of the persons, equipment and machinery;
- d) They are subject to a variety of perceptions, interpretations, reactions and applications that are often contradictory or disconcerting;
- e) They produce effects that are simultaneously desirable and undesirable (or difficult to control);
- f) Their behaviour, depending on the case, may be unpredictable, relatively unpredictable, extremely variable or “counterintuitive.”
The primary reference to be used for this discussion is an article by A.J. Zellmer, T.F.H. Allen, and K. Kesseboehmer published in Ecological Complexity in 2006. In it, the authors echo what is described above regarding common assumptions about complexity: “The lay view of complexity identifies that complex systems have: many parts; many types of relationships between many types of part; emergence of new structure de novo; poor predictability; non-linear behavior to the point of chaos.”
One of the questions they pose rarely gets addressed in discussions about complexity. Essentially, is complexity a property of things in the world, or does it only describe a (possibly temporary) lack of human understanding about the things that we observe?
One of the more interesting points in the article is that Zellmer, Allen, and Kesseboehmer correct their own previous stance on the issue. As they explain,
- The authors here have asserted elsewhere… measurable characteristics of what makes something complex. We now recant that error. We contended that complex systems are deeply hierarchical, the deeper the hierarchy the more complex they are. Complexity in those terms displays many levels of constraint. The constraints, we said, are linked with explicit aggregation criteria. We continued by asserting that complexity invites links between large and small scale, and between fast and slow processes. Complex systems have explicit links between multiple types, or equivalency classes…. To deal with a complex system it must be assigned to an explicit type, and be given an unequivocal boundary. In all this, at a fundamental level, we were wrong. But now, in the light of our desire to define complexity as something normative, we see a striking reversal of what is simple as opposed to complex.
Not only is this a rare demonstration of humility by a group of researchers, it is also an indication of the difficulty of the topic, and of the fact that our understanding of it is still evolving. The revised description of complexity that they offer is based initially on the work of Robert Rosen, a theoretical biologist. (As a footnote, Rosen’s Anticipatory Systems was the first book published in the International Federation for Systems Research book series, and was out of print for many years. It was republished in 2012, and is now available again.)
Amongst his many other contributions, in 1985, Rosen explained the modeling relation: the relationship between models and the natural phenomena that they attempt to describe. He used the term natural system for the “real world” that we attempt to understand, and formal system for our models. Simply due to our human limitations, we can never fully know a natural system. We have only what our senses and instruments can provide. The formal systems (models) that we build are even more limited, because we include only those aspects that we deem important to our interests at the time. Models, then, are inherently incomplete, but can be useful to the degree that they help us to understand specific phenomena.
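Rosen’s modeling relation can be sketched in a few lines of code. This is an illustrative sketch only, not anything from Rosen or the article; the function names and the free-fall example are hypothetical, chosen simply to show how a formal system encodes only selected observables of a natural system, infers within the model, and decodes a claim back about the world:

```python
# Illustrative sketch of Rosen's modeling relation (the names and the
# free-fall example are hypothetical, not from Rosen or the article).

def encode(observation):
    """Encoding: keep only the aspects we deem important (initial
    height and elapsed time); everything else -- wind, mass, spin --
    is deliberately left out of the formal system."""
    return {"h0": observation["height_m"], "t": observation["time_s"]}

def infer(state, g=9.81):
    """Inference within the formal system: idealized free fall,
    ignoring air resistance the model never encoded."""
    return state["h0"] - 0.5 * g * state["t"] ** 2

def decode(predicted_height):
    """Decoding: translate the formal result back into a statement
    about the natural system."""
    return f"predicted height after fall: {predicted_height:.2f} m"

# The observation is richer than the model: "wind" and "mass_kg" are
# part of the natural system but never enter the formal system.
obs = {"height_m": 20.0, "time_s": 1.0, "wind": "gusty", "mass_kg": 0.5}
print(decode(infer(encode(obs))))
```

The sketch makes Rosen’s point mechanical: the model is incomplete by construction, yet still useful for the one phenomenon it was built to describe, and its predictions can always be compared back against what the natural system actually does.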
Zellmer, Allen, and Kesseboehmer apply these concepts using an example:
- To assert that the brain is complex because in reality it has billions of neurons is to miss the point, and displays a misunderstanding of how science works. Data collection in significant detail is at the center of the scientific endeavor. However, science does not attempt a full catalogue because, first, that is impossible to achieve, and second, if it were possible, the data would be as unmanageable as the material externality. The point of models is to make simplifying assumptions explicit.
They then go on to explain,
- Instead of going for complex materiality, we assert that complexity is normative, something that is identified by an agreement. Complexity is the ultimate semantic argument. If one has a paradigm, then the system is simple; perhaps complicated…, but still simple rather than complex. If one does not have a paradigm for it, then the system is complex.
In the view of Zellmer, Allen, and Kesseboehmer, then, complexity is resolved through agreed ways of understanding. Paradigms provide us with common ways of seeing the world (as imperfect as those might still be). The process of scientific investigation, and the explanations (sometimes formal models) which result, can be a part of developing that understanding. As Zellmer, Allen, and Kesseboehmer describe this:
- The beginning of our process is to have an experience, and the end product of one part of our protocol is a model. At first there may well not be recognition of what experience is or what it might represent. But then one gives the entity a name and our process is in motion. A name amounts to an assignment of the observable to an equivalence class. The class generalizes the particulars that have been seen, putting them in an intellectual context… Models invoke a linguistic coding, making the cycle of model building a linguistic, structure-based cycle.
They then make the bridge to the use of narrative:
- A narrative is not about the reality of a situation. Rather, the point of a story is to lay out in the open what the narrator suggests is important. Narratives are not about being objective, but are instead displays of subjectivity. Clearly in a narrative there is representation. There is also compression down to just what the narrator considers significant enough for it to be included in the story…. If modeling is representation, and analogy is compression, then a narrative is the outcome of the Rosen modeling relation… The beauty of a narrative is that it can rise above a model. While complexity is something that cannot be modeled, one can still tell a story about it directly. The ultimate device for addressing complexity is narrative.
By way of application, Zellmer, Allen, and Kesseboehmer provide two examples from ecology. One involves the restoration of salmon to the Columbia River, and the other the reintroduction of whooping cranes to the eastern regions of the U.S. In both cases there were multiple, competing models of the problems and differing views of solutions. The Columbia River project involved stakeholders such as ecologists, biologists, toxicologists, Native American groups, and wildlife managers. (Many others could have been included.) The most important point made by Zellmer, Allen, and Kesseboehmer, though, was that these individual models could not simply be rolled up into a larger meta-model. Their solution was to allow all of the models, collectively, to contribute to a narrative which might lead to a common understanding and an agreeable solution.
There are valuable lessons to be learned from this work. As we move further away from situations which have, or even value, rigorous models, though, we often find people operating from untested assumptions and unrecognized paradigms. In the worlds of organizations and larger social systems, moving towards narratives of common futures gets increasingly difficult. Those with vested interests in gaining or maintaining power often seek to control larger narratives rather than participating in sharing them. As suggested by Zellmer, Allen, and Kesseboehmer, though, small, competing ideas will probably never add up to larger answers. Each department in an organization working for its own benefit (using its own model) does not create a healthy corporation. Adding up all of the different, vested interests involved in education, or healthcare, or any number of other challenges facing countries around the world, does not result in solutions. Moving towards grand narratives describing the futures that we actually want to create might provide the guidance we need.
Read other posts by Gary Metcalf