Across all disciplines, at all levels, and throughout the world, health care is becoming more complex. Just 30 years ago the typical general practitioner in the United Kingdom practised from privately owned premises with a minimum of support staff, subscribed to a single journal, phoned up a specialist whenever he or she needed advice, and did around an hour's paperwork per week. The specialist worked in a hospital, focused explicitly on a particular system of the body, was undisputed leader of his or her “firm,” and generally left administration to the administrators. These individuals often worked long hours, but most of their problems could be described in biomedical terms and tackled using the knowledge and skills they had acquired at medical school.
You used to go to the doctor when you felt ill, to find out what was wrong with you and get some medicine that would make you better. These days you are as likely to be there because the doctor (or the nurse, the care coordinator, or even the computer) has sent for you. Your treatment will now be dictated by the evidence—but this may well be imprecise, equivocal, or conflicting. Your declared values and preferences may be used, formally or informally, in a shared management decision about your illness. The solution to your problem is unlikely to come in a bottle and may well involve a multidisciplinary team.
Not so long ago public health was the science of controlling infectious diseases by identifying the “cause” (an alien organism) and taking steps to remove or contain it. Today's epidemics have fuzzier boundaries (one is even known as “syndrome X”): they are the result of the interplay of genetic predisposition, environmental context, and lifestyle choices.
But the machine metaphor lets us down badly when no part of the equation is constant, independent, or predictable. The science of complex adaptive systems may provide metaphors that help us deal with these issues better. In this series of articles we shall explore new approaches to issues in clinical practice, organisational leadership, and education. In this introductory article we lay out some basic principles for understanding complex systems.
Complex adaptive systems: some basic concepts
Definitions and examples
A complex adaptive system is a collection of individual agents with freedom to act in ways that are not always totally predictable, and whose actions are interconnected so that one agent's actions change the context for other agents. Examples include the immune system, a colony of termites, the financial market, and just about any collection of humans (for example, a family, a committee, or a primary healthcare team).
Fuzzy, rather than rigid, boundaries
In mechanical systems boundaries are fixed and well defined; knowing what is and is not part of a car, for example, is no problem. Complex systems typically have fuzzy boundaries. Membership can change, and agents can simultaneously be members of several systems. This can complicate problem solving and lead to unexpected actions in response to change. For example, Dr Simon (box) cannot understand why her staff are so resistant to a small extension of surgery opening hours. Perhaps the apparently simple adjustment to working arrangements will play havoc with their lunchtime involvements in other social systems: meeting a child from school, attending a meeting or study class, or making contact with others who themselves have fixed lunch hours.
Agents' actions are based on internalised rules
In a complex adaptive system, agents respond to their environment by using internalised rule sets that drive action. In a biochemical system, the “rules” are a series of chemical reactions. At a human level, the rules can be expressed as instincts, constructs, and mental models. “Explore the patient's ideas, concerns, and expectations” is an example of an internalised rule that might drive a doctor's actions.
These internal rules need not be shared, explicit, or even logical when viewed by another agent. For example, another doctor might act according to the internalised rule “Patients come to the doctor for a scientific diagnosis.” In the example in the box, Dr Simon's partners and staff probably do not share her implicit behaviour rule: “Try to accommodate patients' desire to be seen outside standard surgery hours.”
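The idea that different agents act on different internalised rules can be sketched in a few lines of code. This is an illustrative toy, not anything from the article: the two rule functions and the patient record below are invented to show how the same stimulus can produce quite different actions under different internal rules.

```python
# Illustrative sketch: two "doctor" agents respond to the same patient
# presentation according to different internalised rules.

def rule_explore(patient):
    # Internalised rule: "Explore the patient's ideas, concerns, and expectations."
    return f"ask about ideas, concerns, and expectations regarding {patient['complaint']}"

def rule_diagnose(patient):
    # Internalised rule: "Patients come to the doctor for a scientific diagnosis."
    return f"order tests to diagnose {patient['complaint']}"

agents = {"Dr A": rule_explore, "Dr B": rule_diagnose}
patient = {"complaint": "fatigue"}

for name, rule in agents.items():
    print(f"{name}: {rule(patient)}")
```

Neither agent needs to know, share, or even approve of the other's rule; the rules live inside the agents, not in the system.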
The mental models and rules within which independent agents operate are not fixed. The fourth article in this series—on complexity and education—will explore this point in more detail.
The agents and the system are adaptive
Because the agents within it can change, a complex system can adapt its behaviour over time. At a biochemical level, adaptive micro-organisms frequently develop antibiotic resistance. At the level of human behaviour, Mr Henderson (see box) seems to have learnt that the surgery is somewhere he can come for a friendly chat. As this example illustrates, adaptation within the system can be for better or for worse, depending on whose point of view is being considered.
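The adaptive loop between Mr Henderson and the surgery can be caricatured as simple reinforcement. The class, the starting propensity of 0.2, and the 0.1 adjustment per visit are assumptions made purely for illustration, not a model proposed by the article:

```python
# Hedged sketch: an agent whose visiting behaviour adapts to feedback.
# Each friendly reception raises the tendency to return.

class Patient:
    def __init__(self):
        self.visit_propensity = 0.2  # probability-like tendency to attend

    def receive_feedback(self, friendly):
        # Simple reinforcement: friendly chats make return visits more likely,
        # cold receptions make them less likely.
        if friendly:
            self.visit_propensity = min(1.0, self.visit_propensity + 0.1)
        else:
            self.visit_propensity = max(0.0, self.visit_propensity - 0.1)

henderson = Patient()
for _ in range(5):  # five friendly consultations in a row
    henderson.receive_feedback(friendly=True)
print(round(henderson.visit_propensity, 1))  # -> 0.7
```

Whether this adaptation counts as better or worse depends, as the text notes, on whose point of view is taken: the same loop that comforts the patient crowds the appointment book.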
Systems are embedded within other systems and co-evolve
The evolution of one system influences and is influenced by that of other systems. Dr Simon and Mr Henderson have together evolved a system of behaviour; they have both contributed to the pattern of frequent visits we now observe. The health centre is also embedded within a locality and the wider society, and these also play a part in Mr Henderson's behaviour. A subsequent article in this series will explore how medical care for people with diabetes is embedded in wider social and other systems. Our efforts to improve the formal system of medical care can be aided or thwarted by these other more informal “shadow systems.” Since each agent and each system is nested within other systems, all evolving together and interacting, we cannot fully understand any of the agents or systems without reference to the others.
Tension and paradox are natural phenomena, not necessarily to be resolved
The fact that complex systems interact with other complex systems leads to tension and paradox that can never be fully resolved. In complex social systems, the seemingly opposing forces of competition and cooperation often work together in positive ways—fierce competition within an industry can improve the collective performance of all participants.
Many will sympathise with Dr Simon's uneasiness about evidence based medicine. There is an insoluble paradox between the need for consistent and evidence based standards of care and the unique predicament, context, priorities, and choices of the individual patient. Whereas conventional reductionist scientific thinking assumes that we shall eventually figure it all out and resolve all the unresolved issues, complexity theory is comfortable with and even values such inherent tension between different parts of the system.
Interaction leads to continually emerging, novel behaviour
The behaviour of a complex system emerges from the interaction among the agents. The observable outcomes are more than merely the sum of the parts—the properties of hydrogen and oxygen atoms cannot be simply combined to account for the noise or shimmer of a babbling brook. The next article in this series considers the application of complexity thinking in healthcare organisations; it will describe how the productive interaction of individuals can lead to novel approaches to issues. The inability to account for surprise, creativity, and emergent phenomena is the major shortcoming of reductionist thinking.
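Emergence of this kind can be demonstrated with a toy far simpler than a brook. In the sketch below (an invented illustration, not from the article), each cell on a ring follows one trivial local rule, copying the majority of itself and its two neighbours, yet the ring as a whole organises into contiguous blocks, a pattern that no single cell's rule describes:

```python
# Toy demonstration of emergence: local majority rule on a ring of cells.
import random

def majority_step(state):
    # Each cell adopts the majority value of itself and its two neighbours.
    n = len(state)
    return [
        1 if state[(i - 1) % n] + state[i] + state[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

def boundaries(state):
    # Count 0/1 transitions around the ring (fewer boundaries = larger blocks).
    return sum(state[i] != state[(i + 1) % len(state)] for i in range(len(state)))

random.seed(1)
initial = [random.randint(0, 1) for _ in range(30)]
state = initial
for _ in range(10):
    state = majority_step(state)
# Interaction smooths the initial noise into larger blocks: interior cells
# never flip, so the number of boundaries can only stay the same or fall.
```

The global "clumping" is a property of the interactions, not of any individual cell, which is the sense in which the outcome is more than the sum of the parts.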