Prof Derek K Hitchins
Systems science was born from a realization that the hard sciences — physics and chemistry in particular — did not address major parts of our everyday world. Physics, for example, states that disorder increases in any closed system (the famous Second Law of Thermodynamics). However, life itself is an example of order increasing, and so is civilization. In a similar vein, we find that DNA, the so-called blueprint of life, is identical in a dead being and a live one. Hence the idea behind Michael Crichton's Jurassic Park. How can it be that the very essence of life shows no chemical evidence of life?
Systems science sets out to be a more universal science, a "science of wholes," one that can look at the whole thing, the whole system, the whole world, even the Universe, with its counterintuitive behaviour, its chaos and its anomalies. In so doing, it embraces, not only the classic sciences, but the life sciences, too.
What is a System?
Systems science has to start with the idea of "system." A very good place to start. What is a system?
My favourite definition has to be:
" a dent in the fabric of entropy."
I also like the images created by:
"an oasis in the desert of disorder"
“a complex organized whole, a body of material or immaterial things” (Oxford Dictionary, 1946)
If, as physics tells us, disorder (entropy) is increasing everywhere, then a system is the existence of order, an oasis of order in a desert of disorder. So the essence of "system" is order. It comes in many forms: organization, architecture, society, economy, intellect, music, bodyform, and so on. And, since some systems are more organized than others, some things can be more "system" than others; i.e., there are degrees of "systemness," and the degree can be recognized, measured and calculated — if that is your thing...
A more useful definition of a system might be:
"an open set of complementary, interacting parts with properties, capabilities and behaviours emerging both from the parts and from their interactions to synthesize a unified whole."
The definition emphasizes that the parts of a system interact, that therefore there is a dynamic aspect, and that emergence can be traced to this dynamic interaction.
- The properties of a system are those features of a system that can be observed and/or detected from a distance: size, mass, volume, density, failure rate, color, shape, etc.
- The capabilities of a system define the limits to its functional abilities. They might include: capacity, top speed, rate of processing, g-limitation, ability to operate from different energy sources, etc., etc.
- The behaviours of a system define its actions and responses to stimuli. They might include: sensitivity, reaction, reaction time, counterintuitive response, anger, 'knee-jerk,' aggression, etc., and might be predictable or unpredictable.
From that definition, we also have to think about systems as being either closed or open. If a system is closed, then nothing can enter and nothing can leave. In the limit, that would mean that we don't even know that it is there. So, the systems that we can get to know about are open.
Open systems exchange energy, material and information with their environment. They can receive inflows, and they pass outflows. You are an open system. You take in energy, food, air and water. You pass — well, you know what you pass.
What enters one system must have come from the environment or another system, or both. What leaves a system goes to another system, the environment, or both. So, we can envisage networks of interconnected open systems, each receiving, processing, absorbing, emitting and passing on.
Getting complicated? Well, yes, but your body is a neat example, with all its organs, its central nervous system, cardiovascular system, lymphatic system, etc. And the manner in which the body, with its many interacting subsystems works, is analogous to the way that societies, states and nations work, with their energy providers, waste disposers, manufacturers, consumers, transporters, etc.
Interestingly, your whole body may be still, stable, and seemingly doing nothing, yet the internal parts are in a fever of activity. Blood pumps around your arteries and veins, washing your organs, feeding your brain. Your immune system works overtime, seeking out and overcoming antigens and invaders. And so on. The study of open systems, then, is of dynamics rather than statics, of interactions rather than just actions, of balance and flow rather than of control...
Emergence & Hierarchy
An Exploration of Emergence in Man-made Systems
Emergence is observed where properties, capabilities and behaviours of the whole may not be exclusively attributable to any one of the parts. A simple example of emergence would be cooperation between individuals such that, as a group, or team, they are able to achieve more than the sum of their separate, independent achievements. Similarly, none of the discrete modules in a modern radar system can detect, locate and track targets at a distance — these are capabilities of the whole, but not exclusively attributable to any one of the parts.
Being foci of order, systems are made up from interconnected parts, which are themselves systems. The interconnections between these so-called subsystems form patterns, or architecture; the patterns may be more or less ordered, so that the configuration entropy of a system may be calculated, giving a measure of the "degree of systemness." This is valuable, because it is an extensive/systemic parameter, i.e., it provides a measure of the whole system, not just particular aspects. It is also possible to reconfigure the parts so as to minimize the configuration entropy. In compliance with fundamental laws of physics, the lower the entropy, the more the internal energy within a system may be converted into useful external work...
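One illustrative way to put a number on the "degree of systemness" is to compute the Shannon entropy of a system's interconnection pattern. The sketch below is a minimal proxy for that idea, assuming we characterize the configuration by how many links each part carries; the configuration-entropy measure the text refers to may be defined differently:

```python
import math

def configuration_entropy(degrees):
    """Shannon entropy (bits) of the connection-count distribution.

    `degrees` lists how many interconnections each part has.
    A lower value suggests a more ordered (more 'system-like')
    configuration. This is an illustrative proxy only, not a
    definitive configuration-entropy formula.
    """
    total = sum(degrees)
    entropy = 0.0
    for d in degrees:
        p = d / total
        if p > 0:
            entropy -= p * math.log2(p)
    return entropy

# A hub-and-spoke layout (highly ordered) vs. an even mesh:
hub = [6, 1, 1, 1, 1, 1, 1]    # one part carries most links
mesh = [2, 2, 2, 2, 2, 2, 2]   # links spread evenly
print(configuration_entropy(hub) < configuration_entropy(mesh))  # True
```

On this measure the hub-and-spoke pattern scores lower entropy than the undifferentiated mesh — reconfiguring the parts changes the score, which is the sense in which configuration entropy can be minimized.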
A complex may consist of very many interconnected parts, so that variations in interconnection "density" may be envisaged. These are suggestive of systems within systems. A view of systems is that any system is made up of interconnected subsystems that are themselves systems, and that each subsystem is, in its turn, composed of sub-subsystems, and so on ad infinitum. This is a hierarchical view of systems, in which systems are "contained" within other systems. Note: hierarchy in this instance is a useful concept or viewpoint.
It is a tenet of this hierarchical viewpoint that each so-called “level” in hierarchy is determined by emergence; i.e., the complex of contained parts and interactions result in a set of emergent properties, capabilities and behaviours (EPCB) that entirely characterize the complex. The complex can therefore be completely described by its EPCB.
This viewpoint offers a powerful means of managing complexity. If a system can be described by its EPCB, then there may be no need to "look inside the container." What is inside might be an interacting team of people, a computer or robot, alternative types of technology, or whatever — it makes no difference provided the EPCB can be maintained.
Emergence was once viewed as somehow magical. A more enlightened view of emergence is that it derives from interactions between parts. This in turn implies that interactions, and hence dynamics, are fundamental to open systems. This is in stark contrast to the reductionist/engineer viewpoint; the process of reduction separates out the parts and "loses" their interconnections, so that dynamic, interactive aspects become lost.
Dynamics are evident in many real-world systems. The Solar System is clearly dynamic — if the planets did not orbit the Sun, they would fall into it. The Solar System is also varying with time, as matter falls into the Sun, as orbits decay almost imperceptibly, and so on.
Weather systems are also dynamic on a global scale, under the influence of the Earth's rotation and the energy from the Sun. Weather systems are volatile, so that weather features such as cyclones/tornados, depressions, ridges, etc., are transitory. They nonetheless exhibit characteristic EPCBs, as every would-be weather forecaster is aware.
Such systems are nonlinear. In other words, if we were to describe them using differential or difference equations, then each equation would contain one or more nonlinear terms — variables raised to powers, or multiplied together — e.g., as follows:
dy/dt = 3x² + 4 - 2y + z    (1)
dx/dt = 2x + 10 + 4y³ - z    (2)
dz/dt = y³ + z² + 10    (3)
Such equations — and these are remarkably simple examples — cannot be solved simply and uniquely. To understand why, consider a typical set of nonlinear dynamic equations, which we will present in English:
The rate of population growth for rabbits is determined by the numbers of breeding rabbits, the amount of food to feed both them and their offspring, the rate of rabbit predation by foxes, and the rate of human predation of foxes.
The rate of population growth of foxes is determined by the numbers of breeding foxes, the availability of rabbits as food within the area, and the rate of fox predation by humans.
You can see the issue — the population growth of foxes depends both on the fox population and on having rabbits to prey upon — but the more foxes, the fewer rabbits; and the fewer foxes, the more rabbits. The result is a set of nonlinear dynamic equations which, as many readers will know, produces oscillating populations of rabbits and foxes, with human predation sometimes resulting in counterintuitive population behaviour. And this is what is seen in real life, too.
So, there may be no unique solution to a set of nonlinear differential equations. Moreover, many of our usual methods of solving equations are invalid. F'rinstance, it would be invalid to add, or subtract, equations (1), (2) and (3), as the various variables are time-variant. So, the x in equation (1) need not be the same x as in equation (2).
So, how can we proceed? The answer has to be nonlinear dynamic modeling. Happily, there are modern computer tools that enable us to do this easily, but there are still many of us who are unhappy with our inability to solve nonlinear, dynamic problems.
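As an illustration of such modeling, the rabbit/fox relationship described above can be stepped forward numerically. The sketch below uses simple Euler integration of a Lotka-Volterra-style model; the coefficients are purely illustrative, and human predation of foxes is omitted for brevity:

```python
def simulate(rabbits, foxes, steps=2000, dt=0.01):
    """Euler integration of a Lotka-Volterra-style predator-prey model.

    Coefficients are illustrative only. Returns the two population
    histories, one value per time step.
    """
    r_hist, f_hist = [], []
    for _ in range(steps):
        # Rabbits: breed in proportion to numbers, are eaten by foxes.
        dr = 1.0 * rabbits - 0.1 * rabbits * foxes
        # Foxes: grow by eating rabbits, die off otherwise.
        df = 0.075 * rabbits * foxes - 1.5 * foxes
        rabbits += dr * dt
        foxes += df * dt
        r_hist.append(rabbits)
        f_hist.append(foxes)
    return r_hist, f_hist

r, f = simulate(rabbits=40.0, foxes=9.0)
# The populations oscillate rather than settling to fixed values.
print(max(r) > 1.5 * min(r))  # True
```

Plotting `r` and `f` against time shows the characteristic out-of-phase oscillation: the fox peak lags the rabbit peak, exactly the behaviour the English-language equations predict.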
Systems Science and Systems Thinking
Systems science could be very mathematical — as General Systems Theory, it certainly started out that way. However, practitioners were less than keen on a mathematical approach, and instead an approach based on models proved more accessible and amenable. Systems thinking, using such models and tools for organizing and simulating dynamic behaviour, became available and popular in the 1990s, meeting some resistance from engineers and mathematical purists as it went.
However, the real world is becoming progressively more complex, as predicted by the Second Law, and at the same time it is becoming, if anything, more nonlinear dynamic as it becomes more interwoven, intertwined, complicated and complex. So, the facilities of systems science to manage, even to conceal, complexity and to model the dynamic behaviour of complex, nonlinear dynamic systems are developing in response to a growing need. As the cracks appear in the Cartesian reductionist approach, systems science and systems thinking are coming forward to fill the gap. Now all we have to do is to overcome several hundred years of reductionist prejudice...
Systems Science as the Science of Wholes
The usual way to understand how and why a system behaves in a particular way is to see how the various parts work/operate/behave, and from that to speculate how and why the whole behaves. That does not always work: consider a flock of birds, a shoal of fish, or an army of soldier ants, and you will see the problem: looking at the way one part of these extended organisms works fails to provide much insight into how, or why, the whole behaves as it does...
Recognizing this limitation, systems science seeks to explain how and why wholes behave as they do without recourse to looking at the parts. A recent theory concerns self-organized criticality.
Whereas uncertainty in fully chaotic systems grows exponentially, for other systems uncertainty grows as a power law — these are said to be "weakly chaotic." Bak and Chen's theory applies to composite systems containing millions of elements interacting over short range. Earthquakes, resulting from shifts in the Earth's crust, fit the bill — that is, indeed, what Bak and Chen were investigating. So, too, may economies, wars, evolution, stock-exchange price fluctuations, distances between cars on a busy highway, meteorites entering the Earth's atmosphere, population growth in resource-limited environments, the Battle of Britain during 1940, and many, many more. Noise in resistors and other electronic components fits the pattern, too; there are many more small noise peaks than large ones. In electronics, this phenomenon is known as 1/f noise, indicating the inverse relationship of peak amplitude to frequency.
Quite unlike random events, such patterns indicate that current events are dependent on an accumulation of past events. The archetypal system displaying self-organized criticality concerns a pile of sand on to which is dropped more sand, grain at a time. The pile is formed on a small, circular, horizontal plate. At first, the pile just builds until a critical stage is reached. At this point, sand may slip. Some slips consist of a few grains. Occasionally, slips involve many grains. After each slip, the pile grows again. Sometimes the pile grows beyond the critical point—it goes super-critical—and a slip will occur to restore it towards, or even below, the critical point, towards which it will continue to tend. The graph shows how such behaviour expresses itself.
Self-organized criticality: graph of sandpile height variation with time. The height varies continually, above and below, but always returns to the same mean — the critical level. (STELLA simulation)
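The sandpile behaviour described above can be imitated with a toy model. The sketch below is a minimal version of the Bak-Tang-Wiesenfeld sandpile on a small grid — an illustrative simplification, not the model behind the STELLA graph — in which any cell holding four or more grains topples, shedding one grain to each of its neighbours:

```python
import random

def drop_and_topple(grid, n, x, y, threshold=4):
    """Drop one grain at (x, y) on an n-by-n sandpile; topple until stable.

    A cell holding `threshold` grains or more sheds one grain to each
    of its four neighbours (grains falling off the edge are lost).
    Returns the avalanche size: the number of topplings triggered.
    """
    grid[x][y] += 1
    topplings = 0
    unstable = [(x, y)]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < threshold:
            continue  # already relaxed by an earlier toppling
        grid[i][j] -= threshold
        topplings += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:
                grid[ni][nj] += 1
                if grid[ni][nj] >= threshold:
                    unstable.append((ni, nj))
        if grid[i][j] >= threshold:
            unstable.append((i, j))
    return topplings

random.seed(1)
n = 15
grid = [[0] * n for _ in range(n)]
sizes = [drop_and_topple(grid, n, random.randrange(n), random.randrange(n))
         for _ in range(20000)]
# Once the pile reaches criticality, many tiny avalanches and a few
# large ones occur -- the hallmark of self-organized criticality.
print(f"largest avalanche: {max(sizes)} topplings in {len(sizes)} drops")
```

Tallying the avalanche sizes shows far more small slips than large ones, and the pile's mean height settles around a critical level, just as the description above suggests.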
This model is very interesting for several reasons:
- A log-log plot of the size of slip (measured, say, in grains of sand) against the frequency with which that size occurs forms a power curve (as in the graph below, although that graph concerns itself with deaths in war...)
- The phenomenon is long-term stable—once a critical condition has been reached, all slips tend to restore towards the critical point.
The theory of self-organized criticality is a holistic theory; it is not dependent on the scaling-up of physical laws regarding the behaviour of two or three sand particles. Chaos and catastrophe theories are similarly holistic.
Lewis F. Richardson, a British meteorologist, collected data from 1820 to the Second World War (Richardson, 1960). In particular, he observed a relationship between the number of wars in which a given number of people were killed and the frequency of such wars. He found, not surprisingly perhaps, that the more frequent wars incurred fewer casualties and vice versa.
The figure below shows my graph of the number of deaths per war against the frequency of such wars, both scales being logarithmic. Data from Richardson's work are shown as small circles against a power law curve fit (a straight line on a log-log graph). The fit is remarkable, with a high negative correlation coefficient. Attempts to fit the data to an exponential curve were much less successful.
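The power-law-versus-exponential distinction can be tested exactly as the graph suggests: a power law y = a·x⁻ᵏ plots as a straight line on log-log axes, so an ordinary least-squares fit on the logged data recovers the exponent. A minimal sketch, using synthetic data rather than Richardson's:

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of log(y) = log(a) - k*log(x).

    Returns (a, k) such that y is approximately a * x**(-k).
    """
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    slope = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
             / sum((u - mx) ** 2 for u in lx))
    intercept = my - slope * mx
    return math.exp(intercept), -slope

# Synthetic deaths-per-war vs. frequency data following y = 500 * x**-1.5
xs = [1, 10, 100, 1000, 10000]
ys = [500 * x ** -1.5 for x in xs]
a, k = fit_power_law(xs, ys)
print(round(a), round(k, 2))  # 500 1.5
```

The quality of the straight-line fit on the logged data (e.g., its correlation coefficient) is what distinguishes a power law from an exponential, which would instead plot straight on log-linear axes.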
Conflict and Self-organized Criticality
What can we deduce from this closeness of fit? If Bak and Chen's theory of self-organized criticality holds, the implication is that conflict is an instance of weak chaos. Perhaps, like earthquakes, thermal noise, or the pile of sand, there is continual friction between groups of humans which releases occasionally in conflict of uncertain size.
Systems Science as the Science of Complexity
Systems Science is also referred to as the science of complexity — by which people appear to mean that it is concerned exclusively with complexity and complex systems. Systems science is certainly concerned with chaotic behaviour, and with weakly chaotic behaviour as above, but not necessarily exclusively. Systems can be complex without being chaotic: similarly, simple systems can behave chaotically... However, as the previous section suggested, non-linear behaviour is associated with complex systems. If systems science is about anything, then, it is about understanding complex systems behaviour. Systems engineering may then be viewed as applied systems science, i.e., as the conception, design and creation of complex systems as solutions to some problem... and complex system behaviour can stem from a variety of sources, including the close coupling of simple, linear subsystems...
Systems Engineering and Chaos
Chaos is a fascinating topic, largely avoided by engineers and systems engineers alike. However, it turns out that the real world around us is predominantly chaotic, so perhaps avoiding this ubiquitous “elephant in the room” may not be the best idea… the following video broaches the delicate subject, uh, delicately!
Systems science also becomes intimately concerned with emergence: properties of the whole that are not exclusively attributable to any of its parts. Emergence is associated with nonlinear systems and systems interactions, and so with complex systems. Complex systems, rather than being considered necessarily chaotic, might better be considered nonlinear systems.
The human body is an exemplary nonlinear system, in which the subsystems (pulmonary, cardiovascular, central nervous, immune, etc.), are all mutually interdependent. So, each subsystem depends for its existence on its interactions with the other subsystems: no subsystem can exist on its own, in isolation. Perturbations in any subsystem will be reflected in the others.
Looking around, there are very many such systems: most businesses/enterprises, societal systems, supply systems, ecologies, economies, etc., etc., fit the bill. All are nonlinear, and have to be viewed as a whole, rather than part-by-part. System design, where it is feasible, is concerned largely with bringing suitable subsystems together and orchestrating their interactions to create requisite emergent properties, capabilities and behaviours. The various subsystems must complement each other, and must contribute, too, to the purpose and objectives of the whole... Holism and synthesis, it seems, are at the heart of systems design and systems engineering…
Derek Hitchins, February 2016
References:
Bak, P. & Chen, K. (1991) 'Self-Organized Criticality', Scientific American, 264(1).
Richardson, L. F. (1960) The Statistics of Deadly Quarrels, Boxwood Press, Pittsburgh, PA.