Hypotheses on Systems and Complexity

A famous John Gall quote from Systemantics became known as Gall's Law. The law states:

A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.

I've always felt the truth of this idea. Gall's Law inspired me to think about the evolution of complexity in systems from different perspectives. I've developed five hypotheses in this area:

  1. A simple system that works (and is maintained) will inevitably grow into a complex system.
  2. The tendency of the Universal System is a continual increase in complexity.
  3. A simple system must increase in complexity or it is doomed to obsolescence and/or failure.
  4. A system's complexity level starts at the complexity of the local system/environment in which it participates.
  5. A working system will eventually collapse due to unmanageable complexity.

I call these ideas "hypotheses" because they are born of late-night thoughts while watching my newborn child. They have not been put through sufficient research or testing for me to call them "axioms", "laws", or "rules of thumb". These ideas may already exist in the systems canon, but I have not yet encountered them.

The Hypotheses in Detail

Let's look at each of these hypotheses in turn, then we can discuss their implications for our projects.

Hypothesis 1: Simple Systems Become Complex

My first hypothesis is fully stated as follows:

A simple system that works (and is maintained) will inevitably grow into a complex system.

This is a restatement of Gall's Law from a different perspective. I believe that a working simple system is destined to become more complex.

This hypothesis is opposed to another systems maxim (quoted from Of Men and Laws):

A working system (and by happy accident, systems sometimes work) should be left alone.

Unfortunately, this recommendation is untenable for two reasons:

  1. Human beings are not disciplined enough to leave a working system alone.
  2. If a working system is not maintained, it will inevitably become obsolete according to Hypothesis 3.

Humans are the ultimate tinkerers. We are never satisfied with the status quo. We have the tendency to expand or modify a system's features and behaviors once we consider it to be "working" (and even if it's not working). Our working systems are destined to increase in complexity thanks to our endless hunger.

Hypothesis 2: Universal Complexity is Always Increasing

My second hypothesis is fully stated as follows:

The tendency of the Universal System is a continual increase in complexity.

At its core, I believe that Hypothesis 2 is simply a restatement of the Second Law of Thermodynamics, but I include it for use with other hypotheses below.

The Second Law of Thermodynamics states that the total entropy of an isolated system can never decrease over time. Thanks to the Second Law of Thermodynamics, all processes in the universe trigger an irreversible increase in the total entropy of a system and its surroundings.

Rudolf Clausius provides us with another perspective on the Second Law of Thermodynamics:

[...] we may express in the following manner the fundamental laws of the universe which correspond to the two fundamental theorems of the mechanical theory of heat.

  1. The energy of the universe is constant.
  2. The entropy of the universe tends to a maximum.

I have an inkling that complexity and entropy are closely related concepts, if not actually the same. As such, I assume that the complexity of the Universal System will increase over time.

The reason that I think complexity increases over time is that I can observe this hypothesis in other sciences and directly in the world around me:

  • After the big bang, simple hydrogen coalesced into stars (and planets and solar systems and galaxies), forming increasingly complex elements as time progressed
  • Life progressed from simple single-celled organisms to complex networked species consisting of hundreds of sub-systems
  • Giving birth has progressed from a natural, body-driven affair to a complex ritual carried out by a large team of experts, at great cost, in specialized locations (i.e., hospitals)
  • Finance has progressed from exchanging metal coins and shells to a complex, automated, digitized, international system of rules and cooperating systems

Corollary: Complexity must be preserved

The idea exists that complexity can be reduced:

An evolving system increases its complexity unless work is done to reduce it.
-- Meir Lehman

Or:

Ongoing development is the main source of program growth, but programs are also entropic. As they age, they tend to become more cluttered. They get larger and more complicated unless pressure is applied to make them simpler.
-- Jerry Fitzpatrick

Because of the Second Law of Thermodynamics, we cannot reverse complexity. We are stuck with the existing environment, requirements, behaviors, expectations, customers, resources, etc.

Energy must be invested to perform any "simplification" work, which means that there is a complexity-entropy increase in some part of the system. Perhaps you successfully "simplified" your product's hardware design so that it's easier to assemble in the factory. What other sub-systems saw increased complexity as a result: supply chain, tooling design, engineering effort, mechanical design, repairability?

Complexity must be preserved - we only move it around within the system.

Hypothesis 3: Simple Systems Must Evolve

Hypotheses 1 and 2 combine into a third hypothesis:

A simple system must increase in complexity or it is doomed to obsolescence and/or failure.

The systems we create are not isolated; they are always interconnected with other systems. And as one of John Gall's "Fundamental Postulates of General Systemantics" states, "Everything is part of a larger system."

The Universal System is always increasing in complexity-entropy, as are all subsystems by extension. Because of the ceaseless march toward increased complexity, systems are forced to adapt to changes in the complexity of the surrounding systems and environment. Any system which does not evolve will eventually be unable to cope with the new level of complexity and will implode.

The idea of "code rot" illustrates this hypothesis:

Software rot, also known as code rot, bit rot, software erosion, software decay or software entropy is either a slow deterioration of software performance over time or its diminishing responsiveness that will eventually lead to software becoming faulty, unusable, or otherwise called "legacy" and in need of upgrade. This is not a physical phenomenon: the software does not actually decay, but rather suffers from a lack of being responsive and updated with respect to the changing environment in which it resides.

I've seen it happen enough on my own personal projects. You can take a working, error-free software project, put it into storage, pull it out years later, and find that it no longer compiles and runs. This could be for any number of reasons: the language changed, the compiler is no longer available, the libraries or tooling needed to build and use the software are no longer available, the underlying processor architectures have changed, etc.

Our "simple" systems will never truly remain so. They must be continually updated to remain relevant.

Hypothesis 4: "Simple" is Determined by Local Complexity

Hypothesis 2 drives the fourth hypothesis:

A system's complexity level starts at the complexity of the local system/environment in which it participates.

Stated in another way:

A system cannot have lower complexity than the local system in which it will participate.

Hypothesis 2 indicates that a local (and universal) lower bound for simplicity exists. Stated another way, your system has to play by the rules of other systems it interacts with. The more external systems your system must interact with, the more complex the starting point.

We can see this by looking at the world around us. Consider payment processing as an example. You can't start over with a "simple" payment application: the global system is too complex and has too many specific requirements. There are banking regulations, credit card regulations, security protocols, communication protocols, authentication protocols, etc. Your payment processor must work with the existing banking ecosystem.

Now, you could ignore these requirements and create a new payment system altogether (e.g., Bitcoin), but then you are not actually participating in the same local system (international banking). Even so, the Universal System's complexity is higher than your system's local complexity, and the other players know the game. You can skip the authentication requirements or other onerous burdens, but external actors can still take advantage of your system (e.g., Bitcoin thefts, price manipulation, lost keys leading to unclaimable money).

Once complexity has developed, we are stuck with it. We can never return to simplicity. I can imagine a time when the Universal System's complexity level will be so high that humans will no longer have the capacity to create or manage any systems.

Hypothesis 5: Working Systems Eventually Collapse

Hypothesis 5 is fully stated as follows:

A working system will eventually collapse due to unmanageable complexity.

Complexity is always increasing, and there is nothing we can do to stop it. There are two complexity-related failure modes for our system:

  1. Our system becomes so complex that we can no longer maintain it (there are no humans who can understand and master the system)
  2. Our system cannot adapt fast enough to keep up with the local/universal system's increases in complexity

While we cannot forever prevent the collapse of our system, we can impact the timeframe through system design and complexity management efforts. We can strive to reduce the rate of complexity increase to a minimal amount. However, as the complexity of the system increases, the effort required to sustain the system also increases. As time goes on, our systems require more energy to be spent on documentation, hiring, training, refactoring, and maintenance.

We can see systems all around us which become too complex to truly understand (e.g., the stock market). Unfortunately, Western governments seem to be reaching a complexity breaking point, as they have become so complex they can't enact policy. To quote Matt Levine's Money Stuff newsletter:

What if your model is that democratic political governance has just stopped working—not because you disagree with the particular policies that particular elected governments are carrying out, but because you have started to notice that elected governments in large developed nations are increasingly unable to carry out any policies at all?

Perhaps unmanageable complexity doomed the collapsed civilizations that preceded us. Given that thought, what is the human race's limit on complexity management? We've certainly extended our ability to handle complexity through the development of computers and algorithms, but there will come a time when the complexity is too much for us to handle.

Harnessing these ideas

These five hypotheses are one master hypothesis broken into different facets which we can analyze. The overall hypothesis is:

The Second Law of Thermodynamics tells us that our systems are predestined to increase in complexity until they fail, become too complex to manage, or are made obsolete. We can manage the rate of increase of complexity, but never reverse it.

The hypotheses described herein do not contradict the idea that our systems should be kept as simple as possible. Simplicity is still an essential goal. However, we must realize that the increase in complexity is inevitable and irreversible. We must actively work to prevent complexity from increasing faster than we can manage it.

Here are some key implications of these ideas for system builders:

  • If your system isn’t continually evolving and increasing in complexity, it will collapse
  • You can extend the lifetime of your system by investing energy to manage system complexity
  • You can extend the lifetime of your system by continually introducing and developing new acolytes who understand and can maintain your system
    • This enables collective management of complexity and transfer of knowledge about the system
  • You can extend the lifetime of your system by giving others the keys to understanding your system (documentation, training)
    • This enables others to come to terms with the complexity of your system
  • You can never return to "simplicity" - don't consider a "total rewrite" effort unless you are prepared to scrap the entire system and begin again
  • These hypotheses speak to why documentation becomes such a large burden
    • Documentation becomes part of the overall system's complexity, requiring a continual increase in resources devoted to managing it

Developing a skillset in Complexity Management is essential for system designers and maintainers.


Beyond Continual Improvement

Anyone building a product, leading a team, or running an organization needs to listen to a talk by Dr. Ackoff titled "Beyond Continual Improvement". Dr. Ackoff touches on continual improvement, concepts of quality, and implications of systems thinking that are commonly overlooked. As long as we ignore the points described in this lecture, our efforts at improving our systems and organizations are doomed to failure.

All you need is fifteen minutes to listen to this lecture - a small price to pay for some of Dr. Ackoff's wisdom. I've also shared my personal notes from listening to the lecture below.

My Notes

Here are the notes I collected while listening to the lecture:

  • Continual improvement is considered A Good Thing, but quality and improvement programs are often considered failures by the managers who introduced them
  • Definition of quality: meeting or exceeding the expectations of the customer or consumer
    • A definition we should all adopt!
    • If customer expectations are not met, the program is a failure, no matter what the expert thinks
  • Primary reason for the failures [his hypothesis]: they have not been embedded in systems thinking
  • What is a system?
    • System: a whole consisting of parts, each of which can affect the overall system's behavior or properties
    • The parts are interdependent - one part of the system needs another part in order to produce an effect
    • No part of the system can independently affect the system
  • Systems implications that people overlook:
    • The defining properties of a system are properties of the whole which none of its parts have
    • When a system is taken apart it loses its essential properties
    • A system is not the sum of the behavior of its parts, it is a product of their interactions
    • The performance of a system depends on how the parts fit, not how they act taken separately
    • When you get rid of something you don’t want (remove a defect), you are not guaranteed to have it replaced with what you do want
  • Conclusions that Dr. Ackoff draws from these implications:
    • If you are running an improvement program which is looking at improvements of the parts taken separately, you can be certain that the performance of the whole will not be improved
    • Finding and removing defects is not a way to improve the overall quality or performance of a system
    • Determining what you want means you need to take the system and redesign it, not for the future, but for right now
    • To do that, answer the question: What would you do right now if you could do whatever you wanted to?
    • If you don’t know what you would do if you could do whatever you wanted, how can you know what you can do under constraints?
    • People never ask themselves this question
  • Basic principle: an improvement program must be directed at what you want, not at what you don’t want
  • Architects understand systems thinking
    • Client comes in, gives them a list of all the properties of the house (2 car garage, made out of redwood, one story, huge kitchen, etc.)
    • The architect creates an overall design for the house, then he creates designs for the rooms to fit into the house - whole before parts
  • Continuous improvement isn’t nearly as important as discontinuous improvement
    • Creativity is a discontinuity
    • A creative act breaks with the chain that has come before it.
  • You never become a leader by continuous improvement
    • You only become a leader by leapfrogging those that are ahead of you
  • We have to have the right idea of quality
    • There’s a difference between doing things right and doing the right things
    • Quality ought to contain the notion of value, not merely efficiency
  • Closing line:
    • Until managers take into account the systemic nature of their organizations, most of their efforts to improve their performance are doomed to failure


Converting between timespec & std::chrono

I was working with some POSIX APIs recently and needed to supply a timespec value. I primarily work with std::chrono types in C++ and was surprised that there were no (obvious) existing conversion methods. Below are a few utility functions that I came up with to handle common conversions.

Table of Contents

  1. timespec Refresher
  2. Conversion Functions
  3. Bonus: timeval conversions
  4. Further Reading

timespec Refresher

As a quick refresher, timespec is a type defined in the ctime header (aka time.h). The timespec type can be used to store either a time interval or absolute time. The type is a struct with two fields:

struct timespec {
   time_t   tv_sec;  // whole seconds
   long     tv_nsec; // nanoseconds [0, 999999999]
};

The tv_sec field holds either a general count of seconds (for an interval) or the seconds elapsed since the Unix epoch (for an absolute time), while tv_nsec holds the remaining count of nanoseconds.
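
For illustration, here is a trivial sketch (values chosen arbitrarily) of a timespec storing a 1.5 second interval:

#include <ctime>

// 1.5 seconds as a timespec: one whole second in tv_sec plus
// 500,000,000 nanoseconds in tv_nsec
constexpr timespec interval{1, 500000000};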

Conversion Functions

A timespec can represent either an absolute time or time interval. With std::chrono, these are two separate concepts: std::chrono::duration represents an interval, while std::chrono::time_point represents an absolute time.

We need four functions to convert between these two C++ time concepts and timespec:

  1. timespec to std::chrono::duration
  2. std::chrono::duration to timespec
  3. timespec to std::chrono::time_point
  4. std::chrono::time_point to timespec

timespec to std::chrono::duration

Converting from a timespec to a std::chrono::duration (nanoseconds below) is straightforward: we convert tv_sec to std::chrono::seconds and tv_nsec to std::chrono::nanoseconds, and then cast the result to our target return type, std::chrono::nanoseconds.

#include <chrono> // std::chrono durations
#include <ctime>  // timespec

using namespace std::chrono; // for example brevity

constexpr nanoseconds timespecToDuration(timespec ts)
{
    // Combine the whole-second and nanosecond fields into a single duration
    auto duration = seconds{ts.tv_sec} + nanoseconds{ts.tv_nsec};

    return duration_cast<nanoseconds>(duration);
}

std::chrono::duration to timespec

Converting from std::chrono::duration to timespec is a two-step process. First, we capture the portion of the duration that can be represented as a whole number of seconds. We then subtract this count from the total duration to get the remaining nanosecond count.

Once we have the two components, we can create our timespec value.

using namespace std::chrono; // for example brevity

constexpr timespec durationToTimespec(nanoseconds dur)
{
    // Extract the whole-second portion, leaving the nanosecond remainder in dur
    auto secs = duration_cast<seconds>(dur);
    dur -= secs;

    return timespec{secs.count(), dur.count()};
}

timespec to std::chrono::time_point

For the std::chrono::time_point examples, I've used the system_clock as the reference clock.

To convert a timespec value to std::chrono::time_point, we first use our timespecToDuration() function to get a std::chrono::duration. We then use a duration_cast to convert std::chrono::duration to our reference clock duration (system_clock::duration).

We can then create a std::chrono::time_point value from our std::chrono::system_clock::duration.

using namespace std::chrono; // for example brevity

constexpr time_point<system_clock, nanoseconds>
    timespecToTimePoint(timespec ts)
{
    // Convert to a duration, then anchor it to the system_clock epoch
    return time_point<system_clock, nanoseconds>{
        duration_cast<system_clock::duration>(timespecToDuration(ts))};
}

std::chrono::time_point to timespec

To convert from a std::chrono::time_point to timespec, we take a similar approach to the std::chrono::duration conversion.

First, we capture the portion of the duration that can be represented as a whole number of seconds. We then subtract this count from the total duration to get the remaining nanosecond count.

Once we have the two components, we can create our timespec value.

using namespace std::chrono; // for example brevity

constexpr timespec timepointToTimespec(
    time_point<system_clock, nanoseconds> tp)
{
    // Truncate to whole seconds, then compute the leftover nanoseconds
    auto secs = time_point_cast<seconds>(tp);
    auto ns = time_point_cast<nanoseconds>(tp) -
              time_point_cast<nanoseconds>(secs);

    return timespec{secs.time_since_epoch().count(), ns.count()};
}
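
To tie the conversions together, here is a minimal usage sketch. It assumes a POSIX system (for clock_gettime() and CLOCK_REALTIME), and the variable names are purely illustrative:

#include <chrono>
#include <cstdio>
#include <ctime>

int main()
{
    // Ask POSIX for the current wall-clock time as a timespec
    timespec now_ts{};
    clock_gettime(CLOCK_REALTIME, &now_ts);

    // Convert to a std::chrono::time_point for chrono-based code,
    // then back to a timespec for use with POSIX APIs
    auto now_tp = timespecToTimePoint(now_ts);
    timespec round_trip = timepointToTimespec(now_tp);

    printf("seconds: %lld, nanoseconds: %ld\n",
           static_cast<long long>(round_trip.tv_sec), round_trip.tv_nsec);

    return 0;
}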

Bonus: timeval conversions

Another common time structure on POSIX systems is timeval, which is defined in the sys/time.h header. This type is very similar to timespec:

struct timeval
{
    time_t         tv_sec;  // whole seconds
    suseconds_t    tv_usec; // microseconds
};

We can convert between timeval and std::chrono types in the same manner shown above, except std::chrono::microseconds is used in place of std::chrono::nanoseconds.

using namespace std::chrono; // for example brevity

constexpr microseconds timevalToDuration(timeval tv)
{
    // Combine the whole-second and microsecond fields into a single duration
    auto duration = seconds{tv.tv_sec} + microseconds{tv.tv_usec};

    return duration_cast<microseconds>(duration);
}
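
As a sketch of the reverse direction (the durationToTimeval() name is my own, mirroring durationToTimespec() above):

using namespace std::chrono; // for example brevity

constexpr timeval durationToTimeval(microseconds dur)
{
    // Extract the whole-second portion, leaving the microsecond remainder
    auto secs = duration_cast<seconds>(dur);
    dur -= secs;

    // Casts guard against narrowing into the platform-defined field types
    return timeval{static_cast<time_t>(secs.count()),
                   static_cast<suseconds_t>(dur.count())};
}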
