It’s happened before and it’s happening again; I’m reading the right book(s) at the right time.
I recently finished Mark Burgess's Smart Spacetime. It was my third attempt—the first at least a year ago, with no explicit interest in or understanding of computing; the second several months later, as a night-shift read with a more developed interest in computing. I'm now onto A Treatise on Systems, the two-volume, updated version of Burgess's early-2000s book, Analytical Network and System Administration.
Smart Spacetime and A Treatise on Systems have been particularly appropriate for my current situation. First, I’m thinking about experimental design in the context of A/B testing—lightweight, standardised processes, Bayesian versus Frequentist philosophies, multi-variate testing, multi-armed bandits, quantifying uncertainty, confidence and statistical power. Second, I’m thinking about reporting procedures and methods for identifying relationships within and across datasets.
However, neither experimental design nor data interrogation is what caught my attention a few mornings ago. What stood out was this passage from the first volume of A Treatise on Systems:
“In a clear sense, science is about uncertainty management. Nearly all systems of interest (and every system involving humans) are very complex and it is impossible to describe them fully. Science’s principal strategy is therefore to simplify things to the point where it is possible to make some concrete characterisations about observations. We can only do this with a certain measure of uncertainty. To do the best job possible, we need to control those uncertainties.”
I was immediately reminded of a concept from Robert McKee’s Story: character versus characterisation.
“Characterisation is the sum of all observable qualities of a human being, everything knowable through careful scrutiny … This singular assemblage of traits is characterisation … but it is not character.

True character is revealed in the choice a human being makes under pressure…”
The characteristics of every human being are readily accessible and simple to describe. And generally, it’s considered possible to glimpse the character of a human being: simply observe the choices they make—I believe this works even in the absence of pressure. The idea of “how you do anything is how you do everything” applies here. It may even be possible to more thoroughly understand the character of a human being: simply observe the choices they make under significant duress. “Significant duress” can be both positive and negative in form, from physical, mental or moral challenge to unprecedented prosperity.
There’s an upper limit to our ability to either predict or know a human being’s character, however. I quoted Primo Levi in Strength, the afterword to Ss, as saying:
“No one can know how long and what torments his soul can resist before crumpling or breaking. Every human being has reserves of strength whose measure he does not know; they may be large, small, or nonexistent, but the only means of assessing them is severe adversity. Even without invoking the extreme case of the Sonderkommandos [the inmates responsible for removing corpses from the gas chambers post annihilation], we survivors commonly find that when we talk about our experience our listeners say, ‘In your place, I wouldn’t have lasted a day.’ This statement has no precise meaning; you are never in someone else’s place. Each individual is an object so complex that it is useless to try to predict behaviour, especially in extreme situations; we cannot even predict our own behaviour.”
I wonder, is there an analogous pattern in how we assess the character of complex systems? Complex systems are, after all, relatively simple to characterise. Mimicking Donella Meadows, we could characterise a complex system as a collection of:
- Components that
- Relate to one another and
- Act with a collective intent
We could further characterise a complex system by more explicitly—and more formally, perhaps using mathematical notation—describing those components, their relationships, the environment the system operates in, and the system’s intended purpose. But how do we assess the character of a complex system?
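As a toy sketch of what such an explicit characterisation might look like in practice, here's a minimal Python version. The structure and all the names are my own illustrative choices, not Meadows's or Burgess's formalism:

```python
from dataclasses import dataclass

# A toy characterisation of a complex system, loosely following the
# components / relationships / environment / purpose decomposition above.
@dataclass
class SystemCharacterisation:
    components: set[str]
    relationships: set[tuple[str, str]]  # directed (source, target) pairs
    environment: str
    purpose: str

    def is_consistent(self) -> bool:
        # Every relationship should connect components we actually listed.
        return all(a in self.components and b in self.components
                   for a, b in self.relationships)

# A hypothetical example: a thermostat loop.
thermostat = SystemCharacterisation(
    components={"sensor", "controller", "heater"},
    relationships={("sensor", "controller"), ("controller", "heater")},
    environment="a room losing heat to the outside",
    purpose="keep room temperature near a set point",
)
print(thermostat.is_consistent())  # True
```

Note what this captures and what it doesn't: the characterisation is easy to write down and check, but nothing in it tells you how the system behaves when stressed.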
In the human realm, character is never truly revealed; it is merely inferred from a given human’s actions. Additionally, the strength of this inference grows with:

- The strength of the stimuli a given human must respond to
- The number of stimuli a given human is exposed to over time
Similarly, the character of a complex system is comprehensible through the lens of its behaviour under different scenarios, from everyday operations to anomalous, edge-case behaviour. Seems straightforward, right?
Gaining insight into “character” through the lens of behaviour seemed sensible to me until I realised that what we think of as “character” is just an aggregation. “Character” is the curve we fit to the log of a human being’s or a complex system’s behaviour. This has clear advantages—such as the ability to narrativise past, present and future behaviour and reduce the perceived uncertainty in interactions. But it also has clear tradeoffs—for example, an aggregation smooths out supposedly anomalous data points, even though those same data points—by virtue of their outlier-ness—reveal a lot of salient information. Aggregation is an abstraction, and it’s leaky.
This attempt to anthropomorphise complex systems doesn’t lead very far, as you can see. But sometimes, things that are close by are more valuable than the things requiring a long and perilous journey to reach. Human beings are believed to have a quality we call “character”, which is inferred based on an interpretation of aggregated, observed behaviour. The character of a complex system is—if you squint just right—evaluated in exactly the same way.
Does that change how we should interact with other human beings, or the complex systems all around us? Shockingly, I don’t know.