3. Clearing the Fog of Cloudy Concepts
In my book, one of my core terms was “cloud objects”, and I think we can be more precise about that now. My point was that management concepts like “trust” or “team” are not clearly defined entities in the way that atoms are in physics. We mistakenly reify these abstract concepts (treat them as real things), which leads to poor measurement, weak predictions, and misleading certainty.
Many management concepts are better seen as “cloud objects” to emphasize their fuzziness. This follows from Wittgenstein's idea that a concept like “game” does not have a strict definition; rather, the concept refers to a collection of things that have a family resemblance.
A concept such as “trust” includes actions like sharing a secret, delegating a task, or giving someone the benefit of the doubt. These actions have a family resemblance but differ substantially. If we try to use them as a foundation for a science of management, we will run into trouble.
The Search for Ground Truth
It’s natural to think we just need to get to the bottom of things. Perhaps we can break down a broad category, such as trust, into finer subcategories until we get to something solid. Sadly, when we try to do this, we find that the subcategories themselves are cloud objects, and they usually overlap such that a given behaviour could belong to several different subcategories.
Some things are indeed less “cloudy” than others and more suitable for clear classification and measurement; however, in general, we must accept that management concepts sit upon layers and layers of abstraction, and searching for a ground truth that is similar to what we have with chemical elements is futile.
It also bears noting that in the world of biology, we can have quite tidy taxonomies of species that fit into a clear hierarchy. This happens because of the causality embedded in evolution. No such strict causality brings much order to the world of management concepts.
Why Abstractions are Essential for Intelligence
I have used the term “cloud objects”, but we could also speak of categories, concepts, or abstractions—all mean largely the same thing and we can’t escape them. A fundamental requirement of intelligence is to take the mass of things we observe and simplify them into useful categories by noticing patterns, similarities, and family resemblances. We take the specific observations and create an abstract category.
We need abstractions because they simplify the cognitive or computational task of modelling the world and making predictions. Abstraction can be seen as a kind of lossy compression: we compress a mass of individual data points into a concept, losing some information, but not so much that the abstraction loses its value.
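To make the compression metaphor concrete, here is a minimal sketch, assuming Python with NumPy and scikit-learn and purely invented data: a thousand individual observations are replaced by three cluster centroids (the “concepts”), and the reconstruction error measures how much information the abstraction throws away.

```python
# A minimal sketch of "abstraction as lossy compression": many individual
# observations are replaced by a handful of cluster centroids (the "concepts").
# The data here are synthetic and the feature meanings are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# 1,000 hypothetical observations, each described by 5 numeric features.
observations = rng.normal(size=(1000, 5))

# Compress them into 3 "concepts" (cluster centroids).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(observations)
concepts = kmeans.cluster_centers_

# Reconstruct each observation as the centroid of its cluster and measure
# the information lost in the compression (mean squared reconstruction error).
reconstruction = concepts[kmeans.labels_]
loss = np.mean((observations - reconstruction) ** 2)

print(f"compressed 1,000 observations into {len(concepts)} concepts; "
      f"mean squared reconstruction error: {loss:.3f}")
```

The specific numbers are meaningless; the point is only that forming a concept is a trade between simplicity and fidelity.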
How Machines Handle Abstraction
When we talk about how we go from a set of initial observations to an abstract category, we do a lot of handwaving. It’s hard for humans to explain in a rigorous way how we create concepts like “trust”, “culture”, or “leadership potential”. With machine learning, the process of creating abstractions is mathematical. We can look at how these systems compress information into a “latent space” where patterns are recognized and, apparently, concepts are formed.
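As one concrete illustration, here is a minimal sketch of an autoencoder, assuming PyTorch and entirely synthetic data (the ten input features stand in for whatever we can actually measure): the network is forced to squeeze its input through a two-dimensional latent space, and whatever regularities it finds end up encoded in those two coordinates.

```python
# A minimal sketch of how a machine forms abstractions: an autoencoder
# squeezes 10 observed features through a 2-dimensional "latent space"
# and learns to reconstruct the input. Data are purely synthetic.
import torch
import torch.nn as nn

torch.manual_seed(0)

# 500 hypothetical observations with 10 features each.
x = torch.randn(500, 10)

encoder = nn.Sequential(nn.Linear(10, 6), nn.ReLU(), nn.Linear(6, 2))
decoder = nn.Sequential(nn.Linear(2, 6), nn.ReLU(), nn.Linear(6, 10))
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(200):
    latent = encoder(x)               # compress into the latent space
    reconstruction = decoder(latent)  # expand back out
    loss = loss_fn(reconstruction, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Whatever structure the data has is now summarized by two latent
# coordinates; interpreting what those coordinates "mean" is the hard part.
print(encoder(x[:3]))
```

This is only a toy; real systems operate at vastly larger scale, but the structural idea of compression through a latent bottleneck is the same.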
While we can inspect every detail of what a machine learning algorithm is doing, interpreting what is going on in the latent space is far from clear. However, the field devoted to understanding what the machine is doing, known as “mechanistic interpretability”, is a science, not handwaving. It puts us on a productive path toward dealing more effectively with cloud objects. This is one of two reasons I’m hopeful about building a stronger foundation for the science of management. The other reason is best illustrated by goldfish.
Machine Learning and Goldfish Theory
If goldfish had theories of management, they would all be of the form “there are two types of X”: two types of organization, two types of team, and two approaches to strategy. This is because goldfish are not that smart; I’m suggesting they can only manage two dimensions. Humans are much smarter than goldfish; we easily handle five, six, or seven dimensions. Almost all popular management theories have around five to seven categories or steps.
Generally, the fewer categories or dimensions you have, the less accurate your predictions will be. If you are deciding which candidate to hire and you consider only two dimensions, your decisions won’t be as good as those of someone who considers seven.
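As a rough illustration of that claim, here is a minimal sketch, assuming scikit-learn and synthetic “hiring” data invented for the purpose: the same classifier is trained once on two of the available dimensions and once on all seven.

```python
# A minimal sketch of the claim that more dimensions generally support better
# predictions: the same classifier is trained on 2 features versus 7 features
# describing the same (synthetic, invented) candidates.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical "hiring" data: 7 informative features predicting success.
X, y = make_classification(n_samples=2000, n_features=7, n_informative=7,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n_dims in (2, 7):
    model = LogisticRegression().fit(X_train[:, :n_dims], y_train)
    acc = accuracy_score(y_test, model.predict(X_test[:, :n_dims]))
    print(f"{n_dims} dimensions considered -> accuracy {acc:.2f}")
```

On synthetic data like this, the seven-dimension model typically scores noticeably higher; the example is directional, not a proof about real hiring decisions.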
Management is so complex that even seven dimensions are not that predictive. As long as management theory is based on a low number of dimensions it will be of limited value. AI will be able to deal with vast numbers of dimensions and easily develop new abstractions. It will handle the complexity of management better than we can, although it may be hard to understand why it is doing what it does.
I admit I quietly slipped from the word “abstractions” to “dimensions”. Dimensions in the world of management might be “the degree of centralization” or “the degree of task interdependence”; any abstraction where we can distinguish high from low can be used as a dimension. Centralization and task interdependence are, of course, abstractions themselves. In management, we bump into cloud objects no matter where we turn.
Advice for Scientists
For academics creating a science of management, the mathematics of machine learning may provide a kind of microscope for investigating concepts more concretely. Dive into these learning mechanisms to see what they teach us about abstractions and compression. Be the person known for muttering, “The secret lies in the latent space.” Not only may we come to better understand the cloud objects we discuss in management, but we may also discover new abstractions that are more precise and powerful.
Advice for Engineers & Practitioners
Practical managers who were never exposed to the rigour of science and engineering probably never once fretted about cloud objects. They knew the world seemed fuzzy, and they didn’t particularly care. For those unfortunate managers trained in science or engineering who expected to find a solid foundation in management theory, the advice is to forget those dreams. Like Joni Mitchell, you’ll find it’s only cloud illusions you recall; you really can’t know clouds at all.
Perhaps the key is discovering the abstractions that happen to be useful to your situation at a particular moment. Have you ever seen a wise, experienced manager look at a situation and say something completely out of the blue, such as “There are two approaches, the looking-inward approach and the looking-outward approach”? They had never used that concept before; however, the wise manager was able to invent it on the fly. An AI may be a useful partner in inventing fresh management frameworks that help make sense of a unique situation. So, with folk wisdom, we had AI searching through existing frameworks to provide insight. With this new “abstraction first” approach, we will ask AIs to invent a new framework tuned to the situation, and this is beginning to sound like Taleb’s fractal localism.