The idea that any system can be said to have its own ontology – its own distinctive way of carving up reality – means that its structure implicitly, as a mechanical consequence and without any conscious reflection, distinguishes between kinds of “input” and couples these to certain “outputs”. This coupling is an adaptation to circumstances that prevailed during the system’s evolutionary past, in which it interacted with other systems. Similarities and differences in how these stimuli disturbed the system, and could be offset by its own responses, were thus gradually engraved in its structure, which silently and obliviously came to embody categories of input and output.
Because these category demarcations contribute to the system’s stability and survival, they are useful. They result from patterns in the environment that have impinged on the holon, and therefore reflect historically reliable features of that environment. But there is nothing ethereal about the demarcations that guarantees they will remain reliable indefinitely into the future. One day, the system may wake up to discover that an environmental feature no longer behaves as predicted, and that the action it has coupled to this input is consequently maladaptive. From such feedback it may adjust its category boundaries, which as a result are fuzzy and fluid at the edges.
Sometimes, however, the patterns of the environment are so consistent and the categories so crisp as to mislead the holon into experiencing them as absolute and God-given. The brain connectivity of metaphysicists represents fuzzy categories that link sensations to actions, but the illusory absoluteness of these boundaries leads them to conclude that the categories are perfect representations of external reality. They therefore coined the concept of “realness”, and the discipline of “ontology” for discussions about whether an entity possesses this property or not. Unfortunately, we can only know of things because our brains engage in complicity, so “realness” is an unattainable abstraction. However, in terms of serving the system, some category boundaries may still be more appropriate than others. “Realness”, therefore, is actually “reliability” masquerading under a more pompous moniker.
This is made more concrete when we consider genetic evolution – the kind of complicity that is mediated by nucleic acids – though the same reasoning would apply to any system that is capable of adapting. In the phase space of evolution, as with complicity in general, there is no such thing as a global maximum. Species evolve only as far as selection pressures force them to, not towards phenotypes superior to all conceivable forms of competition (a species is only ever exposed to an infinitesimal subset of these). Eventually, to protect itself from competition, the species settles into a niche – a particular lifestyle – by evolving specialized structures for detecting environmental features and efficiently linking these to actions. An organism is not under pressure to represent all the features of reality, only those that threaten its current standing in relation to other species, which are themselves subject to change. Evolution is an inherently fickle game that keeps changing its own rules for what counts as a winning strategy, and the input–output couplings of an organism are the strategy that has sufficed for survival until the present.
Humans, for example, unlike bats, have never been under pressure to evolve echolocation and therefore cannot pick up the ultrasonic air-compression waves that bats navigate by. Humans have also done just fine despite the fact that the range of electromagnetic waves we are receptive to is truncated to a modest 390–700 nm, less than a ten-trillionth of the actual spectrum, while snakes and insects can see infrared and ultraviolet light, respectively. Nor have we ever found it a high priority to develop a sensitivity to the odor of butyric acid, which the tick uses to locate the sebaceous follicles of animals. However, because our ecological niche makes us dependent on subtle social interaction for our survival, we have an innate sensitivity to human faces, which makes us experts at recognizing people and their emotions. In short, our category boundaries are tuned to features of the environment that have adaptive significance for us and are relevant to our niche.
To capture the fact that different organisms – from ticks to metaphysicists – pick up different signals and assume these to constitute objective reality, the biologist Jakob von Uexküll introduced the concept of the “umwelt” – the animal’s subjective universe, beyond which it normally never seeks. It carries the rather humbling implication that the vast, unimagined majority of all there is goes undetected. We can no more conceive of a reality with echolocation than a blind person can conceive of a reality with light. Organisms may share an ecosystem, yet their theories about the world may be unrecognizable to one another. Not even apparently transcendental properties like space and time are immune. Simple organisms, for example, have no way of identifying distant objects – their spatial “awareness” is limited to what maps directly onto their own bodies. Complex animals, meanwhile, can model their environment in 3-D, and as a result perceive space as a volume. As for time, a human second corresponds to something like ten fly seconds: to a fly, a light bulb visibly flickers, and a human palm targeting it is like an approaching car it has ample time to calmly make way for. Similarly, the speed of thought may be dazzling to a human thinker, but to the neurons instantiating it, it is like a slow-moving bureaucracy.
As a more subtle example of how our perception of reality is intimately tied to how we interact with it, consider how we tend to experience the level of abstraction at which we interact the most – the “basic level categories” – as the most “real” one. Basic level categories tend to have a distinctive gestalt, and their members have the most features in common. In experiments, these are the categorizations we make most readily, and developmentally they enter the lexicon first. For example, “chair” feels like a more natural and real category than the more general “furniture” or the more specific “kitchen chair” because “chair” is the highest level at which we interact with something in roughly the same way; such categories are therefore the most salient ones in our umwelt.
Specifically, it makes sense for an organism’s umwelt to be dominated by objects and relations that furnish opportunities for performing actions that attain its niche-specific goals – things that have functional meaning. We may think of an umwelt as a computer interface, and of evolution as an interface designer, trying to make the widgets that enable user actions as discoverable as possible. This idea has been labelled “affordance” by the psychologist James J. Gibson. A staircase is an affordance for a human, and figures as a concept in human subjective reality, but it is useless to an insect, and is therefore not part of whatever conceptual repertoire an insect can be said to possess. Visual attention also seems to be partly affordance-governed: if you track the eye movements of a person engaged in some task, you will find that they focus on corners, knobs and handles – loci of potential manipulation. Perception is strongly action-oriented.
I’m sure you would agree that it would be awkwardly anthropocentric to grant entities that are visible or salient to humans any sort of privileged ontological status, and it would be equally unfair to dismiss things like “atoms” and “concepts” as unreal just because they are intangible and unobservable with the naked eye. Equally valid are the entities experienced by other umwelts, as well as by all potential umwelts that have not yet evolved and never will. However, humans are special in that they may extend their umwelts to include new patterns by means other than evolution. Humans possess language, which permits them to associate perceived patterns with arbitrary symbols. An effect of this dual coding is that patterns are more easily retrieved, contemplated and communicated, and by expanding our vocabulary, our sensory systems become sensitized to increasingly abstract features of our environment, making our categories more and more fine-grained. We may, for example, train ourselves to discriminate among fine wines by their earthy aromas, or cultivate our intellects with high-brow terms like “cubism”, “proxy war”, and “umwelt”.
The inanimate–animate continuum of hierarchy theory prompts us to be extremely generous in ascribing subjective realities to holons, so let us go as far as considering man-made devices to have umwelts. Our own umwelts are most dramatically expanded by the development of measurement technologies, which range from telescopes to litmus paper and psychometric scales. A measurement is a three-component interaction: one system (the physical variable) stimulates another system (the instrument), causing the latter to respond in a way that a human experimenter can interpret as a numerical value (an umwelt-category). The experimenter can then use this value for comparisons that reveal patterns between samples – patterns that are meaningful for manipulating the world, turning it into an affordance.
Just as our evolutionary past has honed our photoreceptors to reliably correlate their firing with specific light frequencies, the development of measuring instruments is a quasi-evolutionary process of painstakingly adjusting the instrument’s umwelt and response so that the same input quantity consistently produces the same output quantity across time and space. Such reliability-testing gives us confidence to assume that any change we find when comparing the measurements of two different samples reflects an actual difference. However, the instrument, though consistent, may link the same input quantity to a different output quantity if, for example, it is imperfectly calibrated for one of the samples, making such a comparison meaningless. It would not, for instance, be very meaningful for a color-sighted and a color-blind person to argue about whether or not a traffic light turned green. Such a measurement bias is known as “systematic error”, and it shows the subtlety involved in establishing that a data pattern correlates with an invisible pattern that we may have to incorporate into our umwelt.
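The effect of an imperfect calibration can be sketched with a toy simulation (the `measure` function, its bias and noise levels, and the sample values are all invented for illustration): two samples of the very same quantity appear to differ simply because the instrument carries a systematic offset for one of them.

```python
import random

def measure(true_value, bias=0.0, noise=0.05):
    """Hypothetical instrument: maps an input quantity to an output reading.
    `bias` models a calibration offset (systematic error);
    `noise` models random fluctuation around the reading."""
    return true_value + bias + random.gauss(0.0, noise)

random.seed(0)
same_quantity = 20.0  # both samples share the same "actual" value

# A well-calibrated instrument reads the first sample...
a = [measure(same_quantity) for _ in range(100)]
# ...but is miscalibrated for the second, reporting a difference
# that exists only in the instrument, not in the world.
b = [measure(same_quantity, bias=0.5) for _ in range(100)]

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(b) - mean(a), 2))  # close to 0.5: pure systematic error
```

Averaging more readings would not help here: random error shrinks with repetition, but a systematic offset survives any number of measurements.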
Because of the statistical origins of both our biological perceptual systems and their artefactual extensions, measurement may reduce our uncertainty about the realness of some pattern, but never entirely eliminate it. After systematic error sources have been removed, there will still be fluctuations (“random error”) caused by factors that the measuring instrument has not been under pressure to keep track of. Things like temperature, light, and quantum events may cause the obtained value to deviate from its “actual” value, giving this abstract construct a hovering, virtual existence. Except in trivially discrete cases like counting the rabbits in front of you, there is no such thing as a perfect measurement. To unearth systems in objective reality, statisticians must therefore collect a whole set of measurements of the same thing, average them so as to represent the set with a single score, and then calculate how much the set varies (a value known as the “standard deviation”) to estimate how much uncertainty this mean value glosses over. The secrets that reality whispers via its imprint on an umwelt are sometimes spoken in a voice so feathery that it drowns in surrounding noise, forcing holons to implicitly perform statistical calculations.
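The averaging procedure can be sketched as follows (the “actual” value and the noise level are arbitrary illustrative assumptions): repeated noisy readings are summarized by their mean, and the standard deviation reports how much scatter that single score glosses over.

```python
import math
import random

random.seed(1)
actual = 9.81  # the "actual" value our readings hover around

# Repeated measurements of the same quantity, corrupted by random error.
readings = [actual + random.gauss(0.0, 0.2) for _ in range(50)]

# The mean represents the whole set with a single score...
mean = sum(readings) / len(readings)

# ...and the (sample) standard deviation estimates how much the readings
# scatter around it, i.e. the uncertainty the mean glosses over.
std = math.sqrt(sum((x - mean) ** 2 for x in readings) / (len(readings) - 1))

print(f"mean = {mean:.3f}, std = {std:.3f}")
```

The mean lands near the underlying value while no single reading need equal it, which is exactly the “hovering, virtual existence” of the actual value described above.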
Another source of measurement uncertainty is that we can quantify continuous things like length and mass only to a finite level of precision. The physical artefacts stored in vaults that previously defined standard units are themselves uncertain and in flux. In fact, even when quantities are based on physical constants, our knowledge is limited to a certain number of significant figures. Although we could in principle express the height in our passport in terms of ångströms (one ten-billionth of a meter), as measured by an atomic force microscope, we usually find recording differences more fine-grained than a centimeter superfluous, and use a meterstick instead. By attending to the smallest differences, we risk missing the forest for the trees.
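A minimal sketch of how precision sets what can register as different at all, using Python’s built-in rounding as a stand-in for an instrument’s finite resolution (the two heights are invented):

```python
# Two heights that differ by a millimeter...
h1, h2 = 1.823, 1.824  # meters

# ...are distinguishable at millimeter grain, but collapse into the
# same category at centimeter grain: the recording resolution decides
# the smallest difference that can register at all.
at_mm = (round(h1, 3), round(h2, 3))
at_cm = (round(h1, 2), round(h2, 2))

print(at_mm, at_cm)  # (1.823, 1.824) (1.82, 1.82)
```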
This reflects fundamental trade-offs in how system boundaries filter physical differences: those of “grain” (spatiotemporal resolution) and “extent” (scope). With finer-grained categories, the smallest distinguishable entity becomes smaller, but we also become worse at detecting patterns at coarser scales. For example, by blurring an image, you reduce its pixel content, causing fine patterns to be lost but more abstract patterns to stand out. By zooming in on an image, you shift what becomes a focal entity and what becomes undifferentiated background – in other words, its extent. As a consequence, you can move across scales, from holon to holon. A system, such as an experimenter, may extend its umwelt to include a new external system only if it filters it with the right degree of acuity.
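The blurring trade-off can be sketched by coarse-graining a toy “image” (a hypothetical grid of brightness values): averaging 2×2 blocks destroys the fine checkerboard texture while the coarser pattern – a bright left half against a dark right half – becomes plainly visible.

```python
# A fine-grained grid: a high-contrast checkerboard on the left,
# a low-contrast one on the right.
fine = [
    [9, 1, 9, 1, 1, 0, 1, 0],
    [1, 9, 1, 9, 0, 1, 0, 1],
    [9, 1, 9, 1, 1, 0, 1, 0],
    [1, 9, 1, 9, 0, 1, 0, 1],
]

def coarsen(grid, k=2):
    """Replace each k-by-k block with its mean value (reduced grain)."""
    rows, cols = len(grid), len(grid[0])
    return [
        [
            sum(grid[r + i][c + j] for i in range(k) for j in range(k)) / k**2
            for c in range(0, cols, k)
        ]
        for r in range(0, rows, k)
    ]

for row in coarsen(fine):
    print(row)
# Every left-half cell averages to 5.0 and every right-half cell to 0.5:
# the checkerboards vanish, the two halves stand out.
```

Zooming, by contrast, would mean slicing out a sub-grid at full resolution – changing the extent rather than the grain.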
In conclusion, given any adaptive system, there will be self-organized categories of interactions whose boundaries depend on how they affect the system’s survival, and together they constitute an ontology. Perhaps the only metaphysical fact we can state with some semblance of confidence is that reality contains differences, some of which make a difference.