Or: why models don’t need to be simpler than the thing they model.

Should they be simpler? This is what almost every definition or treatise on models and the art of modelling argues: models should be simpler than the thing they model. Otherwise, what’s the use? The darned thing is too complex, right? So we create a model, we factor out the parts we don’t need, and we are left with a model containing those aspects of the real thing we do need and nothing more, and the world seems manageable again.

Indeed, this scientific technique seems to prevail in many areas. You cannot read a textbook on physics without encountering statements like “we do not take into consideration aspects like friction”. It is this strategy of trying to create a virtual “ideal world” that makes many scientific endeavours doable, and I certainly acknowledge that this strategy has made the progress of modern science possible. But let’s not lose sight of the psychological aspects of this technique. It was first applied in the Renaissance, a period in which Western culture detached humanity from nature, resulting in “power over” nature. This was the birth of “modern” science.

Let’s zoom in on this a bit more. The real world is, as we know, infinitely complex. The simple event of hitting a tennis ball with a racket, and describing the resulting movement of the ball, requires many equations, and even then the result will only be an “approximation” of the real thing. The modelling language employed is calculus, or mathematics more generally. It can only approach, hopefully to a sufficient degree of accuracy, what really happens. This “real thing”, that which really happens, is called an “event” in a model that I often use to explain what is meant by models. This model is called the Structural Differential, conceived by Alfred Korzybski; pictured below is the copy that hangs on the wall in my study:

Structural Differential

The top parabola is the Event: infinite in its elements (the holes in the hardboard). Some of these elements contain pins, some of these pins have pieces of string attached to them, and some of these strings are connected to a pin in the part below the parabola, which is called the Object. The Object is the thing we can actually deal with: the mental replica of the real thing in our nervous system. Unfortunately, however, most people do not deal with these Objects but with an even more derived part, the element below. This is called a Label, and as you can see, even fewer elements of the real event, or sometimes none at all, are connected to it. Examples of labels are “women can’t drive” or “black people are less intelligent”.

But the downgrading does not stop here, as there can be labels of labels. Sometimes Korzybski attached the last label to the Event with a larger peg, to illustrate that the event is the highest abstraction possible at a given moment in time.

Fine, but what has this to do with lossless modelling?

Ah well, the question we should ask ourselves is: is it unavoidable to work with abstractions in the first place? My thesis is this: yes, for the human brain it is unavoidable, and it is something we should train ourselves in, so that we do it in the way that is least detrimental. But no, for computers it is not unavoidable. Computers have a property that makes them ideal for employing lossless modelling principles, and that is of course their vast memory.

But is the mere idea of lossless models not a chimera? The basic idea is to create something that is an exact copy of the real thing, but smaller. Is that possible? A zip file is an example of it. With mathematical techniques involving Markov chains it has been proven that it is possible to describe a thing in a lossless way, with the description being smaller than the thing described.
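As a minimal sketch of this idea (my own illustration, not a formal proof), consider a round trip through a standard compressor: the compressed form is smaller than the original, yet the original can be reconstructed exactly, bit for bit. Here Python’s standard-library zlib stands in for the “zip file”:

```python
# A lossless "model": smaller than the original, yet nothing is lost.
# Uses Python's standard-library zlib (DEFLATE) purely as an illustration.
import zlib

original = ("The simple event of hitting a tennis ball with a racket, "
            "and describing the resulting movement of the ball. ") * 50
data = original.encode("utf-8")

compressed = zlib.compress(data, 9)     # the "model"
restored = zlib.decompress(compressed)  # reconstructing the "real thing"

print(f"original:   {len(data)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"lossless:   {restored == data}")  # True: no aspect was abstracted away
```

Of course this only works because the original contains redundancy; a description can only be both lossless and smaller when there is structure to exploit, which is, roughly speaking, what the Markov-chain results mentioned above make precise.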

But wait, there is more.

Complex systems exhibit something called “emergent behaviour”; these emergent properties are a typical effect of complex systems. An example is described in my article The Importance of Metaphors. For an example of someone who criticises the concept, you can read http://anthony.liekens.net/index.php/Work/EmergentBehavior. The interesting consequence is that one could argue that a model which is itself a complex system contains “more” than the thing it represents or models. Since the thing being modelled can itself often be regarded as a complex system, the reasoning becomes a little involved.
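To make “emergent behaviour” a little more concrete, here is a toy sketch of my own (not taken from the article or the critique linked above): Conway’s Game of Life, in which a “glider” pattern travels across the grid even though the local rules say nothing about movement at all.

```python
# A minimal illustration of emergence: Conway's Game of Life.
# The rules are purely local, yet a "glider" pattern moves diagonally.
def step(live, width, height):
    """Compute one generation of Life on a bounded width x height grid."""
    def neighbours(x, y):
        return sum((nx, ny) in live
                   for nx in (x - 1, x, x + 1)
                   for ny in (y - 1, y, y + 1)
                   if (nx, ny) != (x, y))
    new_live = set()
    for x in range(width):
        for y in range(height):
            n = neighbours(x, y)
            # Birth on exactly 3 live neighbours, survival on 2 or 3.
            if n == 3 or (n == 2 and (x, y) in live):
                new_live.add((x, y))
    return new_live

# The classic glider; after 4 generations the same shape reappears,
# shifted one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells, 20, 20)
print(sorted(cells))  # the original glider, translated by (1, 1)
```

The movement of the glider is nowhere to be found in the rules themselves; it only appears at the level of the system as a whole, which is what is usually meant by an emergent property.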

A relatively new discipline in computer science, merging with Information Theory (as founded and defined by Claude Shannon), is the Philosophy of Information (PI). What I have described here as lossless models is exactly what Information Theory is concerned with. PI relates this to computer science, cybernetics and philosophy. It is especially the political and sociological-philosophical aspects of the invention of the modern computer that fascinate me.
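To hint at how Information Theory quantifies this (a small sketch of my own, under the simplifying assumption that characters are encoded independently of one another): Shannon entropy gives the minimum average number of bits per symbol that any lossless symbol-by-symbol description can achieve.

```python
# Empirical (zeroth-order) Shannon entropy of a text: the lower bound, in bits
# per character, for any lossless code that encodes characters independently.
import math
from collections import Counter

def entropy_bits_per_symbol(text: str) -> float:
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

sample = "the quick brown fox jumps over the lazy dog " * 100
h = entropy_bits_per_symbol(sample)
print(f"entropy: {h:.2f} bits/character, "
      f"versus 8 bits/character for the raw ASCII encoding")
```

Losslessness and smallness are therefore compatible exactly when the entropy is lower than the raw encoding, which is the formal counterpart of the zip-file example above.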
