My gosh, everyone is now writing about Digital Twins! I should be delighted, you’d assume. Some article of mine from 20 years ago must have resonated with someone within Gartner…

A few examples:

Well, not really.

In the first place, I fear they have not really read any article of mine, or they would not have made a thinking error that is, in my opinion, fundamental. In the second place, I increasingly believe that the worst thing that can happen to a great idea is to be hyped by Gartner, or by anyone for that matter: everyone jumps on the bandwagon, no-one takes the time to really understand what the idea means, and above all, no-one has any heuristics for applying it successfully and properly in practice.

We’ve seen this happen with the Internet (or rather, the World Wide Web), Java, or for that matter, even the personal computer and its reincarnation in our smartphones. I won’t go into that, except to remind you to see Alan Kay explaining what the personal computer was supposed to be:

The inventor of the modern computer

Computing as a language extension should enable us to do something an order of magnitude greater than we already could with the human talent Alfred Korzybski described as “time binding”.

Now we seem to finally come to the realisation that to better understand Complex Systems (capitals intended) we need to re-create them. But that is exactly what humans have been doing since the invention of language: re-creating the world in our imaginations, in our fantasies, stories, dreams and hopes! We forgot the original vision behind the modern computer and now “rediscover” aspects of it. And in the rediscovery much of the essence is lost in translation. There is a gaping black hole in the whole concept of Digital Twins.

Laziness? Stupidity? Lack of time?

Gartner positions Digital Twins by associating them with another hyped concept: the Internet of Things (IoT). This is about the increasing computerisation of physical things: anything from coffee makers to buildings, all the way to entire cities. By computerising these “things” they can be represented in software, and this representation is what is referred to as a “Digital Twin”. By internalising them as so-called replicas of the actual things, we can connect them, reason about them and assess them, all in the “IoT”. The implication is that a growing percentage of the entire internet will consist of these representations of physical objects.

But that internal representation of real-world things is what language was about in the first place. Human language creates symbolic representations of things. We did this on our computing platforms too, but as a rule only in a very limited and crippled form: as data. Even John von Neumann would be horrified to see that, after 70 years, his article on the design of a programmable computer is still applied almost literally, and that no-one seems to have read the second part of the paper, in which he explains that it should not really be done this way at all; it only was because the hardware of his time was not advanced enough.

The use of computers as an enhancement device for the human faculty of language, a symbolic manipulator, requires us not to literally represent “real” things (let’s not go into a Wittgensteinian discussion on the definition), but to mirror them. In mirroring we change the objects. Yes, there are coffee machines. But when we actually mirror a coffee machine, the internal representation is deceptively morphed. We have a coffee machine, but more importantly: we have coffee. Coffee becomes alive. Living coffee (in the internal representation, of course) has a goal in life: to make itself, in the best way possible, for the human who wants to drink it. Every human has a distinct taste, so this is a learning process. And the mistake the Digital Twin advocates make is to put all this knowledge either in the coffee machine (imagine it having to remember all the different tastes of its users!) or in some kind of massive sphere that is mined for “knowledge”. Neither is scalable (please read “Scale” by Geoffrey West, or anything by Nora Bateson on complex systems, to understand why).
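To make the contrast concrete, here is a minimal sketch in Python. Every class and method name is invented purely for illustration, nothing here is a real library or anyone’s actual design: the point is only that the knowledge could live in the mirrored coffee itself, not in the machine and not in a central knowledge sphere.

```python
# Sketch: in the mirrored world, the coffee (not the machine) carries the goal
# and the learned preferences. All class and method names are invented.

class CoffeeMachine:
    """The passive physical thing: it just brews what it is asked to brew."""
    def brew(self, strength: float, milk: bool):
        print(f"brewing: strength={strength}, milk={milk}")


class Coffee:
    """The mirrored, 'living' coffee: its goal in life is to make itself,
    as well as possible, for the person who will drink it."""
    def __init__(self, drinker: str):
        self.drinker = drinker
        self.strength = 0.5   # learned over time from feedback
        self.milk = False

    def learn(self, liked_stronger: bool, wants_milk: bool):
        # Adjust the learned preferences after each cup of feedback.
        if liked_stronger:
            self.strength = min(1.0, self.strength + 0.1)
        self.milk = wants_milk

    def make_yourself(self, machine: CoffeeMachine):
        # The knowledge of taste lives here, in the mirrored coffee.
        machine.brew(self.strength, self.milk)


cup = Coffee("rob")
cup.learn(liked_stronger=True, wants_milk=False)
cup.make_yourself(CoffeeMachine())
```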

The concept of mirroring, when applied to Digital Twins, implies that we do not only mirror the physical things; they are also endowed with behaviour, with a goal in life. A door knows how to open and close itself in all kinds of different scenarios, such as giving access to the room for a meeting, or giving access in case of fire. This mirroring applies the pattern that I have called the Active Passive Pattern: objects that are active in the real world become passive in the internal software representation, and vice versa.
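A minimal sketch of that inversion, again in Python and again with invented names (PhysicalDoor, MirroredDoor; this is not a real API): the physical door stays passive, while its mirrored twin is the active party that knows its own reasons for opening.

```python
# Sketch of the Active Passive Pattern: the physical door is passive hardware,
# while its mirrored twin is the active party that decides when and why to act.
# Names (PhysicalDoor, MirroredDoor, ...) are illustrative, not a real API.

class PhysicalDoor:
    """Passive in the real world: it only does what it is told."""
    def unlock(self):
        print("door unlocked")

    def lock(self):
        print("door locked")


class MirroredDoor:
    """Active in the mirrored world: it knows its own reasons for opening."""
    def __init__(self, physical: PhysicalDoor):
        self.physical = physical

    def grant_access_for_meeting(self, attendee: str, invited: set):
        # One scenario: the twin decides who may enter, and why.
        if attendee in invited:
            self.physical.unlock()
        else:
            print(f"{attendee} is not invited; the door stays closed")

    def on_fire_alarm(self):
        # Another scenario, another goal: evacuation overrides access rules.
        self.physical.unlock()


door = MirroredDoor(PhysicalDoor())
door.grant_access_for_meeting("alice", invited={"alice", "bob"})
door.on_fire_alarm()
```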

Digital Twins should not only be about physical things that have been endowed with chips and sensors. They should be about everything: physical things, but also concepts that have meaning in the human world. A meeting is not a physical thing, but it should be part of the system: it will hold itself, invite attendees, ask the room to help, and so on.
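In the same hypothetical style, a non-physical concept such as a meeting could itself be an active object in the mirrored world: it holds itself, invites its attendees and asks a room for help. Again, every name below is invented for illustration.

```python
# Sketch: a meeting as an active twin. It is not a physical thing, yet it
# "holds itself": it invites its attendees and asks a room for help.
# All names here are invented for illustration.

class Room:
    def __init__(self, name: str):
        self.name = name

    def reserve(self, slot: str):
        print(f"{self.name} reserved for {slot}")


class Meeting:
    def __init__(self, topic: str, slot: str, attendees: list):
        self.topic = topic
        self.slot = slot
        self.attendees = attendees

    def hold(self, room: Room):
        room.reserve(self.slot)           # the meeting asks the room to help
        for person in self.attendees:     # and invites its own attendees
            print(f"inviting {person} to '{self.topic}' at {self.slot}")


Meeting("Digital Twins", "Tuesday 10:00", ["alice", "bob"]).hold(Room("Room 2"))
```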

Digital Twins should be the Mirrored World.

2 Comments

  1. Hi Rob,

    why claim that the recreation of complex systems in our imaginations started with language? Surely we can push this further back. First, linguistic category analysis (Nietzsche, Lakoff) shows us that we map from abstract categories (eg monetary value, desire) into pre-linguistic physical and perceptual categories (eg height, heat), and given that we had to solve the time-space cognition problem at the heart of predator-prey chase and evasion well before we gained language, it makes much more sense to assume that developing interior models of complex systems prefigured language. Since all cognition related to the exterior is interior modeling, arguably the evolution of this ability to imagine or reason about complex systems begins in the Cambrian explosion. I think Chomsky is wrong (if I understand him correctly) in asserting deep grammar as the primal cognitive facility. Modeling space-time and chunking the world into objects seem to me far more elemental facilities, developments that make evolutionary sense, unlike deep grammar.

    • Rob Vens

      Thanks Eliot, you are bold in pushing back the “internalisation” of the world that far! I must admit I have thought about this as well, but could not (yet…) find sufficient arguments to safely defend this. We could (and maybe should) have a deeper discussion on this. Let’s focus on my main argument: not only the symbolic representation, which certainly can be argued to date further back, but the symbolic manipulation, the creation of fiction, dreams. It is then not the “real” world (which, as you argued, is only a concept arising in our cognition), but more than that, different from it. Do you also think we can defend pushing that back to, say, the Cambrian explosion?
