The Live Domain

[Image: Conway’s Game of Life]

Prompted by an old discussion on Reddit (link expired), kicked off by the question: “Please explain OOP to me in the language of a five-year old”, I found myself musing on the reasons computing seems to stay stuck in the Middle Ages.

It is not for lack of great minds or vision. In fact, I have the impression that the problem is that nobody ever seems to read anything. Why is that? Is there some kind of unspoken consensus that, since we are in the area of computing science, which is soooo new and changing so rapidly, anything older than 1 year should already be considered obsolete?

Let’s catalogue a few of the main misconceptions, shall we?

It is all about automation

Basically, this misconception is foremost in the minds of most people wrestling with IT in the context of business or society. We see computers as a kind of machine. A machine is seen as something that helps us, humans, do things. What things? Well, things we already did, such as computing (the mathematical kind), searching, manufacturing, accounting, writing.

If this is your underlying (and, usually, totally subconscious) assumption, then it is almost impossible to see the hidden treasures (and, yes, dangers, but we will come to that later). You live in a world that does not need to change, really. Or rather I should say: you do not need to change. Everything remains as it was; it is just that computers are doing some of it. You do the same things. It is the old Western metaphor of the inanimate world just being there to serve us, the superior and in some ways detached human beings. Computers, the internet, it’s all just machinery. It is like a steam engine or clockwork. More complex perhaps, but intrinsically the same.

Older than 1 year is irrelevant

We live in an era of change. At least that is the slogan. Constant change, and we are wrestling to keep up. As individuals, as enterprises. We are constantly introducing “new” ways of coping with those changes; we are officially adhering to the religion of change.

And since these changes are so ubiquitous, we can not rely on the past any more. Past knowledge or experience no longer applies, so anything thought of in the past has become irrelevant.

Computing is an independent discipline

[Image: The Blind Men and the Elephant, illustrated proverb]

Well, for that matter, I have a deeply disturbing feeling that any discipline is seen as an independent one.

Did any of you read Isaac Asimov’s book “Foundation” (part 1 of the famous Foundation Trilogy)? Maybe not. The book started to take form in 1942, but it talked about a problem Asimov, as a biochemist, was acutely confronted with: the splintering of scientific disciplines, akin to the blind men and the elephant. Each discipline functions in a silo, and the scientific community endlessly repeats what other scientists have said earlier.

Ethics don’t really come into play

When I started studying physics and mathematics at the University of Utrecht, The Netherlands, I was shocked to find that I was the only student in my year taking a parallel course in “Philosophy of Science”. I could not understand how my fellow students thought they could be effective scientists (I must admit, at the time my main ambition was to be a famous scientist) if they were not taking into account the broader view, in fact as broad as is feasible for a mere individual.

It’s about data

This is one of the misconceptions that I worry about most. It creates an endless stream of misery and problems (it’s too much, there are privacy concerns, and then there is all the stuff about semantics and how to structure it, to name a few). Why is it about data? Or, still awful but slightly less so, about information? There is already too much information! (Maybe you like my article on The Inversion of Big Data, or, about the undervalued distinction between data and information: Business Intelligence, an alternate definition.)

The solution

The solution has been around for a long, long time. Almost unnoticed, although not really, because a malformed and drilled-down interpretation of it is what you currently see all around: personal computers are a direct offspring of one of the most misunderstood projects of the past century, done by the Learning Research Group at Xerox PARC.

A central concept realised by that group was something called a live domain. In fact, in that environment, which bootstrapped for the first time in October 1972, it was not just the domain part that was live; everything was live, “turtles all the way down”: the operating system, the application development environment, even the compiler! Maybe the world is not yet mature enough to embrace that concept wholly, but a part of it is, and it is about time too.

That part is where the business logic or domain logic of the architecture lives.

Every architecture is wrestling with the problem of where to put the business functionality. Even how to discover that functionality is a problem, resulting in an endless stream of books purporting to offer help.

The centre of any logical architecture should be a simulation model of the organisation. What do I mean by a simulation model?

  1. It is executable and able to run independently of other components (such as a database or a front-end)
    1. “independently” is implemented by “connecting” those components through event publishing on state changes (a minimal sketch follows below this list)
  2. It “reflects” the real organisation (but there is something magical in the reflection: The Mirrored World) (ed. 2022: related to the recent term “Digital Twin”)
  3. It is time-aware: changes of state are events on one or more timelines (time warp should be possible!). Every state is time-bound!
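
To make these three properties a bit more tangible, here is a minimal sketch in Python. It illustrates only the idea, not any particular product, and every name in it (LiveDomain, StateChange, state_at, the employee example) is hypothetical: changes of state are published as events to whichever components subscribe, and because every event carries its moment on the timeline, the model can answer what an entity’s state was at any earlier point in time.

```python
# Minimal sketch of a "live domain": every change of state is a time-stamped
# event on a timeline, published to whatever components (database, front-end)
# have subscribed, and the state of any entity can be reconstructed as it was
# at any earlier moment ("time warp").
# All names here (LiveDomain, StateChange, "employee:42", ...) are
# illustrative only, not taken from any existing framework.
from dataclasses import dataclass
from datetime import datetime
from typing import Any, Callable, Dict, List, Optional


@dataclass
class StateChange:
    """One event on the timeline: which entity and attribute changed, to what, and when."""
    entity: str
    attribute: str
    value: Any
    at: datetime


class LiveDomain:
    """The simulation model: it owns the timeline and publishes every change."""

    def __init__(self) -> None:
        self._timeline: List[StateChange] = []
        self._subscribers: List[Callable[[StateChange], None]] = []

    def subscribe(self, handler: Callable[[StateChange], None]) -> None:
        # A database writer or a front-end view registers itself here;
        # the model itself stays independent of both.
        self._subscribers.append(handler)

    def change(self, entity: str, attribute: str, value: Any,
               at: Optional[datetime] = None) -> None:
        # Record the state change as an event on the timeline and publish it.
        event = StateChange(entity, attribute, value, at or datetime.now())
        self._timeline.append(event)
        for handler in self._subscribers:
            handler(event)

    def state_at(self, entity: str, moment: datetime) -> Dict[str, Any]:
        """Time warp: replay the timeline up to `moment` to rebuild an entity's state."""
        state: Dict[str, Any] = {}
        for event in sorted(self._timeline, key=lambda e: e.at):
            if event.entity == entity and event.at <= moment:
                state[event.attribute] = event.value
        return state


if __name__ == "__main__":
    domain = LiveDomain()
    # A stand-in for a connected component (e.g. a front-end or database writer).
    domain.subscribe(lambda e: print(f"published: {e}"))

    domain.change("employee:42", "department", "Sales", at=datetime(2021, 1, 1))
    domain.change("employee:42", "department", "Support", at=datetime(2022, 6, 1))

    # Every state is time-bound: ask what the organisation looked like back then.
    print(domain.state_at("employee:42", datetime(2021, 12, 31)))
    # -> {'department': 'Sales'}
```

Note that the database and the front-end appear only as subscribers: the model runs, and remains meaningful, without them.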

It is quite a different thing to have a simulation model instead of a reified data model, which is what almost every organisation currently has. Not even an information model, mind you, but a data model. It is left to the viewer, the user, to make something of the mess through a handicapped tool called a front-end, hopefully approaching something that can be called information.

I will leave it up to the reader to come up with the infinite possibilities that will be exposed if you do this – I will come back to you on this in a later article. In the meantime: why did I show the picture above this article? Anyone?
