Global complexity, local simplicity

Models created using the modelling techniques we talk about on this site are extremely scalable; in fact, we call them infinitely scalable. This corresponds to the Domain Model pattern as classified in Martin Fowler's Patterns of Enterprise Application Architecture (Addison-Wesley Signature Series). It is perhaps something readers have overlooked, but the curve Fowler sketches for the Domain Model (effort to enhance as a function of domain-logic complexity) is assumed to be linear.

However, there seems to be a price to be paid for this scalability:

  1. models will usually grow to be almost impossible to comprehend
  2. models will contain more and more redundancy
  3. the size of the model will have a negative impact on performance
  4. … and will all of this not eventually undermine scalability itself, since adding new functionality will become increasingly difficult?

In this article we will try to explain why we think that, in the context of models with the characteristics of a Domain Model, these concerns do not hold, or are at least much less relevant.

Fowler's three main patterns

There is an interesting observation to make about unit tests in the XP (eXtreme Programming) context. Unit tests are usually extremely local: as a rule, there is a set of unit tests for each public method of each class.
For example, if we have a class Person with a public method move(newLocation), there could be a test like testMove (a small sketch follows below). This example warrants only one test; slightly more complex messages could need one or two more. (Note that if you need more than three test methods for one message, you probably have a candidate for refactoring, since the method is too complex anyway.)
Now, as we evolve our system, we do not write tests for larger or overall processes. The combined effect of the local unit tests alone guarantees that the system as a whole, no matter its size, behaves correctly.
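To make the example concrete, here is a minimal sketch of such a local test, assuming JUnit 4 and a hypothetical Person class that simply stores its location; the text only specifies the move(newLocation) message and the testMove test, so everything else is an assumption:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PersonTest {

    // Hypothetical Person class for illustration; the location field and
    // accessor are assumptions, only move(newLocation) comes from the text.
    static class Person {
        private String location;

        Person(String location) {
            this.location = location;
        }

        void move(String newLocation) {
            this.location = newLocation;
        }

        String location() {
            return location;
        }
    }

    // One local test per public message: testMove exercises move(newLocation).
    @Test
    public void testMove() {
        Person person = new Person("home");
        person.move("office");
        assertEquals("office", person.location());
    }
}
```

Each such test knows nothing about the rest of the system; it is the accumulation of these small, local guarantees that covers the whole.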

This is an effect of synergy. However, to make it work as intended, we need to learn to work with complexity instead of against it. Technically oriented people in particular seem to have an innate fear of complexity; they often talk about reducing complexity. And indeed, in many situations this has proved to be a valuable strategy, because it lets them avoid paths that lead to disaster: having to deal with unmanageable code. In practice, however, the net effect is that they gradually introduce so much code to “reduce” the complexity that they end up accomplishing the exact opposite.

Real-size systems that are deployed can be seen to consist of a considerable amount of code that “does nothing”: it is only there to make the system manageable, to “reduce” complexity. It is there because without it the builders would be unable to find bugs, repair them, or add new functionality. It is code that contributes nothing to the end-user functionality. What if we were able to build systems that consist of relatively independent, small, understandable, testable components? Components that do only what they do, without any overhead? What if we could add these components to our existing, running systems just by “throwing them in”, like a fish into a pond? And what if our systems continuously tried to regain the optimum balance between those components?

Does this remind you of something? This is, of course, exactly how the biological world operates.
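Purely as an illustration of what “throwing components in” could look like (none of this is prescribed above; all names are hypothetical), here is a minimal sketch in Java: each component is small and independently testable, and the running system simply holds whatever components are currently present:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical component contract: a component does only what it does.
interface Component {
    void perform();
}

// Hypothetical running system that accepts new components at any time.
class RunningSystem {
    // Thread-safe list so components can be added while the system is in use.
    private final List<Component> components = new CopyOnWriteArrayList<>();

    // "Throwing a component in": no wiring, no overhead, just add it.
    void add(Component component) {
        components.add(component);
    }

    // The overall behaviour is nothing more than the combined effect of the parts.
    void run() {
        components.forEach(Component::perform);
    }
}
```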
