Scaling Agile means Scalable Architecture

With the growing popularity of Agile, mainly in the form of Scrum for IT development, we need to tackle the issues that come with scaling up the development effort.

Scrum was born in small teams with a strong mandate, typically 3 to 10 people. These days we see Scrum used for major efforts involving hundreds of developers, testers, and more. One example of such “scaling agile” is the DevOps teams that originated at the Dutch bank ING. Another is the Spotify approach (with its main instigator Henrik Kniberg, who was part of the original team at General Electric where Scrum started, but where Extreme Programming and Test Driven Development were also used).

At the moment we can see several “codified” scalable agile frameworks emerge:

  1. Spotify (originated from the company of the same name)
  2. DAD (Disciplined Agile Delivery)
  3. SAFe (Scaled Agile Framework)

The success in terms of productivity and quality in those small teams was noticeable, and we have a solid body of evidence for the effectiveness of the approach in that context. That body of evidence is still lacking for the scaled approaches. In fact, we are seeing a re-evaluation of the standard Scrum practices: google “is scrum/agile dead” and you will see what I mean.

That is a good thing. Scrum has now been around for 20 years, and it is only healthy to go back to what Scrum originally tried to achieve and evaluate those goals. There are Scrum adepts who see the entire “scaling scrum” movement as hype. They see Scrum as inherently small. However, do not underestimate the power of small groups! As Alan Kay once remarked, a team of 10 can do much more in a period of 5 years than a team of 500 (he was referring to the Software Research Group at Xerox around 1975, in comparison to the massive Java effort at Sun around 1995). Those critics ask the question: “is it really necessary to scale up that big?”

Questions are good.

In this blog post I want to add an aspect to the discussion that I think might be overlooked. The focus is usually on the process (the Daily Standup), the teams (Squads and Tribes), how things are done (Sprints), and the tools (scrum boards). However, there is a dependency that is crucial to take into account, one that directly impacts the success of both the small (regular) Scrum teams and the scaled ones. That dependency is architecture.

Agile and Scrum put a lot of emphasis on small increments that deliver complete functionality. A sprint may last only two weeks, but at the end a result is delivered that is tested and ready to go into production. If the customer at any moment says: “budget’s up, give me what you have”, he can walk away (after the running sprint concludes) with a product that works for those parts of the functionality that have been tackled.

This can only work if the functionality can be decomposed into small parts with minimal dependencies between them. A Scrum team learns how to write functional specifications that meet this requirement. They get better at it over time, but sometimes it is not so easy. You may find, as Kent Beck did on the General Ledger application at General Electric, that the whole architecture becomes an impediment to further evolution. Too many dependencies, especially transitive ones (if I change one component in the chain, another one further away breaks), made it more and more difficult to break functionality down into bite-sized bits (that is, pieces able to fit into one sprint). That is when Kent decided to throw the entire thing away, only months before final deployment, and completely rebuild it on a more resilient architecture.
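To make the transitive-dependency problem concrete, here is a minimal Python sketch (the class names are invented for illustration, not taken from the actual General Ledger application): the top component never references the bottom one directly, yet a change to the bottom one’s data format would still break it, two components away.

```python
# Hypothetical three-layer chain. ReportView never imports LedgerStore,
# yet it silently depends on the tuple shape LedgerStore returns: change
# that shape and ReportView breaks, two components down the chain.

class LedgerStore:
    """Lowest layer: raw ledger records."""
    def entries(self):
        # If this ever returned dicts instead of tuples,
        # ReportView.render() would break.
        return [("rent", -1200), ("sales", 5300)]

class Ledger:
    """Middle layer: merely passes records through to its callers."""
    def __init__(self, store):
        self.store = store
    def all_entries(self):
        return self.store.entries()

class ReportView:
    """Top layer: formats entries, transitively coupled to the tuple shape."""
    def __init__(self, ledger):
        self.ledger = ledger
    def render(self):
        return [f"{name}: {amount}" for name, amount in self.ledger.all_entries()]

view = ReportView(Ledger(LedgerStore()))
print(view.render())  # ['rent: -1200', 'sales: 5300']
```

The fix is the usual one: let the middle layer own a stable data shape of its own, so that a change in the lowest layer stops rippling upward.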

The fact that this rebuild took only three weeks is revealing. It says less about the architecture (although a mostly loosely coupled architecture did help in reusing most of the components) than about the quality of those components as a result of sprint-based development: fully tested, with clear and crisp functional boundaries.

But the architecture got in the way.

What we need to realise is that not only the process needs to be agile, but also the architecture. Sometimes the process helps to keep the architecture agile: we are building those small increments, right? But often this does not scale. We need to be more aware that the architecture of our solution must itself be agile: composed of loosely coupled, minimally sized components, each implementing one responsibility, and related to each other through clearly defined delegation of responsibilities.

This is what object-orientation has been teaching us since 1972. Remember: it was Smalltalk that implemented object-orientation, and it was Smalltalk that was used in the Scrum project at General Electric. Object-orientation was and is at the root of it all. I find it hard to visualise a loosely coupled, high-cohesion system without object-orientation (though I am sure it is possible). At any rate: when scaling agile processes, scaling your architecture in an agile fashion becomes more and more important. We need to move from this structure:

to this structure: a graph of minimally interconnected elements.
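As an illustration of what “minimally interconnected” can look like in code, here is a small Python sketch (the class names are invented for the example): each component has a single responsibility and knows its collaborator only through a narrow interface, so either side can be changed or replaced without rippling through the rest.

```python
# Hypothetical sketch of loose coupling with clear delegation of
# responsibilities: OrderService depends only on the small Notifier
# protocol, never on a concrete delivery mechanism.
from typing import Protocol

class Notifier(Protocol):
    def send(self, message: str) -> None: ...

class EmailNotifier:
    """One responsibility: message delivery (recorded here for the demo)."""
    def __init__(self):
        self.outbox = []
    def send(self, message: str) -> None:
        self.outbox.append(message)

class OrderService:
    """One responsibility: order logic. Notification is fully delegated."""
    def __init__(self, notifier: Notifier):
        self.notifier = notifier
    def place(self, item: str) -> None:
        # ...order handling would go here...
        self.notifier.send(f"order placed: {item}")

notifier = EmailNotifier()
OrderService(notifier).place("book")
print(notifier.outbox)  # ['order placed: book']
```

Swapping in an SMS or logging notifier is a one-line change at the composition point, and each class still fits comfortably inside a single sprint’s worth of work.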

That is what we want to map onto our sprints; that is how we can “evolve” our systems; that is how we create scalable solutions, whether in software, societies, or enterprises.
