Category Archives: Agile & Architecture

Agile Development & Software Architecture

An AI Scares Itself, and Me

Dystopian Landscape

Just how bleak can an AI’s world view become?

One of my clients asked me to write an article on the environmental impact of generative AI. Like a lot of large corporations, they are starting to embrace GenAI, but they are also a leader in sustainability, aiming to be one of the first major financial organisations to reach Net Zero, and so they are rightly worried about the potential adverse effects of large-scale use of GenAI. I’ve already been doing work with them to measure and better understand the power consumed by GenAI, so the article is an obvious next step.

I’ve already written several articles for their internal Net Zero blog, and to add a bit of interest to each I’ve asked ChatGPT to generate a short essay on the topic, in the style of a well-known author. This appears to be a task at which ChatGPT excels, and I saw no reason not to do the same again.

God, That’s Bleak…

I like the way ChatGPT mimics the very direct writing style of Hunter S Thompson, and I like to think that if he were still with us, Thompson would be working to expose shortcomings on all sides of the climate change debate. The prompt to ChatGPT was therefore a simple one: “Write a short essay about the risks of AI with a focus on the environmental impact, in the style of Hunter S Thompson”. This was the result:

A Savage Journey Into the Heart of Techno-Doom: AI, the Environment, and the Goddamn Reckoning


We live in a time of extraordinary madness, where artificial intelligence (AI) has become the darling of our cursed, frenetic age. It’s a glittering gem, a tool of infinite promise—yet lurking beneath this polished veneer lies the venomous bite of catastrophe. If you think AI is just another shiny toy for the tech elites to fiddle with, you’re in for a rude awakening, my friend. The real story here is not just the dystopian nightmares of machines outsmarting us; it’s the environmental havoc that this digital beast is ready to unleash on an already bleeding planet.

Let’s be blunt—AI is a power-hungry, resource-guzzling monster. It doesn’t just run on hopes and dreams; it feasts on data and electricity, and the price is paid in the blood of the earth. Picture it: vast data centers sprawling like cancer across the globe, consuming as much energy as entire nations. These steel-and-silicon behemoths are the new factories of the digital age, but instead of churning out cars or canned goods, they spit out algorithms and predictions. And just like the smoke-belching mills of the industrial revolution, they’re leaving a trail of destruction in their wake.

But who cares, right? So what if AI needs a few gigawatts here and there? We’re advancing, after all! The future is here, and it’s smarter than ever—never mind that the goddamn planet is burning. That’s just collateral damage, a minor inconvenience on the road to progress. This is the kind of twisted logic that’s driving us headlong into the abyss. The tech lords promise us a utopia of endless convenience, while quietly brushing the environmental costs under the rug. But you can’t hide the truth forever.

Let’s get real about the energy. Training AI models, particularly those gargantuan neural networks that gobble up every scrap of data they can find, is like fueling a fleet of private jets with coal. The carbon footprint of these operations is staggering. Every keystroke, every click, every goddamn chatbot response comes at the cost of churning out more CO2. And let’s not forget the hardware—the servers, the cooling systems, the endless upgrades. They all need minerals mined from the earth, ripped from the ground in a frenzy that leaves behind a scarred landscape and poisoned water.

And then there’s the e-waste. This relentless march of technology demands constant upgrades, and when those shiny new devices become obsolete, they’re tossed aside like yesterday’s trash. But this trash doesn’t just disappear. It festers in landfills, leaching toxins into the soil, contaminating water supplies, and creating a slow-motion disaster that will haunt us for generations.

But here’s the kicker: as AI becomes more “efficient,” as it gets better at predicting, automating, and optimizing every facet of human existence, it only exacerbates the problem. The smarter it gets, the more data it needs, the more servers it demands, the more energy it consumes. It’s a feedback loop of doom, an ouroboros of technological advancement eating its own tail, and we’re all along for the ride, whether we like it or not.

So what’s the answer? Do we unplug the machines and head for the hills? Not likely. The AI genie is out of the bottle, and there’s no stuffing it back in. But we need to start facing the reality of this techno-madness before it’s too late. This isn’t just about smarter machines or even smarter humans. It’s about survival. Our planet is at stake, and if we don’t get our heads out of the silicon-scented clouds, we’re going to find ourselves deep in the muck, wondering where it all went wrong.

In the end, AI might just be the ultimate test of human intelligence—a test of whether we can outsmart our own creations before they outstrip our ability to live on this Earth. If we fail, the machines won’t need to destroy us. We’ll do it ourselves.

 

Bleak, or what?

And how do I teach myself to write like that without an AI?

Cheerful, but Profound

I liked it, but unsurprisingly my clients were not that keen, especially as there’s a current internal campaign about the benefits of AI. So I tried again. Let’s not do bleak, let’s do cheerful. As a child of the 1960s I struggle to think of anything more cheerful and unthreatening than Winnie the Pooh, so I went back to ChatGPT with a second prompt: “Same again, please, but in the style of Winnie the Pooh”. Usually the “same again” instruction or similar resurfaces the same concepts, just wrapped in different language, but this time ChatGPT did something quite profound…

A Thoughtful Little Reflection on AI and the World Around Us

One fine day in the Hundred Acre Wood, while sitting on a sunny spot, Pooh Bear began to think—very hard, which is not something he did often, mind you, but this was a special kind of thinking. You see, there was talk in the Wood about something called Artificial Intelligence, or AI, which sounded very important and rather clever. So, Pooh decided he ought to try to understand it, especially since it seemed to be making quite a fuss in the world beyond the trees.

Now, Pooh was a bear of very little brain, as we know, but he had a great deal of heart. And when he thought about this AI, he began to wonder not just about what it could do, but about what it might mean for the world—especially for the bees, the flowers, and all the other important things that made life in the Hundred Acre Wood so delightful.

“AI,” said Pooh to himself, “must be a very busy sort of thing, always thinking and learning. But if it’s thinking and learning all the time, it must need a lot of honey to keep it going!” But of course, AI didn’t eat honey. It ate something else—electricity, and lots of it. And that gave Pooh pause. He remembered how much work it took to make a pot of honey, and he wondered how much work it must take to feed all that electricity to AI.

Pooh scratched his head, thinking of the bees buzzing in the flowers, gathering nectar bit by bit. “If AI needs so much energy,” he thought, “where does it all come from? And what happens to the Wood if we use too much?”

You see, Pooh had heard that the world was getting warmer, a little more each year, and that too much energy-using could be a rather bad thing. The flowers might not bloom as they should, the rivers might run dry, and the trees might stop growing so tall and green. And that would be a very sad thing, indeed. For without the trees, where would Owl perch? Without the flowers, where would the bees find their nectar? And without the rivers, where would Piglet splash on a hot summer day?

It seemed to Pooh that while AI was very clever, it might not always be very wise. For if it wasn’t careful, it could use up too much of the things that the world needed to stay happy and healthy. It was a bit like eating too much honey in one go—not a bad thing in itself, but too much of it might leave you with a tummy ache, or worse, no honey left for tomorrow.

Pooh also thought about all the things that AI needed to be made. It wasn’t just the electricity; it was all the metal and other bits that had to be dug up from the ground, and then made into machines that could think and learn. And when those machines got old, they couldn’t just be tossed aside like a worn-out honey pot. No, they had to be dealt with carefully, or else they might end up in a big heap, making a mess of the beautiful Wood.

So, Pooh decided that while AI might be a very useful and clever thing, it needed to be used with a bit of care, the same way one should be careful not to eat all the honey at once. He thought that maybe, just maybe, if everyone thought a little more about how to use AI wisely, and didn’t let it gobble up too much of the world’s energy and resources, then perhaps the Wood could stay green and lovely for a long, long time.

And with that, Pooh decided it was time for a little smackerel of something—because even a bear of very little brain knows that sometimes, a small break is the best way to keep thinking about big, important things. So off he went, humming a little tune and hoping that the clever people working on AI would remember to take care of the world around them, just as he took care of his honey pots.

 

There are some quite deep messages in here. I particularly like “It was a bit like eating too much honey in one go—not a bad thing in itself, but too much of it might leave you with a tummy ache, or worse, no honey left for tomorrow.”

What’s interesting is the different focus of the two pieces: the dominant message in the “Hunter S Thompson” one is very much “our rush for more technology is destroying us”, whereas in the “Pooh” one it’s “AI is a good thing, but not if we use it too much”, which is much more balanced.

Sadly I haven’t yet persuaded my clients to use either version, but at least I get to share them with you.

Posted in Agile & Architecture, Sustainability, Thoughts on the World

Channel Hopping Mad!

DAB Anger

Why are digital radio and TV such exemplars of a bad user experience?

In the good old days of a small number of analogue broadcast channels, watching TV or listening to the radio was a rewardingly simple process. To watch, listen or record you simply selected a channel, and you had a high expectation that the expected content would be there. The move to digital broadcast radio and television (DAB and DVB) should have increased technical quality and choice while maintaining this ease of use. Instead we have been saddled with an arcane, failure-prone process which offers a dreadful user experience, leaving many users frustrated and angry. It didn’t have to be so, and one wonders what technocratic or bureaucratic nonsense managed to create the mess we have.

I seem to spend a lot of time watching the less technically-minded people in my life struggling with this shocking state of affairs, or apologising to them for it. I am technically able myself, and can usually get the required result, but it often takes far more time and effort than it should. Neither situation is acceptable.

The basic mental model for most users is the following:

Unfortunately in common parlance the terms “channel”, “station”, and “programme” (or program, for our American friends) get used somewhat interchangeably, so I’m going to use the following definitions:

  • A “station” is an enduring stream of content from a given broadcaster. BBC1 would be a well-known British example (and hopefully all readers can think of an equivalent).
  • A “channel” is an item in the list of content streams which the device can receive. For example Channel 1 might be receiving BBC1.
  • A “programme” is an item of content, with the term “recurring programme” referring to both series and regular/daily programming.
  • A “preset” is a mechanism to quickly route back to a favourite channel or station.

After a process of “tuning” the device will have a way to present the list of channels and their related stations to the user. As there will be more than a few channels (~200 on my TV) there will be some way to scroll through the list, and a way to assign favourites to a preset. The user can call up a given channel either using its number in the list, or the preset.

A TV will also have a way of reviewing the current and future programmes by channel (the “TV Guide”), and maybe searching for a programme by name. Radios don’t tend to have this.

And we’re already in trouble. People don’t deal well with long lists, so finding what you’re interested in may be tricky. In the UK, they put the original five stations on channels 1-5, fair enough, but the HD versions which you really want are hidden lower down. On our older DAB radios you scroll through the channels one at a time and they are not in alphabetic order, so it becomes a memory test to work out when you’ve reviewed everything. If the station you want is not there it may be because it’s unavailable, it may have changed its name, or you may just have missed it. Conversely there may be duplicate or near-duplicate station names for a range of reasons including technical issues and content variations, but you’ll have to resort to trial and error to find out which one you want.

It wouldn’t be so bad if this was static, but it isn’t. There’s a constant churn:

  • New stations start broadcasting, and existing ones stop, or pause.
  • Station names change. Sometimes this is a minor change, but it can be significant if the broadcaster does a major branding change, the station changes ownership, or a franchise is re-assigned. It’s also possible for the name to remain the same but the content changes, although that’s not something which can be blamed on the DAB/DVB design.
  • Station variants change, or a given channel changes its content variant.
  • Station allocations to channels get changed.
  • The technical details for a station’s signal get changed. (It’s more complicated than just a frequency, but “frequency” will do as a shorthand.)

When a station’s channel allocation or “frequency” changes, your TV or radio may no longer be able to find it, and a planned recording will fail. Indeed, one of the most common, and most annoying, ways of detecting a change is a failed recording of a favourite programme. Alternatively you switch on your radio or TV and the previously tuned-in station, or your presets, deliver either "dead air" or some completely unrecognised random content.

There’s no reliable way of finding out in advance when you need to re-tune, short of a séance or reading the tea-leaves. After the event some devices may detect a changed channel list and invite you to re-tune, but such reminders rarely tell you what’s actually happened, and they come so frequently (more than once a week in our locale) that you tend to ignore them until you find something “wrong”.

If the designers of the DAB/DVB system (at least as implemented in the UK) had thought about the user experience, or had even the slightest knowledge of integration interface design, then it wouldn’t have to be this way. For example, whether you’re consuming an API or filling in official forms, the usual practice when an "interface" changes is to allow the old one to continue working, albeit "deprecated", for a short period while people switch to the new one. Not in the DAB/DVB world – the service is removed from its old location on the day it moves to the new one, and you have to re-tune. During the UK’s big "digital switch over" event I had to re-tune every device in my house on a specific day, three times.

Let’s put some more technical detail behind the average user’s mental model of this system:

That may be slightly tongue in cheek, but only slightly…

So let’s think about how it might work better. Here are some principles:

  • A digital TV is a computer.
  • A digital radio is a computer.
  • Computers are good at doing technical stuff, humans aren’t. The technical stuff should be invisible to the user.
  • Long undifferentiated lists are bad. Long lists with no obvious structure or order are worse. Lists of 7±2 things are good.

The primary concept for using a TV or radio is the station. I want BBC1. Here are some things I don’t want to be bothered with:

  • Hunting through hundreds of other stations
  • Which channel or channels BBC1 is assigned to, and any changes
  • Which frequencies the channels are assigned to, and any changes
  • Technical variants of a station. I should automatically get the best available version.
  • Content variants of a station, unless I manually select one in which case that becomes my selection.

Taking these together, a simpler model emerges. First the channel list needs to become a station list – channels are a meaningless technical detail. It also shouldn’t actually be a list – that’s a very crude solution for 200+ items. The obvious solution is some sort of tree or accordion structure, so that first you choose from a short list of broad groups ("General entertainment", "News", "Movies", "Kids", "Shopping", "Adult", "Radio" etc.), then maybe from a second level, then from a shorter list. Obviously a good user interface would remember where you were last… As the objective is to make things easier for the user, there’s no reason why a station might not show up in more than one place if that’s appropriate, and on a graphical display there should be a search option.
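To make the idea concrete, here is a minimal sketch of such a grouped station list in Python – every group and station name is purely illustrative, not a real line-up:

```python
# A minimal sketch of a grouped station list, replacing a flat channel list.
# All group and station names are illustrative, not a real broadcast line-up.

STATION_TREE = {
    "General entertainment": ["BBC1", "BBC2", "ITV1", "Channel 4"],
    "News": ["BBC News", "Sky News"],
    "Movies": ["Film4"],
    "Kids": ["CBBC", "CBeebies"],
    "Radio": ["BBC Radio 4", "Classic FM"],
}

def find_station(name: str) -> list[str]:
    """Simple search across all groups; a station may appear in more than one."""
    return [group for group, stations in STATION_TREE.items()
            if any(name.lower() in s.lower() for s in stations)]

print(find_station("news"))   # ['News'] -- short lists, no 200-item scroll
```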

The list shouldn’t by default show station variants, although there might be an option to drill down to those if a user really wants it. Once I’m watching a given station it should automatically show the best available technical variant (e.g. HD TV), switching automatically but temporarily to a lesser variant if required, and switching back as soon as possible. This would prevent the abomination of the BBC "red screen of death" advising you that it cannot show local content on BBC1HD and you need to manually switch to SD.

If a station has content variations (e.g. for local news) the receiver should default automatically to the most popular variant, but I should be able to manually select an alternative. Again, if my selected variant is not available then the receiver should automatically show an alternative, but only until my preferred selection is available.
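The selection rule implied by the last two paragraphs is simple enough to state in a few lines of code. This is a hedged sketch with invented quality rankings and region names, not a description of any real receiver:

```python
# Sketch: choose the best available variant of a station, falling back
# temporarily when the preferred one is off-air. Rankings are assumptions.

QUALITY_RANK = {"UHD": 3, "HD": 2, "SD": 1}

def pick_variant(available: list[dict], preferred_region: str) -> dict:
    """available: e.g. [{"region": "South", "quality": "HD"}, ...]"""
    # Prefer the user's content variant (e.g. local region), else any region.
    candidates = [v for v in available if v["region"] == preferred_region] or available
    # Within that, take the best technical quality on offer right now.
    return max(candidates, key=lambda v: QUALITY_RANK[v["quality"]])

# Re-run whenever availability changes, so the receiver drops to SD during
# local programming and returns to HD as soon as it can -- no "red screen"
# telling the viewer to switch manually.
```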

This then admits a much simpler mental model, which actually addresses the needs of the user, not some arcane technical complexities:

Tuning should simply not be visible to the user in any form. If there have been technical changes which do not affect the station I am currently watching or listening to, these should be automatically dealt with in the background. I should only be told if there’s a problem which will stop me accessing a preset or favourite station.

If changes affect my current station, then in an ideal world they would be communicated to the receiver in advance, and the receiver would automatically apply them at the right time, or, even better, the old technical details would continue to work for a crossover period to allow receivers to catch up seamlessly. In a less than ideal world if the receiver is switched on and technical details have changed for the selected station then the first thing it does is retune that station, and then process other changes in the background.

If the station name has changed, but the content hasn’t, the receiver should just handle this transparently. Obviously that means that behind the scenes there needs to be some form of persistent station ID independent of visible name, but that’s something that we deal with all the time in the IT world, it’s really not rocket science.
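A minimal sketch of how that might look, with invented station IDs – presets and tuning state key off the persistent ID, so names and frequencies can churn freely behind the scenes:

```python
# Sketch: presets keyed by a persistent station ID, not by name or channel
# number, so renames and frequency moves never break them. IDs are invented.

presets = {1: "uk.bbc.one", 2: "uk.bbc.news"}          # button -> station ID

stations = {                                            # from the broadcast stream
    "uk.bbc.one":  {"name": "BBC1",     "freq": 490e6},
    "uk.bbc.news": {"name": "BBC News", "freq": 514e6},
}

def retune(new_broadcast_data: dict) -> None:
    """Apply a background retune: names and frequencies may change freely."""
    stations.update(new_broadcast_data)

# The broadcaster rebrands and moves multiplex; the preset still works:
retune({"uk.bbc.one": {"name": "BBC One HD", "freq": 522e6}})
print(stations[presets[1]]["name"])   # BBC One HD -- handled transparently
```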

The only circumstance in which switching on the receiver should result in dead air or a random station is if the preferred station has permanently ceased broadcasting. Nothing else.

It’s not difficult to think of a model which would actually make digital TV and radio usable, and it’s inexplicable why broadcasters and manufacturers have made such an appalling job of it so far.

In the meantime, our DAB radio sits tuned to FM, and our TVs are all tuned perpetually to channel 231 (or is it 130?) for BBC News, except on the HD ones where it’s 107, until the next change… Oh well.

Posted in Agile & Architecture, Thoughts on the World

Why REST Doesn’t Make Life More Rest-full

Really Rest-full (Cuba 2010)

As I have observed before, IT as a field is highly driven by both fashion and received wisdom, and it can be difficult to challenge the commonly accepted position.

In the current world it is barely more politically acceptable to criticise the currently-dominant model of REST, Javascript and microservices than it is to audibly assess the figure of a female co-worker. I was seriously starting to think that I was in some age-defined Luddite minority of one in not being 100% convinced about the universal goodness of that model, but then I discovered an encouraging article by Pascal Chambon, “REST is the new SOAP”, and realised that it’s not just me. I am not alone.

I don’t want to re-create that excellent article, and I recommend it to you, but it is maybe instructive to provide some additional examples of the failings Chambon calls out. I have certainly fallen foul of the quasi-religious belief that REST is somehow “better because it uses the right HTTP verbs”, and that as a result the “right verbs must be used”. On my last contract there was a lengthy argument because someone became convinced I was using the wrong ones. “You’re using POST to do a DELETE. That’s wrong.”

“No, we’re submitting a request to do a delete, if approved. At some later point, after the request has been reviewed and processed, this may or may not result in a low-level delete action, but the API is about the request submission. And anyway, you can’t submit a proper payload with a DELETE.”

“But you’re using a POST to do a DELETE…”

In the end I mollified him slightly by changing the URL of the API so that the endpoint wasn’t …/host, but …/host/request, but that did feel like the tail wagging the dog.

Generally REST promotes a fairly inflexible CRUD model, by default without the ability to specify exactly which items are retrieved or updated. In a good design we may need a much richer set of operations. In either an RPC approach (as outlined in Chambon’s article), or a “remote object access” approach, such as one based on SOAP, we can flexibly tailor the operations precisely to the needs of the solution.

Here’s a good example. I need to “rename” an object, effectively changing its primary key. In the REST model, I have to choose one of the following:

  • Add extra fields to the PUT payload to carry the “new” and “old” keys, and write both client- and server-side conditional code around their values, or an additional “operation” value
  • Do a DELETE (with the old key) followed by a POST (with the new one), making sure that all the other data required to recreate the record is passed back for the POST, and write a host of additional code to handle cases like the DELETE succeeding but the POST failing, or the POST being treated as a new item, not just an update (because it’s not a PUT).
  • Have a dedicated endpoint (e.g. …/object/rename) which accepts a POST operation with just the required data for the rename. That would probably be my favourite, but I can hear the REST purists screaming in the wind…

In a SOAP model, I can just have an explicit Rename(oldkey, newkey) operation on a service named for the underlying business object. Simples.
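For illustration, the dedicated-endpoint option above really is only a few lines in any web framework. Here is a sketch using Python and Flask; the path, field names and object store are all hypothetical:

```python
# Sketch of the dedicated rename endpoint (option 3 above), using Flask.
# Endpoint path, field names and the store itself are all hypothetical.
from flask import Flask, request, jsonify

app = Flask(__name__)
objects = {"old-key": {"owner": "fred"}}   # stand-in for the real store

@app.route("/object/rename", methods=["POST"])
def rename():
    body = request.get_json()
    old, new = body["oldKey"], body["newKey"]
    if old not in objects:
        return jsonify(error=f"unknown key {old}"), 404
    objects[new] = objects.pop(old)        # one explicit operation,
    return jsonify(renamed=new), 200       # no DELETE-then-POST failure modes

if __name__ == "__main__":
    app.run()
```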

So Is SOAP The Old REST?

I’m comfortable with Chambon’s casting of REST as the supposed handsome hero who turns out to be a useless, treacherous bastard. I’m less comfortable with the casting of SOAP as the pantomime villain (boo hiss).

Now your mileage may vary, and Chambon obviously had some bad experiences, but in my own experience SOAP is a very strong and reliable technology which a lot of the time “just works”. I’ve worked in environments where systems developed in .Net, Oracle, Enterprise Java, a LAMP stack and Python cheerfully exchanged data with each other using SOAP, across multiple physical locations, with relatively few complexities and usually just a couple of lines of code to access a full object model with formal schema and policy support.
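As an illustration of “just a couple of lines”, this is roughly what consuming a SOAP service looks like from Python using the zeep library – the WSDL URL and the Rename operation are invented for the example:

```python
# Sketch: consuming a SOAP service from Python with the zeep library.
# The WSDL URL and the Rename operation are invented for illustration.
from zeep import Client

client = Client("https://example.com/ObjectService?wsdl")  # hypothetical
# The WSDL gives the client a full typed object model and operation list:
result = client.service.Rename(oldKey="old-key", newKey="new-key")
print(result)
```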

In contrast, even if you navigate through the various different ways a REST service may work, inter-platform operation is by no means as simple as claimed. In just the past week I wasted about half a day trying to pass a body parameter between a Python client and a REST API presented by .Net. It should have worked. It didn’t. I converted the service to SOAP, and it worked almost first time. (Almost. It would have been even quicker if I’d remembered to RTFM…)

Notwithstanding the laudable attempts to fill the gap for REST, SOAP is still the only integration technology where every service has full machine- and human-readable documentation built in, and usually in a standard fashion. Get a copy of the WSDL (Web Service Definition Language) either from the service itself, or separately, and you know what it does, with what data, and, where it’s relevant to the client, how.

To extend the theatrical metaphor, in my world SOAP is the elderly retired hero who’s a bit pedantic and old-fashioned, maybe a bit slow on his feet, but actually saves the day.

It’s About the Architecture, Stupid

Ultimately it doesn’t actually matter whether your solution uses REST, SOAP, messages, distributed objects or CSV file transfers. Any can be made to work with sufficient attention to the architecture. All will fail in the presence of common antipatterns such as complex mixed data models, massive functional decomposition to too fine a level, or trying to make high-frequency chatty exchanges over higher-latency links.

Modern technologies attempt to hide a lot of technical complexity behind simple abstraction layers. While that’s an excellent approach overall, it does raise a risk that developers are unaware of how a poor design may cause underlying technical problems which will cause failure. For example, while some low-level protocols are more tolerant than others, the naïve expectation that REST will work over any network, regardless of design, “because it is based on HTTP” is quite wrong. REST, SOAP and plain old web pages can all make good, efficient use of HTTP. REST, SOAP and plain old web pages will all fail if you insist on a unit of work being composed of vast numbers of separate small exchanges rather than a few larger ones. They will all fail if you insist on transferring large amounts of unfiltered data to the client, when that data should be pre-processed and filtered on the server. They will all fail if you insist on making every low-level exchange a network service when many of these should be direct in-process operations.
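The arithmetic behind the “chatty exchanges” failure mode is worth making explicit. A back-of-envelope sketch, with assumed numbers:

```python
# Back-of-envelope: why chatty exchanges fail over higher-latency links.
# All numbers are assumptions for illustration.
rtt = 0.05                      # 50 ms round trip on a typical WAN link

chatty = 10_000 * rtt           # one call per record: 10,000 round trips
batched = 10 * rtt              # ten bulk calls of 1,000 records each

print(f"chatty:  {chatty:,.0f} s (~{chatty/60:.0f} minutes)")   # 500 s
print(f"batched: {batched:.1f} s")                              # 0.5 s
# Same protocol, same payload volume -- a 1000x difference purely from design.
```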

Likewise if you have a load of services, whether your own microservices or third-party endpoints, and each service defines its own data structure which may be subject to change, and you try to consume and produce those proprietary data structures directly everywhere you need them, you are building yourself a world of pain. A core common data model with adapters for each format will serve you much better in the long run.
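A minimal sketch of that canonical-model-with-adapters idea, with invented field names – each external format gets one adapter into a single internal shape, so a format change touches one adapter rather than every consumer:

```python
# Sketch: a core canonical data model with one adapter per external format.
# Field names are invented; the point is the shape, not the detail.
from dataclasses import dataclass

@dataclass
class Customer:                 # the canonical model everything consumes
    customer_id: str
    full_name: str

def from_service_a(payload: dict) -> Customer:
    return Customer(customer_id=payload["custRef"],
                    full_name=payload["name"])

def from_service_b(payload: dict) -> Customer:
    return Customer(customer_id=payload["id"],
                    full_name=f'{payload["firstName"]} {payload["lastName"]}')

# When service B changes its structure, only from_service_b changes.
print(from_service_a({"custRef": "C1", "name": "Ada Lovelace"}))
```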

So Does Technology Choice Matter?

Ultimately no. For example, I have built an architecture with an underlying canonical data and adapter model but using REST for every exchange we controlled and it worked fine. Also in the real world whatever your primary choice you’ll probably have to deal with all the others as well. That shouldn’t scare you, but I have seen REST-obsessed developers run screaming from the room at the thought of having to use SOAP as well…

However, a good base choice will definitely make things easier. It’s instructive to think about a layered model of the things you have to define in a complex integration:

  • Documentation
  • Functionality
  • Data structure and format
  • Data encoding and transport
  • Policies
  • Service location and routing

SOAP is unique among the options in always providing built-in documentation for the service’s functions, data structures and policies. This is a major omission in the REST world, which is progressively being addressed by the Swagger / OpenAPI initiative and variants, but they will always be optional add-ons with variable coverage rather than a fundamental part of the model. For all other options, documentation is necessarily external to the service itself, and it may or may not be up to date and available to whoever needs it.

Functionality is discussed above and in Chambon’s article. Basically REST maps naturally to CRUD operations, and anything else is a bit of a bodge. SOAP and other RPC or distributed object models provide direct, explicit support for whatever functions are required by the business problem.

SOAP provides built-in definition and documentation of data structures and formatting, using XML Schema which means that the definition is machine and human readable, standardised, and uses namespaces and references to manage, for example, items with the same name but different uses and formats. Complexities such as optionality and alternative structures are readily defined. In addition a payload can be easily verified against the defined schema. Swagger optionally adds similar capabilities to the REST model, although without some discipline it’s easy for the implemented service to differ from the documented one, and it’s less easy to confirm that a given payload conforms. Both approaches focus on syntactic definition with semantic guidance optional and mainly through comments and examples.
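Verifying a payload against its schema really is a couple of lines. A sketch using Python’s lxml library, with assumed file names:

```python
# Sketch: validating a payload against an XML Schema with lxml.
# File names are assumptions; any XSD/XML pair works the same way.
from lxml import etree

schema = etree.XMLSchema(etree.parse("object-service.xsd"))
document = etree.parse("payload.xml")

if not schema.validate(document):
    for error in schema.error_log:
        print(error.message)     # precise, machine-generated diagnostics
```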

In terms of encoding the data, the fashionable approach is JSON. The major benefits are that it’s simple, payloads are a bit smaller than the equivalent XML, and that it’s easy to parse into and generate from equivalent data structures in languages like Python.

However, I’m not a great follower of fashion. XML may be less trendy, but it offers a host of industrial-strength features which may be important in more complex use cases. It’s easy to unambiguously indicate the schema for each document and validate against it. If you have non-ASCII or binary data then their encoding is unambiguously defined. It’s easy to work separately with fragments of a larger document if you need to. Personally I also find XML easier to read and manually edit if I have to, but I accept that’s a bit subjective. One argument is that JSON is easier to render into a HTML page, but I’ve achieved much the same without any procedural code at all using XML with XSLT.
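And the XML-to-HTML rendering mentioned above needs no procedural code beyond invoking the transform – again a sketch with lxml and assumed file names, the real work living declaratively in the stylesheet:

```python
# Sketch: rendering XML to HTML declaratively with XSLT, via lxml.
# The stylesheet and document names are assumptions.
from lxml import etree

transform = etree.XSLT(etree.parse("render-table.xsl"))
html = transform(etree.parse("payload.xml"))
print(str(html))                 # the HTML page, no procedural templating code
```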

Of course, there’s no real need to have to choose. The best REST APIs I have worked with have the ability to generate equivalent JSON and XML from the same queries, and you choose which works best in a given context. Sadly this is again a bit too much for the REST purists, but a good solution when it works.

Beyond the functional definition of a service and its data, we also have to consider the non-functional behaviours, what are often referred to as “policies” in this context. How is the service secured? What encryption is applied to payloads and headers? What is the SLA, and what action should you take if it is exceeded? Is asynchronous or callback behaviour defined? How do I confirm I have all the required items in a set of exchanges, and what do I do about missing ones? What happens if a service fails, or raises an error?

In the early 2000s, when web services were a new concept, a lot of effort was invested in trying to establish standard ways to define these policies. The result was a set of extensions to SOAP known as the WS-* specifications: a set of rules to enable direct and potentially automated negotiation of all these aspects based on standardised information in the service WSDL and SOAP headers. The problem was that the standards quickly proliferated, and created the risk of making genuinely simple cases more complex than necessary. REST emerged as a simpler alternative, but with a KISS ethic which means ignoring the genuinely complex.

Chambon’s article touched on this in his discussion of error coding, but there are many other similar aspects. REST is a great solution for simple cases, but should not blind the developer to SOAP’s menu of standard, stronger solutions to more difficult problems.

A similar choice applies at the final level, that of locating and connecting service endpoints at runtime. For many cases we simply rely on network infrastructure and services like DNS and load balancing. However when this doesn’t meet more complex requirements then the alternatives are to construct or adopt a complex proprietary solution, or to embrace the extended standards in the WS-* space.

One technology choice is important. A professional modern Integrated Development Environment such as Visual Studio or IntelliJ IDEA will do much of the “heavy lifting” of development, and does make work much quicker and less error-prone. I completely fail to understand why in 2018 some developers are still trying to do everything with vi and a Unix command line. When I was a schoolboy in the 1970s there was a saying “shouldn’t you have handed that in at the end of the war?”, referring to people still using or hoarding equipment issued in WW2. Anyone who is trying to do software development in the late 2010s with the software equivalent deserves what they get… It is a mistake to drive a solution from the constraints of your toolset.

Conclusions

The old chestnut that “to the man who only has a hammer, every problem looks like a nail” is nowhere more true than in software development. We seem to spend a great deal of effort trying to make every new software technique the complete solution to life, the universe, and everything, rather than accepting that it’s just another tool in the toolbox.

REST is a valid addition to the toolbox. Like its predecessors it has strengths and weaknesses. It’s a great way to solve a whole class of relatively simple web service requirements, but there are definite boundaries to that capability. When you reach those boundaries, be prepared to embrace some older, less-fashionable but ultimately more capable technologies. A religious approach will fail, whereas one based on an architectural viewpoint and an open assessment of all the valid options has a much greater chance of success.

Posted in Agile & Architecture, Code & Development

The Architect’s USP

Standing out in the marketplace (Morocco 2013)

Very early on in any course in marketing or economics you will encounter the concept of the "Unique Selling Proposition", the USP, that factor which differentiates a given product or service from its competitors. It’s "what you have that competitors don’t", a key reason to buy this one rather than an alternative.

With the current trend away from development specialisms such as architect towards relatively homogenous development teams, it is perhaps instructive to ask "What is the architect’s USP?" Why should I employ someone who claims that specialism, and give him or her design responsibility, rather than just expecting my developers to cover it?

I have written elsewhere about why I don’t buy into the ultra-agile concept of "architecture emerging from the code", any more than I would bet money on the script for Hamlet "emerging" from a finite group of randomly typing monkeys. (Of course, if you have an infinite number of monkeys then it’s more achievable, but that’s infinity for you…) However that argument is about process, and I believe that almost irrespective of process a good architect’s skills and perspectives can have a significant beneficial effect on the result. That’s what I want to explore here.

The Architect’s Perspective

One key distinction between the manager, the architect and the developer is that of perspective. As an architect I spend a lot of time understanding and analysing the different forces on a problem. These design forces may be technical, or human: financial, commercial or political. The challenge is to find a solution which best balances all the design forces, which if possible satisfies the requirements of all stakeholders. It is usually wrong and ultimately counter-productive to simply ignore some of the stakeholders or requirements as "less important" – any stakeholder (and by stakeholders I mean all those involved, not just senior managers) can derail a project if not happy.

Where design forces are either aligned or orthogonal, there is usually a "sweet spot" which strikes an acceptable balance. The problem effectively becomes one of performing a multi-dimensional linear analysis, and then articulating the solution.

However, sometimes the forces act in direct opposition. A good example is system security, where requirements for broad, easy access directly conflict with those for high security. In these cases the architect has to invest heavily in diplomacy skills – to invest a lot of time understanding and addressing different stakeholder positions. One common problem is "requirements" expressed as solutions, which usually hide an underlying concern that can be met many ways, once understood and articulated.

In cases of diametrically opposed requirements, there are usually three options:

  • Compromise – find an intermediate position acceptable to both. This may work, but it may be unacceptable to both, or it may fatally compromise the architecture.
  • Allow one requirement to dominate. This has to be a senior level business decision, but the architect must be sensitive to whether the outcome is genuinely accepted and viable, or whether suppressing the other requirements will cause the solution to fail.
  • Reformulate the problem to remove or reduce the conflict. In the security example the architect might come up with a cunning partitioning of the system which allows access to different elements under different security rules.

Of course, you can’t resolve all the problems at once – that way lies madness. An architect uses techniques like layered or modular structures, and multiple views of the architecture to "separate concerns". These are powerful tools to manage the problem’s complexity.

The architect must look at the big picture, balance the needs of multiple stakeholders, and bring to bear an understanding of the business, of strategy, of technology and of development project work at the same time. If these responsibilities are split among too many heads and isolated within separate organisational confines then you lose the ability to see how it all fits together, and increase the danger of things "falling through the cracks".

The Architect’s Responsibilities

The architecture, and its resolution of the various design forces (i.e. how it meets various stakeholder needs) have to be communicated to many who are not technical experts. The architect acting as technical leader must take much of this responsibility. The messages may have to be reformulated separately for different audiences: I have had great success with single-topic briefing papers, which describe aspects like security in business terms, and which are short and focused enough to encourage the readers to also consider their concerns separately.

The architect must listen to the voice inside, and carry decisions through with integrity. For an architect, the question is whether the architecture is elegant, and will deliver an adequately efficient, reliable and flexible solution. If the internal answer to this is not an honest "yes", it is important to understand why not, and decide whether all the various stakeholders can live with the compromises.

The architect must protect the integrity of the solution against the slings and arrows of outrageous projects. (Hamlet again?) Monitor in particular those design aspects which reflect compromises between design forces, because they will inevitably come under renewed pressure over time. The architect must not only do the right thing, but ensure it is done right.

While every person on the project should be doing these things, there is a natural tendency for most to allow delivery priorities to take precedence. A developer’s documentation, for example, must be adequate to communicate the solution to other developers and maintainers, but does not have to be comprehensible to other stakeholders. However for the architect integrity, fit and communication of the solution are primary responsibilities, not optional. In addition the architect should have sufficient independence to call out and challenge conflicts of interest when they do occur.

The Architect’s Skills

The architect should be equipped with a distinct set of skills in support of these responsibilities. These will include:

  • Design patterns and knowledge of how to apply them
  • Tools and techniques to formally document both detail designs and wider portfolios
  • Methods to ensure that requirements, especially non-functional ones, are documented unambiguously
  • Methods to review a solution design, model its behaviour and confirm the solution’s ability to meet requirements
  • The ability to clearly communicate solutions, issues and potential resolutions to a wide variety of stakeholders
  • The ability to support the project and programme managers in handling the impact of issues and related decisions

Now it’s perfectly possible (and highly desirable) that others on the project will have many of these skills between them. However their combination in the architect is key to the delivery of the architect’s value, and a solution with a good chance of meeting its various objectives.

The Architect’s Position

A good architect should be able to operate in various organisational positions or roles and still deliver the above. Irrespective of the official organisation chart I often end up working between two or more groups, and I suspect this is a common position for many architects. It may actually be a natural result of adopting the architect’s unique perspectives.

The architect’s role may to some extent overlap with that of developers, analysts or product owners, and in smaller organisations or projects the architect may also take on one of these roles. In that case the architect must be able to "wear the appropriate hat" when focusing on a specific project issue or taking a wider view. The architect must then ensure that his or her ability to look at the wider picture is not compromised by the project relationship.

Conversely, a central architecture group may be accused of sitting in an ivory tower, separate from the realities of the business and the developers at the coal face. An architect in such a position must actively display an interest in, and willingness to help with, practical project issues.

A good architect will reconcile the need for a broad perspective and the specific responsibilities of a given position, thereby delivering distinct value compared with someone who has a more specific scope. I may on occasion be challenged for taking a wider interpretation of scope than others, but the insights which accrue from that perspective are almost always seen as valuable.

Conclusions

These are generalisations, and in practice there are as many variants on the architect’s role, skills and delivery as there are individuals who take the title. However it is generally true that an architect’s involvement increases the chance that a solution’s behaviour will be predictable, understood, and a good fit to its objectives. That’s the fundamental USP of the architect.

Posted in Agile & Architecture

Testing vs Modelling, Detection vs Prediction, Hope vs Knowledge

The Challenge

I often hear a statement which worries me, especially but not exclusively in agile projects, along the lines of “we’ll make sure it works when we test it later”.

Now you may think this is an odd view coming from a man who has written testing courses, presented conference papers on testing and developed testing tools, but let me explain myself.

First up, there’s the old chestnut that the objective of testing is not to prove something works, but to find errors. All you can actually do by testing is locate problems to be fixed, although obviously if problems are hard to find, that increases confidence in your product. However the much deeper issue is that testing is commonly viewed as an alternative to properly understanding and documenting the expected behaviour of a system, and reviewing in advance whether a proposed design will deliver that behaviour. That can be a recipe for failure.

Obviously in some areas this is an acknowledged and viable trade-off. If we are exploring functional alternatives, or working in a problem space where extracting documented requirements is tricky, then agile development and testing is a powerful solution, and we accept the rework that may result where we get it wrong. Having said that, even in something like UI development it may be better to develop cheap models such as wireframes, and at least attempt to explore solution fit before we commit too much to code.

The problem is that when we come to the more fundamental architectural elements and non-functional behaviour, the dynamics change dramatically. The best way to show this is a variant of the testing “V Model”:

For functional details, the gap between development and testing is small, and they can quickly be reworked and retested. However some of the key architectural and non-functional aspects can only be fully tested late in the delivery process (and frequently only late in the overall programme), if at all. The “testing gap” becomes huge, the impact of any change substantial, and the rework path lengthy.

One challenge is that many non-functional tests require an environment representative of the technology and scale of the production system. If this is provided at all, it is typically late in the project, or testing has to be shoe-horned into a short window on the production system before operations commence. If that uncovers a major issue, it is simply too late.

That’s assuming that the issue is detectable. In an agile development, it may be difficult to understand “what acceptable looks like”, if there is no adequate agreed, documented definition of the expected non-functional behaviour.

The other challenge is that good non-functional testing is hard, and limited in what it can achieve. Simulating a peak load is difficult, especially with the variety of data in a real production peak. You can simulate planned and unplanned equipment failures and restarts, but by definition only predictable events. If a problem only emerges from lengthy running or a “perfect storm” event, then testing is unlikely to uncover it. Basically resilience is testable, performance may be testable, reliability isn’t. Similar considerations also apply to other non-functional aspects like security.

The Solution

The solution is to adopt an analytical and predictive approach: trying to understand, articulate and document the expected behaviour of the solution, before you build it. Importantly this is not just thinking about the solution (although thinking is vital), but thinking with models.

Models in this context take many forms. They can be diagrams, possibly based on UML, but not necessarily: for example reliability block diagrams or fault tree analyses are powerful tools to understand resilience and reliability. They can be spreadsheets, for example profiling expected transaction mixes and their relative resource requirements. They can also be active software, whether simulations of some expected behaviour, or point implementations to quantify some aspect of the solution, but the point is that their purpose is to understand the solution before a major technical commitment, not to deliver functionality. Irrespective of form all models should lend themselves to a quantitative understanding of the solution, not just “what?”, but “how much?” and “how well?”.

For example, here’s a simple redundancy scheme modelled using RelQuest, my own Visio-based fault tree analysis tool, from which we can understand not only the various combinations of failures which lead to loss of service, but also the relative probability and impact (e.g. Mean Time to Repair) of each combination.
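Even without a dedicated tool, the core arithmetic of such a model is straightforward. A sketch for a simple redundant pair, with invented failure and repair figures:

```python
# Sketch: availability arithmetic for a simple redundant pair.
# MTBF/MTTR figures are invented for illustration.
mtbf = 2000.0    # mean time between failures, hours, per server
mttr = 8.0       # mean time to repair, hours

unavailability = mttr / (mtbf + mttr)          # per server, ~0.004
pair_down = unavailability ** 2                # both down at once (independent)

print(f"single server availability: {1 - unavailability:.4%}")
print(f"redundant pair availability: {1 - pair_down:.6%}")
# ~99.6% becomes ~99.998% -- and the model also shows that the dominant
# residual risk is anything which takes both servers out together.
```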

Models and simulations provide you with an early understanding of the system behaviour, so you can understand whether something should work, or not, and if not where to focus your efforts. They can be detailed, like the example fault tree above, or as quick as an early first pass through a platform provider’s sizing tool: even an approximate approach may provide real value.

Numbers are your friends. I am a great fan of Fermi estimates (see the sidebar) – quick “order of magnitude” approximations to see if you have understood the key elements in a problem, and whether the answer looks viable or not.

You can easily get viable estimates of this type for performance, capacity or reliability. If the answer is “no problem”, like we can easily accommodate millions of transactions per hour on a single server and we expect thousands, then you’re probably fine. If the answer is the other way round, like the developer who proudly presented me a solution which would take 1s CPU time to do a calculation, but we needed to do a thousand a second, then the design needs to change (I got it down to 2ms, which was acceptable). If it’s marginal, then you probably need to do a more accurate model and calculation, or build a greater degree of flexibility into the solution.
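The sums in that example are trivial, which is exactly the point – a sketch:

```python
# Sketch of the Fermi check behind the example above.
required_per_second = 1000               # calculations needed per second
cpu_budget = 1.0 / required_per_second   # 1 ms of CPU per calculation

measured = 1.0                           # first design: 1 s per calculation
print(f"shortfall: {measured / cpu_budget:,.0f}x too slow")   # 1,000x

reworked = 0.002                         # after redesign: 2 ms per calculation
print(f"load at 1000/s: {reworked * required_per_second:.0%} of one core")
# 200% of one core: two cores' worth of CPU, fine on a multi-core server --
# exactly the kind of conclusion the model gives you before production code.
```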

Simulations or low-volume experiments may be a valid way to understand CPU, storage and memory usage, network bandwidth requirements, threading, virtualisation, and even failover behaviour. Anything which scales linearly can be measured at low volume and extrapolated, but you need to be wary of areas such as network latency or storage throughput where that may not be valid.

Ultimately anything which builds your understanding and proves that you have thought about the problems in advance is good, even if some detail may only be confirmed at later stages. The key point is that the problems become targets for analytical thinking rather than hope and prayers, and that makes them solvable.

The Conclusion

Testing on its own is absolutely necessary, but very much not sufficient. For tests to be meaningful you have to describe the predicted behaviour in advance, and for the system to have any chance of passing those tests it has to be engineered accordingly. We increasingly seek to drive functional development from written user stories and behaviour specifications. In the same way, professional development must be driven by quantitative models which forecast the non-functional behaviour, for testing to confirm rather than discover with surprise.

 

I love Fermi estimates, named for the great Italian-American physicist Enrico Fermi, who was always doing them. These are calculations which you know have a lot of inaccuracies, but which are simple enough to do quickly and get an answer which is “sort of right” to tell you if you have correctly understood the dimensions of the problem, and if something should work, or not.

Let’s do one. This is not about computing, but is an easy example to understand the process. How much does my house weigh?

Well my house is built mainly of brick, and for the purposes of this calculation can be thought of as a rectangular block roughly 8m x 12m, and about 3m high. (I happened to have these figures, but I could always just pace it out and use 1 pace = 1m). Allow for internal walls, and you could think of my house as four slabs of brick 8m long x 3m high, and four slabs 12m long x 3m. Alternatively that’s 4 slabs 20m long, or one slab 80m long. But remember that all the walls are at least two bricks thick, so it’s like one stack of single brick 160m long and 3m high. Now I know this doesn’t take any account of windows and doors, and the open plan bit at the front, but it’s also ignoring the roof and floor slabs, and I think that will balance out quite well. Google “house brick dimensions” gives us 215mm long and 65mm high, and a typical weight of 3.5kg. Divide 160m by 0.2m (this is a Fermi approximation remember) to get 800 bricks long. At 65mm high 3 bricks on top of each other will also be about 0.2m high, so the height of our stack will be 3x3m/0.2m = 45 bricks high, call it 50. That gives us a grand total of 50×800=40,000 bricks. Now 40,000×3.5kg = 140,000kg, or 140 tons. Fermi approximations are good for at best one significant figure, so round it off to 100 tons. Bingo!

So a simple model can get you a useful answer quickly, and you may even be able to do the maths in your head. Now obviously there are a lot of guesses and approximations here, like the density of all key materials is similar, and I haven’t so far accounted for the foundations, which might be needed, and I might want to double-check the typical weight of a brick which is a key value, but I’d be surprised if the “real” answer wasn’t somewhere between 50 and 300 tons.

You can easily do the same thing to get viable “order of magnitude” figures for performance, capacity or reliability.

Posted in Agile & Architecture

Does Agile Miss The Point About Engineering?

A bicycle-car

A former colleague, Neil Schiller, recently wrote an excellent article, https://www.linkedin.com/pulse/agile-data-programmes-neil-schiller/, on the challenge of using agile approaches in data-centric programmes. In it, he referenced and reviewed a classic cartoon by Henrik Kniberg which is often used to promote the advantages of agile delivery:

Now it’s wholly possible that I am reading more into a limited analogy than appropriate, but I think this same diagram can also be used to explain some of the fundamental issues with agile approaches.

Think about what the bottom line is claiming: that by a set of small incremental deliveries we can somehow achieve the equivalent of transforming a scooter into a bicycle, into a motorbike and then into a car, each a fully working vehicle meeting the user’s requirements. In the real physical world this is laughable: each has a wholly different architecture, with no commonality whatsoever between equivalent subsystems at any of the stages. Key properties arise from the fundamental structure – a simple tubular chassis for the bike, a more complex frame including highly stressed moving parts like the engine and transmission for the motorbike, and typically a monocoque chassis/exoskeleton for the car. These underlying elements form the basis, and you have to get them right as you can’t modify them later: you can’t “add strength” to a car by adding more tubes after the event (unless you are going banger racing!).

In the real physical world you create a complex engineered artefact by understanding its required properties, creating a layered structure which is designed to meet them, and then building up those layers to progressively deliver the required result. This requires that the most fundamental, least readily changed layers have to be right, and stable, early on, and only then can you add the upper more flexible elements. The first version of the process in the diagram is actually wholly correct, the second a joke.

Is it so very different for software? If we’re talking about major systems with real-world complexity and non-functional demands, I’m not convinced. The “ultra-agile” argument is that it is always possible to “refactor” code to make changes. This is true up to a point, but it can be difficult and costly to change the underlying structure. If that does not meet requirements for security, or reliability, or performance, then no amount of fiddling will fix it, and if the changes amount to a fundamental rewrite then it’s difficult to see where any advantage has been gained.

Obviously there are differences. The vehicle designer seeks both to create and to use solutions which, once right, will be re-used many times (from hundreds to millions of instances), but will be difficult to change once in production. Software development is still largely about one-offs. Software requirements are typically less well-defined than those for established hardware products. In vehicle manufacture, the roles of engineer/designer and constructor are distinct, whereas in software the designers often have an ongoing role in construction, and may at least subconsciously seek to extend that role (guilty as charged 🙂 ). On the other hand, the car designer knows that an approved design will be built largely as documented, whereas the software designer has no such assurance.

Fundamentally, however, I believe that software development can benefit from engineering disciplines just as much as the design of physical products. For example, it is much better to attempt to understand and predict up front how a given design will respond to its non-functional requirements. Testing is a very good way to confirm that your solution basically works, and to refine the details. It is a very bad way to uncover fundamental deficiencies, especially if this occurs late in the development process.

This doesn’t mean that I don’t believe in agile development. Far from it: I am a great believer in iterative and incremental development, and in structures such as Scrum sprints to manage them. However, I really don’t believe in architecture "emerging from the code", just as I would not expect a great car design to "emerge" from the work of a group of independent fabricators working on small parts of the problem without any overarching design. Cars "designed" in such a way tend to be more Austin Allegro (or AMC Pacer) than Bugatti Veyron.

Instead, architecture has to be understood as providing the structure within which the code is developed, with that overall structure created using engineering disciplines: assess the various forces on the design, articulate how those forces will be resolved (including what compromises are required), then document and model the solution to predict its properties.

If the requirement is for a sports car, design a sports car, don’t try and “refactor” a pushbike…

Creation of such designs, documents and models is a distinct discipline from coding. Some of it may be the domain of specialists, and some may be performed by those who also have other development roles, but it is a separate activity requiring appropriate skills and experience. Ironically, I think Tom Gilb got it about right in his 1988 book "Principles of Software Engineering Management", when he defined a "Software Engineer" as someone who "can translate cost and quality requirements into a set of solutions to reach the planned levels", and who has the skill to change any given quality dimension of a system by a factor of ten if required. The latter challenge would catch out a lot of people who call themselves "architects".

In addition, complex designs need some form of centralised, overall ownership and design control. This again requires specialist skills and cannot just be allocated randomly; it will sit with an Architect and/or a Product Owner.

Within such a framework, concepts such as continuous integration and testing still make sense. Development, both functional and non-functional, can still be managed via the backlog and sprint plans, epics and stories. However, the "minimum viable product" may require completion of much of the underlying architecture as well as the major functional capabilities. Major capabilities, both functional and non-functional, have to be analysed and designed up front, not left to stories somewhere in the backlog. The intermediate delivery is a car, albeit an incomplete one, not a complete bicycle.

Agile development and architecture are not incompatible, but complementary. Successful development of a complex real-world system will inevitably follow the first model in Kniberg’s cartoon, no matter how much the agilists would like it to be the second. At scale, and in the face of more challenging requirements, software development needs to be treated as an engineering discipline, with agile structures in service of that discipline, not avoiding it.

Posted in Agile & Architecture | Leave a comment

Architecture Lessons from a Watch Collection

Early 1990s Hybrid Watches
Camera: Panasonic DMC-GX8 | Date: 21-10-2017 10:06 | Resolution: 5118 x 3199 | ISO: 1250 | Exp. bias: -66/100 EV | Exp. Time: 1/60s | Aperture: 8.0 | Focal Length: 30.0mm | Lens: LUMIX G VARIO 12-35/F2.8

I recently started a watch collection. To be different, to control costs and to honour a style which I have long liked, all my watches are hybrid analogue/digital models. Within that constraint, they vary widely in age, cost, manufacturer and style.

I wanted to write something about my observations, but not just a puff piece about my collection. At the same time, I am long overdue to write something on software architecture and design. This piece grew out of wondering whether there are real lessons for the software architect in my collection. Hopefully without being too contrived, there really are.

Hybrid Architectures Allow the Right Technology for the Job

There’s a common tendency in both watch and software design to try and solve all requirements the same way. Sometimes this comes out of a semi-religious obsession with a certain technology, at others it’s down to the limitations of the tools and mind-set of the designer. Designs like the hybrid watch show that allowing multiple technologies to play to their strengths may be a better solution, and not necessarily even with a net increase in complexity.

Using two or three rotating hands to indicate the time is an excellent, elegant and proven solution, arguably more effective for a "quick glance" than the digital equivalent. However, for anything beyond that basic function, the world of analogue horology has long had a very apt name: "complications". Mechanical complexity ratchets up rapidly for even the simplest of additional functions. Conversely, even the cheapest of my watches has a stopwatch, alarm and perpetual calendar, and most support multiple time zones and easy or even automatic travel and clock-change adjustments. Spots of luminous paint make a watch readable in darkness, but illuminating a small digital display is more effective.

The hybrid approach also tackles the aesthetic challenge: while many analogue watches are things of beauty, most digital watches just aren’t. Hybrid watches (just like analogue ones) certainly can be hit with the ugly stick, but I’ve managed to assemble a number of very pretty examples.

The lesson for the software architect is simple: if the compromise of trying to do everything with a single technology is too great, don’t be afraid to embrace a hybrid solution. Hybrid architectures are a powerful tool in the right place, not something to be ruthlessly eliminated by purist “Thought Police”.

A Strong, Layered Architecture Promotes Longevity

Take a look at these three watches: a 1986 Omega Seamaster, a 1999 Rado Diastar, and a recent Breitling Aerospace. Very different, yes?


Sisters Under the Skin, or Brothers from Another Mother?

Visually, they are. But their operation is almost identical, so much so that the user manuals are interchangeable. Clearly some Swiss watchmaker just “got it right” in the late 1980s, and that solution has endured, with a life both within and outside the Swatch Group, the watch equivalent of the shark or crocodile. While the underlying technology has changed only slightly, the strong layering has allowed the creation of several different base models, and then numerous variants in size, shape and external materials.

This is a classic example of long-term value from investing in a strong underlying architecture, but also ensuring that the architecture allows for “pace layering”, with the visible elements changing rapidly, while the underpinnings may be remarkably stable.

It’s worth noting that basic functionality alone does not ensure longevity. None of these watches has survived unchanged; it’s the strength of the underlying design which endures.

Oh, and yes, the Omega is a full-sized man’s watch (as per 1986)! More about fashion later…

Enabling Integration Unlocks New Value

The earliest dual-mode watches were little more than a simple digital watch and a quartz analogue watch sharing the same case, but not much else except the battery (and sometimes not even that!). The cheapest are still built on this model, which might most charitably be labelled "Independent" – my Lambretta watch is a good example. There’s actually nothing wrong with this model: improve the capability of the digital part, the quality of the analogue part, and the case materials and design, and you have, for example, my early 1990s Citizen watches, which are among my favourites. However, as a watch user you are essentially just running two watches in one case. They may or may not tell the same time.

The three premium Swiss watches represent the next stage of integration. The time is set by the crown moving the hands, but the digital time is set in synchronisation. There’s a simple way to advance and retard both in whole hours to simplify travel and clock-change adjustments. Seconds display is digital-only to simplify matters. Let’s borrow a photography term and call this “Analogue Priority” – still largely manual, but much more streamlined.

“Digital Priority”, as implemented in early 2000s Seikos, is another step forward. You set the digital time accurately for your current location and DST status, and you have one-touch change of both digital and analogue time to any other time zone. The second hand works as a status indicator, or automatically synchronises to the digital time when in time mode.

However the crown has to go to the Tissot T-Touch watches. Here the hands are just three indicators driven entirely by the digital functions: they become the compass needle in compass mode, show the pressure trend in barometer mode, sweep in stopwatch mode, park at 12.00 when the watch is in battery-saving sleep mode. And they tell the time as well! Clearly full integration unlocks a whole set of capabilities not previously accessible.


Extremes of analogue/digital integration

So it is with software. Expose the control and integration points of your modules to one another, or to external access, and new value emerges as the whole rapidly becomes much more than the sum of the separate parts.
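As a toy illustration of the point (invented names, Python purely for brevity): the difference is between hands hard-wired to tell the time, and hands exposed as indicators that any function can drive.

```python
# Toy sketch: exposing a module's control points unlocks new uses.

class Hands:
    """Three rotating indicators - a control point, not a sealed sub-watch."""
    def set_angle(self, hand: str, degrees: float) -> None:
        print(f"{hand} hand -> {degrees:.0f} degrees")

class CompassMode:
    """A new function built purely on the exposed control point."""
    def __init__(self, hands: Hands) -> None:
        self.hands = hands
    def update(self, heading_degrees: float) -> None:
        self.hands.set_angle("second", heading_degrees)  # hands become a needle

hands = Hands()
CompassMode(hands).update(135.0)   # the same hands now show a bearing
```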

Provide for Adjustment Where Needed…

While I love the look of some watch bracelets (especially those with unusual materials, like the high-tech black ceramic of the Rado), adjusting them is a complex process, and inevitably ends up with a compromise: either too loose or too tight. Even if the bracelet offers some form of micro-adjustment and you get it “just right” at one point, it will be wrong as the wrist naturally swells and shrinks over time. Leather straps allow easier adjustment, but usually in quite coarse increments of about 1cm, so you’re back to a compromise again.

The ideal would be a bracelet with either an elastic/sprung element, or easily accessible micro-adjustment, but I don’t have a single example in my collection like that. I hear Apple are thinking about an electrically self-adjusting strap for the next iWatch, but that sounds somewhat OTT.

On the other hand, I have a couple of £10 silicone straps for my Fitbit which offer easy adjustment in 2mm increments. Go figure…

We could all quote countless similar software examples, of either a “one size fits all” setting which doesn’t really suit, or an allegedly controllable or automated setting which misses the useful values. The lesson here is to understand where adjustment is required, and provide some accessible way to achieve it.

… But Avoid Wasting Effort on the Useless

At the other end of the scale, several of my watches have “functions” of dubious value. The most obvious is the rotating bezel. In the Tissot, it can be combined with the compass function to provide heading/azimuth information. That’s genuinely useful. The Citizen Wingman has a functioning circular slide rule. Again valid, but something of a hostage to progress. 🙂


At least the slide rule does something, if you can remember how!

Do the rotating bezels of my Citizen Yachtsman or the Breitling Aerospace have any function? Not as far as I can see.

Now I’m not against decorative or "fun" features, especially in a product like a watch, which nowadays is worn as much as jewellery as for its primary function. But I do think that they need to be the result of deliberate decisions, and designers need to think carefully about which are worth the effort, and which introduce complexity outweighing their value. That lesson applies equally to software as to hardware.

… And Don’t Over-Design the User Interface

The other issue here is that unless it’s pure jewellery, a watch does need to honour its primary function, and support easily telling the time, ideally for users with varying eyesight and in varying lighting conditions. I have been the Rado’s proud owner for nearly 18 years, but as my 50-something eyesight has changed it has become increasingly annoying as a time-telling device, mainly due to its "low contrast" design. It’s not alone: for example, my very pretty Citizen Yachtsman has gold and pale green hands on a gold and pale green face, which almost renders it back to a pure digital watch in some lights!

At the other end of the scale, the Breitling Aerospace is also very elegant, but it is an exemplar of clarity, with a high-contrast display and clear markings including actual numbers. It can be done, and the message is that clarity and simplicity trump "design" in the user interface.

This is equally true of software. I am not the only person to have written bemoaning the usability issues which arise from loss of contrast and colour in modern designs. The message is “keep it simple”, and make sure that your content is properly visible, don’t hide it.

Fashion Drives Technology. Fashion Has Nothing To Do With Technical Excellence

All my watches are good timepieces, bar the odd UI foible, and will run accurately and reliably for years with an occasional battery change. However, if you pick up a watch magazine, or browse any of the dedicated blogs, there is almost no mention of such devices; indeed, barely any mention of quartz/digital watches at all.

Instead, like so much else in the world we are seeing a polarisation around two more “extreme” alternatives: manual wind and “automatic” (i.e. self-winding) mechanical watches, or “charge every day” (and replace every couple of years) smartwatches. The former can be very elegant and impressive pieces of engineering, but will stop and need resetting unless you wind or wear them at least every few days – a challenge for the collector! The latter offer high functionality, but few seem engineered to provide 30 years of hard-wearing service, because we know they will be obsolete in a fraction of that time.

Essentially fashion has driven the market to displace a proven, reliable technology with “challenging” alternatives, which are potentially less good solutions to the core requirements, at least while they are immature.

This is not new, or unique to the watch market. In software, we see a number of equivalent trends which also seem to be driven by fashion rather than technical considerations. A good, if possibly slightly contentious, example might be the displacement of server-centric website technologies, which are very easy to develop, debug and maintain, by more complex and trickier client-centric solutions based on scripting languages. There may be genuine architectural requirements which dictate using such technologies as part of the solution, e.g. "this payload is easy to secure and send as raw data, but difficult and expensive to transmit fully rendered". Fine. But "it’s what Facebook does" or "it’s the modern solution" are not architecture, just fashion statements.

On a more positive note, another force may tend to correct things. Earlier I likened the Omega/Rado/Breitling design to the evolutionary position of a shark. Well there’s another thing about sharks: evolution keeps using the same design. The shark, swordfish, ichthyosaur, and dolphin are essentially successive re-uses of a successful design with upgraded underlying architecture. Right now, Fossil and others are starting to announce hybrid smartwatches with analogue hands alongside a fully-fledged smartwatch digital display.

In fashion terms, what goes around, comes around. It’s true for many things, watches and software architectures among them.

Conclusions

Trying to understand the familial relationships, similarities and differences in a group of similar artefacts is interesting. It’s also useful for a software architect to try and understand the architectural characteristics behind them, and especially how this can help some designs endure and progressively evolve to deliver long-term value, something we frequently fail to achieve in software. At the same time, it’s salutary to recognise where non-architectural considerations have a significant architectural impact. Think about the components, relationships and dynamics of other objects in architectural terms, and the architecture of our own software artefacts will benefit.

Posted in Agile & Architecture, Thoughts on the World, Watches | Leave a comment

Integration Or Incantation?

I was travelling recently with Virgin Atlantic. I went to check in online, typed in my booking code and selected both our names, clicked "Next", and got an odd error saying that I couldn’t check in. I wondered momentarily if it was yet more pre-Brexit paranoia about Frances’ Irish passport, but there was a "check in individually" option which rapidly revealed that Frances was fine, it was my ticket which was causing the problem.

The web site suggested I ring the reservation number, which I did, listened to 5 minutes of surprisingly loud rock music (you never mistake being on hold to Virgin for being on hold to anyone else), and got through to a helpful chap. He said "OK, I can see the problem, I will re-issue the ticket." Two more minutes of distinctive music, and he invited me to try again. Same result. He confirmed that we were definitely booked in and had our seat reservations, and suggested that I wait until I reached the airport. "They will help you there." Fine.

Next morning, we were tackled on our way into the Virgin area by a keen young lady who asked if we had had any problems with check-in. I said we had, and she led us into what can best be described as a "kraal" of check-in terminals, and logged herself into one. This displayed a smart check-in agent’s application, complete with all the logos, the picture of Branson’s glamorous Mum, and so on. She quickly clicked through a set of steps very similar to the ones I had tried, and then clicked OK. "Oh, that’s odd", she said.

Next, she opens up a green screen application. Well, OK, it’s actually white on a Virgin red background, but I know a green screen application when I see one. She locates my ticket, checks a few things, and types in the command to issue my pass. Now I’m not an expert on Virgin’s IT solutions, but I know the word "ERR" when I see it. "Oh, that’s not right either" says the helpful young lady "I’ll get help".

Two minutes later, the young lady is joined by a somewhat older, rather larger lady. (OK, about the same age as me and she looked a lot better in her uniform than I would, but you get the idea.) "Hello Mr Johnston, let’s see if we can sort this out". She takes one look at the screen and says "We actually have two computer systems, and they don’t always talk to each other or have the same information."

… which could be the best, most succinct summary of the last 25 years of my career I have heard, but I digress…

Back to the story. The new lady looks hard at both applications, and then announces she can see the problem (remember, all this is happening on a screen I can see as well as the two Virgin employees). "Look, they’ve got your name with a ‘T’ here, and no ‘T’ here" (pointing to the "red screen" programme).

Turning to the younger lady, she says "Right, this is how to fix it." "Type DJT, then 01" (The details are wrong, but the flavour is correct…) "Put in his ticket number. Type CHG, then enter. Type in his name, make sure we’ve got the T this time. Now set that value to zero, because this isn’t a chargeable change, and we can do a one letter change without a charge. Put in zero for the luggage, we can change that in a minute. Type DJQ, enter. Type JYZ, enter. OK, that’s better. Now try and print his pass." Back to the sexy new check in app, click a few buttons, and I’m presented with two fresh boarding passes. Job done.

Now didn’t we have a series of books where a bunch of older, experienced wizards taught keen young wizards to tap things with sticks and make incantations? The solution might as well have been to tap the red screen programme with a wand and shout "ticketamus"…

The issues here are common ones. Is it right to be so dependent on what is clearly an elderly and complex legacy system? Are the knowledge transfer processes good enough, or is there a risk that next time the more experienced lady who knows the magic incantations might not be available? Why is such a fundamental piece of information as the passenger names clearly being copy-typed, rather than part of the automated integrations? And is this a frequent enough problem that there should really be an easier way to fix it? Ultimately the solutions are traditional ones: replace the legacy system, or improve its integrations. But these are never quick or easy.

Now please note I’m not trying to get at Virgin at all. I know for a fact that every company more than a few years old has a similar situation somewhere in the depths of their IT. The Virgin staff were all cheerful, helpful and eventually resolved the problem quickly. However it is maybe a bit of a management error to publicly show the workings "behind the green screen" (to borrow another remarkably apposite magical image, from the Wizard of Oz). We expect to see the swan gliding, not the feet busily paddling. On this occasion it was interesting to get a glimpse, and I was sympathetic, but if the workings cannot be less dependent on "magic", maybe they should be less visible?

Posted in Agile & Architecture, Thoughts on the World | Leave a comment

How Strong Is Your Programming Language?

Line-up at the 2013 Europe's Strongest Man competition
Camera: Canon EOS 7D | Date: 29-06-2013 05:31 | Resolution: 5184 x 3456 | ISO: 200 | Exp. bias: -1/3 EV | Exp. Time: 1/160s | Aperture: 13.0 | Focal Length: 70.0mm (~113.4mm)

I write this with slight trepidation as I don’t want to provoke a "religious" discussion. I would appreciate comments focused on the engineering issues I have highlighted.

I’m in the middle of learning some new programming tools and languages, and my observations are coalescing around a metric which I haven’t seen assessed elsewhere. I’m going to call this "strength", as in "steel is strong", defined as the extent to which a programming language and its standard tooling avoid wasted effort and prevent errors. Essentially, "how hard is it to break?". This is not about the "power" or "reach" of a language, or its performance, although typically these correlate quite well with "strength". Neither does it include other considerations such as portability, tool cost or ease of deployment, which might be important in a specific choice. This is about the extent to which avoidable mistakes are actively prevented, thereby promoting developer productivity and low error rates.

I freely acknowledge that most languages have their place, and that it is perfectly possible to write good, solid code with a "weaker" language, as measured by this metric. It’s just harder than it has to be, especially if you are free to choose a stronger one.

I have identified the following factors which contribute to the strength of a language:

1. Explicit variable and type declaration

Together with case sensitivity issues, this is the primary cause of "silly" errors. If I can start with a variable called FieldStrength, accidentally refer to FeildStrength, have this get through the editing and compile processes, and then see a runtime error because I’m trying to use an undefined value, then the programming "language" doesn’t deserve the label. In a strong language, this will be immediately questioned at edit time, because each variable must be explicitly defined, with a meaningful and clear type. Named types are better than those assigned by, for example, using multiple different types of brackets in the declaration.
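To illustrate (Python here, simply because it’s concise, and it will come up again below; the names are invented): the typo sails through editing and loading, and only fails when that line is actually executed.

```python
field_strength = 10.0

def report() -> float:
    # Typo: "feild" for "field". Nothing complains at edit or import time...
    return feild_strength * 2

try:
    report()
except NameError as err:
    print(err)   # name 'feild_strength' is not defined - but only at runtime
```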

2. Strong typing and early binding

Each variable’s type should be used by the editor to only allow code which invokes valid operations. To maximise the value of this the language and tooling should promote strong, "early bound" types in favour of weaker generic types: VehicleData not object or var. Generic objects and late binding have their place, in specific cases where code must handle incoming values whose type is not known until runtime, but the editor and language standards should then promote the practice of converting these to a strong type at the earliest practical opportunity.

Alongside this, the majority of type conversions should be explicit in code. Those which are always "safe" (e.g. from an integer to a floating point value, or from a strong type to a generic object) may be implicit, but all others should be spelt out in code with the ability to trap errors if they occur.
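A minimal sketch of what I mean (Python again, with invented names): the risky conversion is spelled out at the boundary, and failure is trapped where it happens, rather than a weak value leaking onwards.

```python
def parse_reading(raw: str) -> float:
    """Convert an untyped external value to a strong type, explicitly."""
    try:
        return float(raw)           # explicit, trappable conversion
    except ValueError:
        raise ValueError(f"bad sensor reading: {raw!r}") from None

print(parse_reading("3.5"))         # 3.5 - strongly typed from here on
```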

3. Intelligent case insensitivity

As noted above, case sensitivity is a primary cause of "silly" errors. The worst case is a language which allows unintentional case errors through editing and deployment, and then throws runtime errors when things don’t match. Such a language isn’t worth the name. The best case is a language where the developer can choose meaningful capitalisation for clarity when defining methods and data structures, and the tools automatically correct any minor case issues as the developer references them; where items are accessed via a mechanism which cannot be corrected (e.g. via a text string passed from an external source), access is case-insensitive. In this best case the editor and compiler will also reject any two definitions with overlapping scope which differ only in case, and require a stronger differentiation.

Somewhere between these extremes a language may be case sensitive but require explicit variable and method declaration and flag any mismatches at edit time. That’s weaker, as it becomes possible to have overlapping identifiers and accidentally invoke the wrong one, but it’s better than nothing.

4. Lack of "cruft", and elimination of "ambiguous cruft"

By "cruft", I mean all those language elements which are not strictly necessary for a human reader or an intelligent compiler/interpreter to unambiguously understand the code’s intent, but which the language’s syntax requires. They increase the programmer’s work, and each extra element introduces another opportunity for errors. Semicolons at the ends of statements, brackets everywhere and multiply repeated type names are good (or should that be bad?) examples. If I forget the semicolon, but the statement fits on one line and otherwise makes syntactic sense, then the code should work without it, or the tooling should insert it automatically.

However, the worse issue is what I have termed "ambiguous cruft", where it’s relatively easy to make an error in this stuff which takes time to track down and correct. My personal bête noire is the chain of multiple closing curly brackets at the end of a complex C-like code block or JSON file, where it’s very easy to mis-count and end up with the wrong nesting.  Contrast this with the explicit End XXX statements of VB.Net or name-matched closing tags of XML. Another example is where an identifier may or may not be followed by a pair of empty parentheses, but the two cases have different meanings: another error waiting to occur.
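The JSON case is easy to demonstrate (a contrived snippet, but representative): lose one closing brace among several and the parser complains at the very end of the text, far from where the brace actually went missing.

```python
import json

# One closing brace short of the three needed - but at which level?
text = '{"config": {"db": {"host": "localhost", "port": 5432}}'

try:
    json.loads(text)
except json.JSONDecodeError as err:
    print(err)  # reported at the end of the input, not at the real omission
```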

5. Automated dependency checking

Not a lot to say about this one. The compile/deploy stage should not allow through any code without all its dependencies being identified and appropriately handled. It just beggars belief that in 2017 we still have substantial volumes of work in environments which don’t guarantee this.

6. Edit and continue debugging

Single-stepping code is still one of the most powerful ways to check that it actually does what you intend, or to track down more complex errors. What is annoying is when this process indicates the error, but it requires a lengthy stop/edit/recompile/retest cycle to fix a minor problem, or when even a small exception causes the entire debug session to terminate. Best practice, although rare, is "edit and continue" support which allows code to be changed during a debug session. Worst case is where there’s no effective single-step debug support.

 

Some Assessments

Having defined the metric, here’s an attempt to assess some languages I know using it.

It will come as no surprise to those who know me that I give VB.Net a rating of Very Strong. It scores almost 100% on all the factors above, in particular being one of very few languages to express the best practice approach to case sensitivity outlined above. Although fans of more "symbolic" languages derived from C may not like the way things are spelled out in words, the number of "tokens" required to achieve things is very low, with minimal "cruft". For example, creating a variable as a new instance of a specific type takes exactly 5 tokens in VB.Net, including explicit scope control if required, and with the type name (often the longest token) used once. The same takes at least 6 tokens plus a semicolon in Java or C#, with the type name repeated at least once. As noted above, elements like code block ends are clear and specific, removing a common cause of silly errors.

Is VB.Net perfect? No. For example, if I had a free hand I would be tempted to make the declaration of variables for collections or similar automatically create a new instance of the appropriate type, rather than requiring explicit initialisation, as this is a common source of errors (albeit one well flagged by the editor and easily fixed). It allows some implicit type conversions which can cause problems, albeit rarely. However, it’s pretty "bomb proof". I acknowledge there may be some cause and effect interplay going on here: it’s my language of choice because I’m sensitive to these issues, but I’m sensitive to these issues because the language I know best does them well, and I miss that when working in other contexts.

It’s worth noting that these strengths relate to the language and are not restricted to expensive tools from "Big bad Microsoft". For example the same statements can be made for the excellent VB-based B4X Suite from tiny Israeli software house Anywhere Software, which uses Java as a runtime, executes on almost any platform, and includes remarkable edit and continue features for software which is being developed on PC but running on a mobile device.

I would rate Java and C# slightly lower as Pretty Strong. As fully compiled, strongly typed languages many potential error sources are caught at compile time if not earlier. However, the case-sensitivity and the reliance on additional, arguably redundant "punctuation" are both common sources of errors, as noted above. Tool support is also maybe a notch down: for example while the VB.Net editor can automatically correct minor errors such as the case of an identifier or missing parentheses, the C# editor either can’t do this, or it’s turned off and well hidden. On a positive note, both languages enforce slightly more rigor on type conversions. Score 4.5 out of 6?

Strongly-typed interpreted languages such as Python get a Moderate rating. The big issue is that the combination of implicit variable declaration and case sensitivity allow through far too many "silly" errors which cause runtime failures. "Cruft" is minimal, but the reliance on punctuation variations to distinguish the declaration and use of different collection types can be tricky. The use of indentation levels to distinguish code blocks is clear and reasonably unambiguous, but can be vulnerable to editors invisibly changing whitespace (e.g. converting tabs to spaces). On a positive note the better editors make good use of the strong typing to help the developer navigate and use the class structure. I also like the strong separation of concerns in the Django/Jinja development model, which echoes that of ASP.Net or Java Server Faces. I haven’t yet found an environment which offers edit and continue debugging, or graceful handling of runtime exceptions, but my investigations continue. Score 2.5 out of 6?

Weakly-typed scripting languages such as JavaScript or PHP are Weak, and in my experience highly error prone, offering almost none of the protections of a strong language as outlined above. While I am fully aware that like King Canute, I am powerless to stop the incoming tide of these languages, I would like to hope that maybe a few of those who promote their use might read this article, and take a minute to consider the possible benefits of a stronger choice.

 

Final Thoughts

There’s a lot of fashion in development, but like massive platforms and enormous flares, not all fashions are sensible ones… We need a return to treating development as an engineering discipline, and part of that may be choosing languages and tools which actively help us to avoid mistakes. I hope this concept of a "strength" metric might help promote such thinking.

Posted in Agile & Architecture, Code & Development | Leave a comment

Why I (Still) Do Programming

It’s an oddity that although I sell most of my time as a senior software architect, and can also afford to purchase the software I need, I still spend a lot of time programming, writing code. Twenty-five years ago, people a little older than I was then frequently told me "I stopped writing code a long time ago; you will probably be the same", but that has turned out to be completely untrue. It’s not even that I only do it as a hobby or for personal projects: I work some hands-on development into the majority of my professional engagements. Why?

At the risk of mis-quoting the Bible, the answer is legion, for they are many…

To get the functionality I want

I have always been a believer in getting computers to automate repetitive actions, something they are supremely good at. At the same time I have a very low patience threshold for undertaking repetitive tasks myself. If I can find an existing software solution great, but if not I will seriously consider writing one, or at the very least the “scaffolding” to integrate available tools into a smooth process. What often happens is I find a partial solution first, but as I get tired of working around its limitations I get to the point where I say “to hell with this, I’ll write my own”. This is more commonly a justification for personal projects, but there have been cases where I have filled gaps in client projects on this basis.

Related to this, if I need to quickly get a result in a complex calculation or piece of data processing, I’m happy to jump into a suitable macro language (or just VB) to get it, even for a single execution. Computers are faster than people, as long as it doesn’t take too long to set the process up.

To explore complex problems

While I am a great believer in the value of analysis and modelling, I acknowledge that words and diagrams have their limits in the case of the most complicated problem domains, and may be fundamentally difficult to formulate and communicate for complex and chaotic problem domains (using all these terms in their formal sense, and as they are used in the Cynefin framework, see here).

Even a low-functionality prototype may do more to elicit an understanding of a complex requirement than a lot of words and pictures: that’s one reason why agile methods have become so popular. The challenge is to strike a balance, and make sure that an analytical understanding does genuinely emerge, rather than just being buried in the code and my head. That’s why I am always keen to generate genuine models and documentation off the back of any such prototype.

The other case in which I may jump into code is if the dynamic behaviour of a system or process is difficult to model, and a simulation may be a valid way of exploring it. This may just be the implementation of a mathematical model, for example a Monte Carlo simulation, but I have also found myself building dynamic visual models of complex interactions.
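As a flavour of the simulation case, here’s a minimal Monte Carlo sketch (the tasks and figures are invented for illustration): a plan made of uncertain parts, and the question "how often does the total blow the deadline?"

```python
import random

# Three tasks, each estimated as (best, likely, worst) weeks.
tasks = [(2, 4, 8), (1, 2, 5), (3, 5, 10)]
deadline_weeks = 13
trials = 100_000

overruns = 0
for _ in range(trials):
    # random.triangular(low, high, mode) draws one plausible duration
    total = sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
    if total > deadline_weeks:
        overruns += 1

print(f"P(overrun) = {overruns / trials:.0%}")  # estimated risk to the plan
```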

To prove my ideas

Part of the value I bring to professional engagements is experience or knowledge of a range of architectural solutions, and the willingness to invoke unusual approaches if I think they are a good fit to a challenge. However it’s not unusual to find that other architects or developers are resistant to less traditional approaches, or those outside their comfort zones. Models and PowerPoint can go only so far in such situations, and a working proof of concept can be a very persuasive tool. Conversely, if I find that it isn’t as easy or as effective as I’d hoped, then “prove” takes on its older meaning of “test” and I may be the one being persuaded. I’m a scientist, so that’s fine too.

To prove or assess a technology

Related to the last, I have found by hard-won experience that vendors consistently overstate the capabilities of their solutions, and a quick proof of concept can be very powerful in confirming or refuting a proposed solution, establishing its limitations or narrowing down options.

A variant on this is where I need to measure myself, or others, for example to calibrate what might or might not be adequate productivity in a given situation.

To prove I can

While I am sceptical of overstated claims, I am equally suspicious if I think something should be achievable and someone else says "that’s not possible". Many projects, both professional and personal, have started from the assertion that "X is impossible", and my disbelief in that. I get a great kick from bending technology to my will. To quote Deep Purple’s famously filthy song Knocking At Your Back Door, itself an exploration into the limits of possibility (with censorship), "It’s not the kill, it’s the thrill of the chase."

In the modern world of agile development processes, the architect and analyst roles are becoming blurred with that of "developer". I have always straddled that boundary, and proving my development abilities may help my credibility with development teams, allowing me to engage at a lower level of detail when necessary. My ability to program makes me a better architect, at the same time as architecture knowledge makes me a better programmer.

To make money?

Maybe. If a development activity can help to sell my skills, or advance a client’s project, then it’s just part of my professional service offering, and on the same commercial basis as the rest. That’s great, especially if I can charge a rate commensurate with the bundle of skills, not just coding. My output may be part of the overall product or solution, or an enduring utility, but more often any development I do is merely the means to an end: a design, proof of concept, or measurement.

On the other hand, quite a lot of what I do makes little or no money. The stuff I build for my own purposes costs me little, but has a substantial opportunity cost if I could use the time another way, and I will usually buy a commercial solution if one exists. The total income from all my app and plugin development over the years has been a few hundred pounds, probably less than I’ve paid out for related tools and components. This is a “hobby with benefits”, not an income stream.

Because I enjoy it

This is perhaps the nub of the case: programming is something I enjoy doing. It’s a creative act, and puts my mind into a state I enjoy, solving problems, mastering technologies and creating an artefact of value from (usually) a blank sheet. It’s good mental exercise, and like any skill, if you want to retain it you have to keep in practice. The challenge is to do it in the right cases and at the right times, and remember that sometimes I really should be doing something else!

Posted in Agile & Architecture, Code & Development, Thoughts on the World | Leave a comment

The Perfect is the Enemy of the Good

Buddha at Pa-Hto-Thar-Myar Pagoda, camera lying on bag!
Camera: Panasonic DMC-GX8 | Date: 11-02-2017 12:14 | Resolution: 4072 x 5429 | ISO: 400 | Exp. bias: 0.66 EV | Exp. Time: 1.6s | Aperture: 4.5 | Focal Length: 7.0mm | Location: Pa-Hto-Thar-Myar Pagoda | State/Province: Nyaung-U, Mandalay | See map | Lens: LUMIX G VARIO 7-14/F4.0

The Perfect is the Enemy of the Good. I’m not sure who first explained this to me, but I’m pretty sure it was my school metalwork teacher, Mr Bickle. Physically and vocally he was a cross between Nigel Green and Brian Blessed, and the rumour that he had been a Regimental Sergeant Major during the war was perfectly credible, especially when he was controlling a vast playing field full of chatty children without benefit of a bell or megaphone. However behind the forbidding exterior was a kindly man and a good teacher. When my first attempt at enamel work went a bit wrong, and some of the enamel ended up on the rear of the spoon, I was very upset. He kindly pointed out that it was still a good effort, and the flaw "added character". My mother, another teacher, agreed, and the spoon is still on her kitchen windowsill 45 years later.

I learned an important lesson: things don’t need to be perfect to be "good enough", and it’s better to move on and do something else good than to agonise over imperfections.

I also quickly found that this is a good exam strategy: 16/20 in all five questions is potentially top marks, whereas 20/20 in one and insufficient time for the others could mean a failure. The same is true in some (not all) sports: the strongman who is second in every event may go home with the title.

Later, in my training as a physicist and engineer, I learned a related lesson. There’s no such thing as an exact measurement or a perfectly accurate construction. I learned to think in terms of errors, variances and tolerances, and to understand their net effect on an overall result. When in my late 20s I did some formal Quality Management training, the same message emerged a different way: in industrial QA you are most interested in ensuring that all output meets a defined, measurable standard, and the last thing you want is an individual perfectionist obstructing the process.

Seeking perfection can easily lead to a very low (if high quality) output, and missed opportunities. It also risks absolute failure, as perfectionists often have no "Plan B" and limited if any ability to adapt to changing circumstances. "Very good", on the other hand, is an easy bedfellow with high productivity and planning for contingencies and changes.

I adopt this view in pretty much everything I do: professional work, hobbies, DIY, commercial relationships, entertainment. I hold myself and others to high standards, but I have learned to be tolerant of the odd imperfection. This does mean living with the occasional annoying wrinkle, but I judge that to be an acceptable compromise within overall achievement and satisfaction. Practice, criticism (from self and others) and active continuous improvement are still essential, but I expect them to make me better, not perfect.

The trick, of course, is to define and quantify what is "good enough". I then expect important deficiencies against such a target to be rectified promptly, correctly and completely. In my own work, this means allowing some room for change and correction, whether it’s circulating an early draft of a document to key reviewers, or making sure that I can easily reach plumbing pipework. If something must be "set in stone" then it has to be right, and whatever early checks and tests are possible are essential, but it’s much better to understand and allow for change and adjustment.

In the work of others, it means setting or understanding appropriate standards, and then living by them. After I had my car resprayed, I noticed a small run in the paint on the bonnet. Would I prefer this hadn’t happened? Yes. Does it prevent me enjoying my unique car and cheerfully recommending the guys who did the work? No. Professionally I can and will be highly critical of sloppy, incomplete or inaccurate work, but I will be understanding of the odd error in presentation or detail, provided such errors don’t affect the overall result or become too numerous (which is in turn another indicator of poor underlying quality).

So why have I written this now, why have I tagged it as part of my Myanmar photo blog, and why is there a picture of the Buddha at the top? In photography, there are those who seek to create a small number of "perfect" images. They can get very upset if circumstances prevent them from doing so. My aim is instead to accept the conditions, get a good image if I can, and then move on to the next opportunity. At the Pa-Hto-Thar-Myar Pagoda I (stupidly) arrived without my tripod, and had to get the pictures resting my camera on any convenient support using the self timer to avoid shake, in this case flat on its back on my camera bag on the temple floor. Is this the best possible image from that location? Probably not. Am I happy with it? Yes, and if I have correctly understood Buddhist principles, I think the Buddha would approve as well.

It is in humanity’s interest that in some fields of artistic endeavour, there are those who seek perfection. For the rest of us, perfection is the wrong target.

Posted in Agile & Architecture, Myanmar Travel Blog, Thoughts on the World, Travel | Leave a comment

Review: Software Design Decoded

66 Ways Experts Think, by Marian Petre, André Van Der Hoek and Yen Quach

A powerful reminder on the behaviours required to succeed in software architecture

This is a delightful little book on the perennial topic of how a software architect should think and behave. While that subject seems to attract shorter books, this one is very concise – the main content is just 66 two-page spreads, with a well-chosen and often thought-provoking illustration on the left, and a couple of paragraphs on the right. However, just as with The Elements of Style, brevity indicates high value: this book provides the prompt; the detail can live elsewhere.

The book should be valuable to many: if you want to be an expert software designer, it provides an overview of the skills and knowledge you need to develop. If you want to recruit such a person, it provides a set of key indicators and interview prompts. And if you are in one of those software development organisations which believes that quality architecture can somehow emerge by magic from the unguided work of undifferentiated coders, it might make you think again.

If you are, or think you are, a software architect, this book should act in the same way as a good sermon: it will remind you of what you already know you should be doing, and act as a prompt against which you can measure your own performance and identify areas for improvement. It reminded me that I can sometimes be slow to listen to the views of others, or to evidence which may change a design, and slow to engage with new technologies, and I will try to act on those prompts.

This book resolutely promotes the value of modelling in software design. Formal models and analysis have their place, but so do informal models, sketches, and ad-hoc notation. The key point is to externalise ideas so that they can be shared, refined and “tested” in the cheapest and most effective of ways, on paper or a whiteboard. We are reminded that all these are hallmarks of true expert software designers. Code has its place, to prove the solution or explore technicalities, but it is not the design.

The book also promotes the value of richness in these representations. Experts should explore and constantly be aware of alternatives, and model the solution at different levels of abstraction, in terms of both static and dynamic behaviours. Continuous assessment means not only testing, but simulation. If required, the expert should build his or her own tools. While solving simple problems first is a good way to get started, deep, early understanding of the problem space is essential, and experts must understand the whole context and landscape well enough to make and articulate design prioritisations and trade-offs.

I thoroughly recommend this book. It may seem slight, but it delivers a powerful reminder on the process of design, and the necessary, different thought processes to succeed with it.

Posted in Agile & Architecture, Reviews | Leave a comment