
In a world where almost all the services we depend on are digital, software quality is critically important. The appalling events caused by faults in the Horizon software used by the Post Office and the subsequent legal wrangling over the case should remind us of this.

One of the best sources of detail about this long-running saga is James Christie’s Claro Testing blog, which inspired this post and made me think about bugs in the applications we use in our daily lives, and how we define them.

One paragraph stood out in Christie’s writing about the Horizon affair:

“The definition of a bug was at the heart of the second court case. The Post Office, and Fujitsu (the outsourced IT services supplier) argued that a bug is a coding error, and the word should not apply to other problems. The counsel for the claimants… took a broader view; a bug is anything that means the software does not operate as users, or the corporation, expect.”

These days, when people refer to “bugs”, they are not necessarily referring to a coding error: they could mean any of a range of ways an application fails to meet their expectations, from the way it looks, to the language and content it contains, to the general assumptions they have made about how it “should” behave.

This is why user-centred design (an iterative design process in which designers focus on the users and their needs) is so fundamental to the way we work at Unboxed.

We recently had a conversation at Unboxed about bugs and the way we think about them.

We talked about user expectations and the importance of understanding how the people who rely on our software will use it in their everyday lives. We also discussed the importance of product and design professionals thoroughly understanding end user requirements so that we don’t introduce bugs into our system before even a single line of code is written.

We also talked about how we need to think about the type of data our users will put into the system, how we will communicate with them when something goes wrong, and how to make their experience of using our application generally pleasant instead of a frustrating ordeal.

Software has changed - and so has the way we develop it

These ideas are more important than ever because software applications have changed dramatically from the days when they were essentially closed systems sitting in the back office of a bank or government institution, with very little communication with the outside world. These days we face risks to product quality that simply didn’t exist in the past:

  1. Most applications use third-party libraries. What happens when one of these fails or is updated?
  2. Ideas and language can change during the lifetime of our application. Does our software reflect these changing principles?
  3. If we open-source a product, how can we ensure that third-party operators use it responsibly?
  4. Working with generative AI can feel like flying blind, because we don’t always understand its sources.
  5. Anything that learns from user inputs is vulnerable to people with bad intentions.
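The first risk above, a third-party dependency failing or changing underneath us, also touches the earlier point about communicating with users when something goes wrong. A minimal sketch of one defensive pattern, with entirely hypothetical names (`fetch_exchange_rate`, `StubClient`, `get_rate` are illustrations, not a real API), is to wrap the third-party call so a failure degrades into a plain-language message rather than a stack trace:

```python
def fetch_exchange_rate(client, currency: str) -> dict:
    """Call a hypothetical third-party `client`, translating any failure
    into a result the user interface can present in plain language."""
    try:
        # This call may raise if the upstream API is down, or if an update
        # to the library has changed its behaviour.
        rate = client.get_rate(currency)
    except Exception:
        return {
            "ok": False,
            "message": "We couldn't fetch the latest rates. Please try again later.",
        }
    return {"ok": True, "rate": rate}


class StubClient:
    """Stand-in for the third-party library, used here only for illustration."""

    def get_rate(self, currency):
        raise ConnectionError("upstream service unavailable")


result = fetch_exchange_rate(StubClient(), "GBP")
# result carries a user-friendly message instead of a raw exception
```

The design choice is simply that the boundary with the dependency is one function, so when the library fails (or is updated with breaking changes) there is a single place to handle it, and the user sees a message written for them rather than for a developer.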


Quality engineering past and present

As applications become more complex and more interwoven with other systems, our approach to software quality has evolved. Some systems, especially those with components that are easily isolated, can still be tested in the traditional manner, by teams of software testers working through test scripts that are derived from product requirements.

However, most systems require an approach to quality that is more holistic and which takes into account many different facets of the system under test. Organisations might adopt a range of techniques to improve and assure quality, including:

  • Testers adopting a “shift left” approach and testing the requirements rather than the code
  • Encouraging all team members to embrace a “whole-team” approach to product quality
  • Continuous deployment, enabling the rapid iteration of features and fixes, with the confidence that automated tests are highlighting regressions in the code
  • Rapid feedback from users, which is listened to and used as a basis for further improvements
  • Observability tools to show us how our application is being used and how it is performing
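The “automated tests are highlighting regressions” point can be illustrated with a small example (the business rule and all names here are invented for illustration): a test pins down the expected behaviour, so that if a later refactor silently changes it, the build fails before the change ever reaches users.

```python
def postage_cost(weight_grams: int) -> int:
    """Hypothetical pricing rule: a flat 85p up to 100g,
    then 10p per additional 50g (or part thereof)."""
    if weight_grams <= 100:
        return 85
    # Ceiling division: any partial 50g band is charged in full.
    extra_bands = -(-(weight_grams - 100) // 50)
    return 85 + 10 * extra_bands


def test_postage_cost():
    assert postage_cost(100) == 85   # boundary: exactly 100g is the flat rate
    assert postage_cost(101) == 95   # one gram over starts a new band
    assert postage_cost(200) == 105  # 100g over = two extra bands
```

Run in continuous integration on every change, a handful of tests like this is what gives a team the confidence to deploy rapidly: the boundary cases that are easiest to break in a refactor are exactly the ones pinned down.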

Finally, it is well worth bearing in mind this quote from James Christie’s blog post linked below:

“‘User error’ is an inadequate explanation for things going wrong. If the system doesn’t help users avoid error, then that is a system failure.”

Read more from James Christie at: https://clarotesting.wordpress.com
