21 March, 2018

How Much Testing is Enough?

Technology has evolved, and continues to evolve, in such a way that we are becoming increasingly dependent on it in our day-to-day lives. Software is now present in almost every device we use, from games and mobile phone apps to the electronic control modules in cars.

You would like to think a manufacturer has spent time testing a product before it is declared “fit for release”. However, as Testers, I’m sure we have all been in a situation where a business simply cannot (or will not) delay a project, for various reasons. The Project Lead may have taken a risk-based decision, reasoning that all the previous software versions launched successfully without any issues. This approach may work 99% of the time, but what happens the other 1%?

You may have seen the TV sketch show Burnistoun. If you haven’t, I would recommend watching the 'Elevator' sketch. Two Scottish businessmen enter a lift (for ease we will call them Hamish and Wallace). Hamish boasts to Wallace that the company has just installed a voice-activated elevator. Straight away Wallace is sceptical and comments: “They've installed voice recognition, in a lift, in Scotland...have you ever tried voice recognition technology? They don’t do Scottish accents!” As a Scottish person, I concur with this statement: voice recognition is not made for the Scottish accent.


Hamish and Wallace are standing in an elevator that has no buttons to override the voice command. Hamish ignores Wallace’s scepticism and says, “eleven”. The result? Yes, you guessed it, the elevator cannot understand Hamish. Both Hamish and Wallace attempt different accents and quickly become frustrated as they are still unable to get the elevator moving, or to get the doors open so they can leave. If this were real life, would they ever have used the service again? More likely they would broadcast their bad experience to friends and family on social media, resulting in a negative image for the company.

This raises two questions: what testing was done, and was it enough? We are assuming the company that developed the software did carry out some degree of testing.

So, how much testing is sufficient? There is no written rule. According to the BCS/ISTQB Software Testing Foundation, you cannot physically test every scenario. When deciding how much testing to carry out, you may want to consider the level of risk involved, including technical and business risk, and even budget or time constraints. The management team of the elevator project may have taken a risk-based approach to testing. They may have identified Scotland as a market less likely to buy the product, and therefore a lower priority. During the planning cycle, did they consider an override facility to accommodate users who may have a speech impediment or hearing impairment, and will the system cope with different dialects and accents? I know this is only a sketch and the likelihood of this product being developed is low, but we should not ignore the point: what matters is the quality of testing, rather than the quantity of tests carried out.
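To make the override question concrete, here is a minimal sketch of the kind of unit test that could have caught the Burnistoun failure. Everything here is invented for illustration (the `VoiceLift` class, its command vocabulary, and method names are all hypothetical); the point is simply that a test can assert the safety-critical requirement: an unrecognised voice command must still leave passengers a manual way out.

```python
# Hypothetical voice-activated lift, sketched for illustration only.
RECOGNISED_COMMANDS = {"ground": 0, "eleven": 11}  # assumed vocabulary

class VoiceLift:
    def __init__(self):
        self.floor = 0

    def hear(self, utterance):
        """Interpret a voice command; return False if it is not recognised."""
        floor = RECOGNISED_COMMANDS.get(utterance.strip().lower())
        if floor is None:
            return False
        self.floor = floor
        return True

    def press_override(self, floor):
        """Manual buttons must always work, whatever the accent."""
        self.floor = floor

def test_unrecognised_accent_still_has_an_exit():
    lift = VoiceLift()
    # "ELEVEN!" is not in the assumed vocabulary, so recognition fails...
    assert lift.hear("ELEVEN!") is False
    # ...but the manual override must still move the lift.
    lift.press_override(11)
    assert lift.floor == 11
```

A risk-based plan might deprioritise exhaustive accent coverage, but a test like this one targets the high-impact failure mode (passengers trapped with no fallback) at almost no cost.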

A company may spend a great deal of time, money and effort testing a product and still have defects that are only found in different environments after launch. Who is to blame: the testers, the developers, or is it simply a condition that was not considered during development? There is no easy answer, but a product risk review with the whole team should highlight potential major issues. I would like to think that if the voice-activated elevator were ever developed in real life, the ability to override the voice command would be a feature. Well, we can hope…


By Cameron Farooq, Test Analyst at Edge Testing
