10 things to avoid in mobile application testing
Creating mobile applications has never been more popular, but there are plenty of pitfalls for the unwary in this incredibly competitive space. In 2017, consumers downloaded 178.1 billion mobile apps, while in 2018 that figure jumped to 205.4 billion, and by 2022, this figure is projected to grow to 258.2 billion app downloads.
It is not just about volume either, as the mobile sector is developing incredibly rapidly, not only in terms of the evolution of the major platforms themselves, but also device diversity. Different screen sizes, resolutions and button configurations all require specific treatment in an application build, as do different networks (for connectivity and handoff reasons). In addition, network specific firmware and manufacturer firmware versions – in the case of Android – can alter functionality significantly.
Keeping on top of this varied and fast-moving target and staying ahead of the curve is a full-time job, and one that is easily underestimated by those who are not following the space intensively.
We’ve pulled together some lessons from our 100-device-strong Mobile Testing Lab and team of mobile application testing experts to help you navigate the most commonly recurring pitfalls…
1. Trying to test everything
There is a common misconception, which we encounter quite regularly, that testing is about testing absolutely everything. There isn’t enough time in the world to test all use cases, in all combinations, on every device, on every network in each country.
The key is to take a risk-based approach to testing and start with the most high risk requirements and scenarios first. Of course, time is always a factor in testing, so it is vital to use the time available as effectively as possible.
We deploy a MoSCoW-based discipline (‘Must test, Should test, Could test, Won’t test’), which provides an effective method for filtering the requirements into risk-weighted tranches for testing.
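As a rough illustration of the idea, the MoSCoW tranches can be treated as a sort key so the highest-risk scenarios are always scheduled first. This is a minimal sketch; the scenario names and data shape are hypothetical, not drawn from any real test plan.

```python
# Illustrative sketch: bucketing test scenarios into MoSCoW tranches so the
# highest-risk cases run first. Scenario names and fields are hypothetical.

MOSCOW_ORDER = {"must": 0, "should": 1, "could": 2, "wont": 3}

def prioritise(scenarios):
    """Return scenarios in Must/Should/Could order, dropping 'wont' cases."""
    runnable = [s for s in scenarios if s["priority"] != "wont"]
    return sorted(runnable, key=lambda s: MOSCOW_ORDER[s["priority"]])

scenarios = [
    {"name": "checkout payment flow",    "priority": "must"},
    {"name": "profile photo upload",     "priority": "could"},
    {"name": "login and session",        "priority": "must"},
    {"name": "push notification opt-in", "priority": "should"},
    {"name": "legacy tablet layout",     "priority": "wont"},
]

for s in prioritise(scenarios):
    print(s["priority"], "-", s["name"])
```

When testing time runs short, the plan simply truncates from the bottom, so it is always the lowest-risk tranche that goes untested.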
2. Testing in a random manner
Exploratory testing can be a very effective weapon in the testing armoury, and when used by an experienced testing team it can produce excellent results. However, there are significant pitfalls to avoid here.
Firstly, there is the potential to get distracted documenting obvious but non-critical bugs high up in the user journey, which can mean that more mission-critical problems deeper down are missed for lack of time.
Secondly, this approach might seem effective initially, but documenting, reproducing and categorising any issues that are encountered can soak up more time than organised testing.
We recommend deploying this approach as part of a wider testing regime, combining both pre-planned structured testing together with bursts of exploratory testing.
3. Ignoring the documentation and design brief
Testing blind can produce interesting results, especially when deploying behavioural test techniques (e.g. behaving as a member of the public seeing the app for the very first time), but it is not recommended as the primary approach to testing. You have to be completely aware of how the app is supposed to work, and what outcomes are expected. Indeed, a core part of a good testing consultancy’s or internal test team’s work is to build robust test cases around the requirements, and to work with the client from the very outset to really understand the product and develop relevant requirements if these are not already well defined.
4. Treating emulators as the gold standard
Although emulators can help in early testing, they’re just not detailed enough for full user experience testing. There is also the risk that emulators may not behave in a standard manner or may be corrupted by previous client data. There is no substitute for real human experience and behaviour on real devices.
In addition, it can take so much time to effectively configure an emulator or an automated test script that it is more time-efficient to undertake the process manually. Another issue is tracking changes and updating the automated tool – it is not a trivial operation.
5. Applying web testing strategies
While a mobile app does have elements in common with a web app, there are a host of specific constraints that set them apart, including battery life, connectivity and navigation. Mobile application testing requires specific strategies that are tailored to the mobile environment and the use case – it’s not just a smaller version of the desktop internet.
6. Siloing results for analysis
Isolating individual test results and trying to analyse them rarely makes sense, as it ignores the rest of the results and context. It is all too easy to get drawn down the rabbit hole, and vital to retain perspective on any issues that are uncovered. How many users are likely to be affected by the problem? How much damage will it do to the client if left unresolved? Listing defects by priority is an essential tool in deciding how to respond to any individual issue, but it has to be considered in context.
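One simple way to keep that context is to score each defect against the questions above and rank the whole list together. The sketch below is purely illustrative; the defect data and the scoring weights are hypothetical assumptions, not a prescribed formula.

```python
# Illustrative sketch: ranking defects in context rather than reacting to
# each in isolation. Defect data and scoring weights are hypothetical.

def risk_score(defect):
    """Exposure score: share of users affected x business damage (1-5)."""
    return defect["users_affected_pct"] * defect["damage"]

defects = [
    {"id": "APP-101", "summary": "typo on splash screen",
     "users_affected_pct": 100, "damage": 1},
    {"id": "APP-102", "summary": "crash on payment submit",
     "users_affected_pct": 40, "damage": 5},
    {"id": "APP-103", "summary": "slow image load on 3G",
     "users_affected_pct": 30, "damage": 2},
]

# Highest-risk first: the payment crash outranks the highly visible typo.
for d in sorted(defects, key=risk_score, reverse=True):
    print(d["id"], risk_score(d), d["summary"])
```

The point of the exercise is the comparison, not the arithmetic: a defect seen by every user may still matter less than one that hits a minority at the worst possible moment.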
7. Ignoring real-world connectivity
Mobile devices are expected to work across networks, from GSM to 4G to Wi-Fi and others, which creates numerous opportunities for bugs in handoffs, session handling and the like. This needs rigorous testing in the field, or in a dedicated lab where these situations can be replicated accurately. It is again essential that the requirements around connectivity are carefully structured at the outset, based on the final use case.
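Session handling across a network drop is the kind of behaviour such tests exercise. As a hedged illustration only, the sketch below simulates a request that fails twice mid-handoff before succeeding on retry; `flaky_fetch` and the retry wrapper are hypothetical stand-ins, not any particular library’s API.

```python
# Illustrative sketch: retry-with-backoff behaviour of the kind a test team
# might exercise when checking session handling across network handoffs.
# flaky_fetch is a hypothetical stand-in for a real network call.
import time

def with_retries(fetch, attempts=3, base_delay=0.0):
    """Call fetch(), retrying on ConnectionError as if the network dropped."""
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

# Simulate a Wi-Fi -> 4G handoff: the first two calls fail, the third succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("network changed")
    return "session restored"

print(with_retries(flaky_fetch))  # -> session restored
```

A test case for this scenario would assert not just that the call eventually succeeds, but that the session survives the handoff without forcing the user to log in again.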
8. Failing to test updates and security
Even once you have a stable and well-debugged mobile application, you will need to update it, as the OS is updated, permissions are changed, etc. Regression testing is an essential tool in the tester’s kit, and it is here that a well-designed testing regime wins out, as the test cases are all ready and documented. Those who have followed an unstructured testing approach will have to start again each time, which is prohibitively expensive.
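The payoff of documented test cases is that they are re-runnable data, not tribal knowledge. The minimal sketch below assumes a hypothetical app state and two invented checks; the structure, not the specifics, is the point: after every update, the same recorded suite runs unchanged.

```python
# Illustrative sketch: a documented, re-runnable regression suite. Because the
# cases are recorded once, the same checks run unchanged after every update.
# The app state and checks below are hypothetical.

def check_login(app):
    assert app["session"] is not None, "login should create a session"

def check_currency(app):
    assert app["currency"] == "GBP", "prices should stay in account currency"

REGRESSION_SUITE = [("login", check_login), ("currency", check_currency)]

def run_regression(app_state):
    """Run every documented case and report pass/fail per case."""
    results = {}
    for name, check in REGRESSION_SUITE:
        try:
            check(app_state)
            results[name] = "pass"
        except AssertionError:
            results[name] = "fail"
    return results

# State captured after a hypothetical OS update:
print(run_regression({"session": "abc123", "currency": "USD"}))
```

In practice this role is filled by a test framework and a case management tool, but the principle is the same: structured suites amortise their cost across every release.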
This update process needs to be tested in the field too, or customers will very quickly be beating down your door with bug reports. Security is of course an essential concern for any digital enterprise, and increasingly so for application developers. It can be difficult to test effectively in time-constrained situations, so it is important that best practice is followed across the board.
9. Not listening to the customer
Mobile apps have numerous feedback mechanisms, such as app store ratings, in-app feedback menus and customer service call centres. Feedback should be actively monitored for valid defect reports, which can be prioritised accordingly. Missing a serious defect could land you in regulatory hot water, fast.
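Even a crude first-pass triage of incoming feedback can surface likely defect reports for human review. The sketch below is a hypothetical keyword filter over app-store reviews; the keywords and review data are invented for illustration, and a real pipeline would be far more sophisticated.

```python
# Illustrative sketch: a first-pass filter that flags app-store reviews which
# look like defect reports. Keywords and review data are hypothetical.

DEFECT_KEYWORDS = ("crash", "freeze", "error", "won't open", "login fail")

def likely_defects(reviews):
    """Return reviews whose text mentions a defect-like keyword."""
    return [r for r in reviews
            if any(k in r["text"].lower() for k in DEFECT_KEYWORDS)]

reviews = [
    {"stars": 1, "text": "App crashes every time I open the camera"},
    {"stars": 5, "text": "Love the new design!"},
    {"stars": 2, "text": "Login fails after the latest update"},
]

for r in likely_defects(reviews):
    print(r["stars"], r["text"])
```

Flagged reviews can then feed straight into the same risk-weighted defect list as internally discovered bugs, rather than languishing in a separate channel.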
10. Focussing on the UI too much
Although the UI is an important part of the mobile app, it’s far from being the only critical element to get right. Taking a risk-based approach helps to keep this in focus, rather than getting too distracted by ensuring the colour scheme is entirely on brand, for example.
These are just some of the common pitfalls that can impact the delivery of a mobile application, but of course there are many variations and other issues that can occur.
While most enterprises are highly pragmatic, the road to success is littered with companies that have tried to move too fast, too soon, attempting digital transformation and mobile application development that is too ambitious for the timescale. Combining misplaced optimism with a few of the pitfalls above is not unheard of, and can result in significant delay to a product launch, as well as spiralling costs. The best overall way to avoid these issues is to plan in a rigorous testing period from the very outset and ensure that your enterprise either has the necessary expertise in house, or that an external specialist testing house is consulted as early as possible in the project.