Mobile Testing On Real Devices Vs. Emulators

Though the debate over the importance of testing on real devices, and basing a Go/No-Go release decision only on real devices, seems to be over, I am still being asked: why is it important to test on real devices? What are the limitations of emulators?

In this blog post I will try to summarize some key points and differences that help address these questions.

[Image: emulator limitations]

End Users Use Real Devices, Not Emulators

A mobile app developed and deployed to the market is not meant to be used on a desktop with a mouse and keyboard, but on real devices with small screens, limited hardware, RAM, storage and many other unique attributes. Testing on a different target than the one end users will actually use exposes organizations to quality, security, performance and other risks.

End users engage with the application through unique interactions such as Touch ID, Force Touch and voice commands, and they operate their mobile apps alongside many other background apps and system processes. These conditions are either hard to mimic on emulators or simply not supported by them.

As the above visual also shows, emulators do not carry the real hardware that a real device does – the chipset, screen, sensors and so forth.

Platform OS Differences

Mobile devices run a different OS flavor than the one that runs on emulators. Think about a Samsung device, or any device launched by Verizon, T-Mobile, AT&T and other large carriers – the platform versions that run on these devices are very different from the ones that run on emulators.

Staying with devices and carriers, note that real devices receive plenty of interruptions: push notifications, location alerts, incoming text messages (WhatsApp etc.), Google Play Store/App Store app updates and so forth. These simply do not occur on emulators, and by not testing under these real environment conditions, test coverage is incomplete and misleading.

[Image: real environment conditions]

The above image was actually taken from my own device while I was traveling to New York last week – look at the number of background pop-ups, notifications and real conditions such as network, location and battery while I simply use the Waze app. This is a very common scenario for end users of any mobile app. There is no way to mimic all of the above on emulators in real time, under real network conditions and so on.

Think also about simulating varying network conditions: a transition from Wi-Fi to a real carrier network, then a complete loss of network connectivity, which impacts location, notifications and more. A sketch of how such a transition might be driven in an automated test is shown below.
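For illustration only, here is a minimal sketch of how a Wi-Fi-to-carrier-data transition could be driven against a real Android device, assuming Appium's Java client and its ConnectionStateBuilder. The driver setup is omitted, and the exact behavior varies by Android version and device:

```java
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.android.connection.ConnectionStateBuilder;

public class NetworkTransitionSketch {

    // Drives a Wi-Fi -> carrier-data -> no-connectivity transition during a test.
    // Assumes an already-created AndroidDriver session against a real device;
    // toggling radios this way is not supported on every Android version/device.
    static void exerciseNetworkTransitions(AndroidDriver driver) {
        // Start on Wi-Fi only.
        driver.setConnection(new ConnectionStateBuilder()
                .withWiFiEnabled().withDataDisabled().build());

        // Hand over to the carrier network.
        driver.setConnection(new ConnectionStateBuilder()
                .withWiFiDisabled().withDataEnabled().build());

        // Drop connectivity entirely and observe how the app behaves
        // (location updates, notifications, offline handling and so on).
        driver.setConnection(new ConnectionStateBuilder()
                .withAirplaneModeEnabled().build());
    }
}
```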

Spending a lot of time testing against the wrong platforms costs money, exposes risk and is inefficient.

Simulating Innovative Use Cases

With the mobile OS platforms recently released to the market, including Android 7.1.1 and iOS 10.x, we see a growing trend of apps being used in different contexts.

[Image: app shortcuts]

With Android 7.1.1 we now have App Shortcuts (above image), which allow developers to create a shortcut to a specific feature of the application – something already achievable with the force-touch capabilities introduced in iOS 9. Add use cases such as iMessage apps, introduced in iOS 10, and split-screen mode in Android 7.0, and you understand that an app can be engaged by the user through part of the screen or from within a totally different app like iMessage.

With such complexity, test plans are becoming more fragmented across devices and platforms, and the gap between what an emulator can offer developers and testers and what a real device in a real environment can is only growing.

Bottom Line

Developers might find value in using emulators at a given stage of the app, and I am not taking that away – testing on an emulator within the native IDEs in the early stages is great. However, when thinking about the complete SDLC, release criteria and test coverage, there is no doubt that real devices are the only way to go.

Don't believe me? Ask Google – https://developer.android.com/studio/run/device.html

[Image: Google's Android Studio documentation on running apps on a hardware device]

Happy And REAL Device Testing 🙂

3 Ways to Make Mobile Manual Testing Less Painful

With 60% of the industry still operating at 30% mobile test automation, it's clear that manual testing takes a major chunk of a testing team's time. Acknowledging the need for both manual and automation testing, and without drilling down into the caveats of manual testing, let's look at how teams can reduce the time it takes, and even transition toward an automated approach to testing.

1. Manual and Automation Testing: Analyze Your Existing Test Suite

You and your testing team should be well-positioned to optimize your test suite in the following ways:
– Scope out irrelevant manual tests for specific test cycles (e.g. not every manual test is required for each sanity or regression run)
– Eliminate tests that consistently pass and no longer add value
– Identify and consolidate duplicate tests
– Flag manual tests that are better suited for automation (e.g. data-driven tests or tests that rely on timely environment setup)

The result should be a mixture of both manual and automation testing approaches, with the goal of shifting more of the testing toward automation.
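As an illustration of the second point, here is a minimal sketch that scans a historical test-results export and flags tests that have passed in every recorded run. The file name and its format are hypothetical – assume a CSV with one line per test execution in the form test_name,status:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: flag tests that passed in every recorded run as candidates for review.
// Assumes a hypothetical "test-history.csv" with lines like: LoginTest,PASS
public class AlwaysPassingTests {
    public static void main(String[] args) throws IOException {
        Map<String, List<String>> statusesByTest =
                Files.readAllLines(Paths.get("test-history.csv")).stream()
                        .map(line -> line.split(","))
                        .filter(parts -> parts.length == 2)
                        .collect(Collectors.groupingBy(
                                parts -> parts[0].trim(),
                                Collectors.mapping(parts -> parts[1].trim(), Collectors.toList())));

        statusesByTest.forEach((test, statuses) -> {
            boolean alwaysPassed = statuses.stream().allMatch("PASS"::equalsIgnoreCase);
            // Only flag tests with enough history to be meaningful.
            if (alwaysPassed && statuses.size() >= 10) {
                System.out.printf("%s passed all %d recorded runs – review its value%n",
                        test, statuses.size());
            }
        });
    }
}
```

The output is only a starting point – a human still decides whether a consistently green test is redundant or is protecting something important.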

[Image: manual and automation testing mix]

2. Consider a Smooth Transition to “Easier” Test Frameworks

In most cases, the blocker to increasing test automation lies inside the QA team, and is often related to a skills gap. Today's common mobile test automation tools are open-source and require medium- to high-level development skills in languages such as Java, C#, Python and JavaScript. These skills are hard to find within traditional testing teams. On the other hand, if QA teams adopt alternatives such as Behavior Driven Development (BDD) solutions like Cucumber, it creates an easier path toward automation by virtue of using a common language that is easy to get started with and to scale.
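To make that concrete, here is a minimal sketch of what such a scenario and its glue code might look like, assuming a recent version of Cucumber's Java bindings and a hypothetical login feature – the feature text, step names and helper comments are illustrative, not taken from a real project:

```java
// login.feature – written in plain language the whole team can read:
//
//   Feature: Login
//     Scenario: Successful login on a mobile device
//       Given the app is installed and launched
//       When the user logs in with valid credentials
//       Then the home screen is displayed
//
// Matching step definitions, using Cucumber's Java annotations:
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class LoginSteps {

    @Given("the app is installed and launched")
    public void launchApp() {
        // Hypothetical helper – e.g. start an Appium session against a real device.
    }

    @When("the user logs in with valid credentials")
    public void loginWithValidCredentials() {
        // Hypothetical helper – enter the username and password and submit.
    }

    @Then("the home screen is displayed")
    public void verifyHomeScreen() {
        // Hypothetical helper – assert that the home screen element is visible.
    }
}
```

Testers who are comfortable writing the plain-language scenarios can contribute them directly, while the thinner layer of step-definition code can be shared with developers.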

[Image: manual and automation testing with Cucumber behavior-driven development]

3. Shift More Test Automation Inside the Dev Cycle

When you look at your existing test automation code and the level of coverage it provides, there may be functional overlap between the automated and the manual testing. If the automation scripts are shared across the SDLC and are executed post-commit on every build, this can shrink some of the manual validation work the testing team needs to do (see the sketch below). Also, by joining forces with your development and test automation teams and having them help with test automation, you will decrease the workload and create shorter cycles, resulting in happier manual testers.
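One way to put this into practice is to tag a subset of the automated suite so the CI job triggered on every commit can pick it up. Here is a minimal sketch using JUnit 5 tags – the tag name and test are illustrative:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertTrue;

public class CheckoutSmokeTest {

    // Tagged so a post-commit CI job can select just this subset,
    // e.g. with Maven Surefire: mvn test -Dgroups=post-commit
    @Test
    @Tag("post-commit")
    void userCanReachCheckoutScreen() {
        // Hypothetical check that previously required a manual pass on every build.
        assertTrue(true, "replace with a real assertion against the app under test");
    }
}
```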

Bottom Line:

Most businesses will continue to have a mix of manual and automation testing. Manual testing will never go away, and in some cases it is even a product requirement. But as you optimize your overall testing strategy, investing in techniques like BDD can make things much easier for everyone involved with both manual and automation testing throughout the lifecycle.

[Image: manual and automation testing with real devices]