Optimizing Android Test Automation Development

With Google I/O just a few weeks away, and the already complex Android landscape about to become even more so, let's explore a way Android teams can optimize and plan their test automation across the different platforms and devices.

In the past, I've written about the need to connect three layers:

  • Application under test
  • Test code itself
  • Device/OS under test

This relates back to a patent I jointly submitted years ago, in the days of J2ME, and I also wrote a chapter about it in my newly published book (The Digital Quality Handbook).

Problem Definition

Android OS families support different capabilities, and the gap is growing from one Android SDK to the next. As an example, devices running versions older than Android 6.0 cannot support Doze for battery usage optimization, and devices older than Android 7.1 cannot support App Shortcuts (see the example below from the Google Photos app). These differences introduce a challenge for dev and test teams that innovate on top of such features, since the test code that exercises them needs to be routed only to the devices that can actually support them.

How can teams sustain a test automation suite that runs each test only on the devices that support the features it covers?

Proposed Approach

While I don't have a bulletproof, magic-pill answer to every challenge this problem creates, I can certainly recommend an approach, described below.

It is important to note that simply being aware of the problem is a step toward resolving it 🙂

Assess Your App and DUT (Device Under Test):

  • Map the different features that your app supports or for which it requires users to grant permissions
  • Examine your device test lab and identify which devices do and do not support these specific features

To manage the above, teams can leverage the following:

  • Use the existing ADB command that extracts the supported features from a connected device (or devices):
    • adb shell pm list features

After running the above command, you will get back a plain list of the feature constants that the connected device reports, one per line.
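For illustration, an excerpt of that output typically looks like the following (the exact entries vary by device model and OS version):

    feature:android.hardware.bluetooth
    feature:android.hardware.camera
    feature:android.hardware.fingerprint
    feature:android.hardware.nfc
    feature:android.software.app_widgets
    ...

Each entry maps to a PackageManager feature constant, so the same names can be checked from test code as well.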

Compare The Outputs

Once you know your DUT's capabilities, as well as your app's features to be tested, you can run a simple output comparison and see what can and can't be tested on each device. From that point, the optimization is mostly manual: you set up your test execution and CI in the lab accordingly. While this isn't effortless, it offers a sustainable approach and gives both dev and test teams an awareness that is useful throughout development, debugging, and testing. In the visual below you can see a capabilities diff between a Samsung Note 5 running Android 7.0 (left column) and an older Samsung device running Android 5.x (right column). One immediate difference out of a larger list is the fingerprint functionality, which is supported on the Note 5 but not on the other Samsung device. Such insight should be used when planning feature testing across these two devices (and this is just one example).
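If you want to script this comparison rather than eyeball screenshots, here is a minimal sketch of the idea in Python (the device serials are placeholders; replace them with the IDs reported by adb devices). It pulls the feature list from two connected devices over ADB and prints the features that only one of them supports:

    import subprocess

    def device_features(serial):
        # Run `adb shell pm list features` against a specific device
        output = subprocess.check_output(
            ["adb", "-s", serial, "shell", "pm", "list", "features"],
            text=True,
        )
        features = set()
        for line in output.splitlines():
            line = line.strip()
            # Each line looks like "feature:android.hardware.fingerprint"
            if line.startswith("feature:"):
                features.add(line.split(":", 1)[1])
        return features

    # Placeholder serials, replace with the IDs shown by `adb devices`
    note5 = device_features("NOTE5_SERIAL")
    older_samsung = device_features("OLDER_SAMSUNG_SERIAL")

    print("Only on the Note 5:", sorted(note5 - older_samsung))
    print("Only on the older device:", sorted(older_samsung - note5))

The resulting sets can then feed your CI configuration, so that, for example, fingerprint tests are only scheduled against devices whose list contains android.hardware.fingerprint.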

Bottom Line

As Google continues to innovate and add more features, existing devices and test frameworks will find it hard to close the gaps. That is a challenge teams need to be aware of, plan for, and optimize around, so that their release vehicles and velocity remain solid.

Happy Optimization!

How the “Digital Quality Handbook” Was Born

Travel back with me… to late September 2016. It’s the Jewish New Year, and I am in Boston, MA. As I celebrate the passing of another great year, I think to myself, “After being in the software quality space for nearly 20 years, isn’t it about time that I reach out to the community of thought leaders and influencers and create an asset that can fill a gap in the market that we can give back to the world?” A book. A practical book. A “how-to” for DevOps practitioners, designed to make them better, faster, and more… perfect(o).

You see, when it comes to assuring the quality of web, mobile, and IoT apps, the market is still struggling with key questions around test coverage, automation best practices, optimization of test automation suites, fitting more tests into the software build pipeline, the practice of shifting left, and much, much more.

So, while my wife and children continued celebrating in the next room, I immediately (right then and there) started writing the intro to a book that would, eventually, bring together actionable ideas and practices from many of the world’s most recognized experts, thought leaders and influencers in the area of software quality.

To make it easier to both develop and consume the content, the book is set out in four logical sections:

  1. Introduction to continuous quality and the digital space
  2. Advanced test automation practices
  3. Achieving DevOps maturity in the digital era
  4. Expanding quality coverage with UX and non-functional testing

If you’re reading this article close to its post date, I’m currently down in Orlando, participating in a book signing at the StarEast testing conference. Danny McKeown from Paychex, one of the technical reviewers of the book, is with me, both participating in the signing and speaking at the event.

To name the market leaders who took part and contributed to this book:

  1. Microsoft (Donovan Brown)
  2. Applitools (Adam Carmi)
  3. TestFairy (Yair Bar-On)
  4. Applause (Doron Reuveni)
  5. CA & BlazeMeter (Jonathon Wright, Noga Cohen, Jacob Sharir)
  6. InfoStretch (Manish Mathuria)
  7. Rabobank (Wim Selles)
  8. Utopia Solutions (Lee Barnes)
  9. Angie Jones
  10. Jean Ann Harrison
  11. Lior Kinsbruner

And from Perfecto:

  1. Amir Rozenberg
  2. Roy Nuriel
  3. Paul Bruce
  4. Chris Willis
  5. Uzi Eilon
  6. Yoram Mizrachi
  7. Roi Carmel

Without this crew of contributors, the book wouldn’t be what it is today. Some of the contributed content includes:

  • The best way to include visual analysis testing as part of your test code, using any available open-source framework
  • How to develop API tests that complement your mobile UI test automation
  • How to include non-functional performance testing and UX as part of your overall test strategy
  • How to extend open-source tools like Protractor to better test your hybrid app
  • The bible of UX testing
  • What a valid, highly ranked XPath should look like (with a link to a free online tool that scores your XPath for you)
  • How to include chatbot testing in your existing mobile testing plans
  • Where crowdsourced testing and beta testing fit in the overall SDLC strategy

Fun fact: We launched the book on Amazon on March 3rd. On March 5th, at approximately 2:51pm Eastern Time, the book was added to the Hot New Releases sidebar in the Software Testing category and made it to the #1 Bestseller slot in that same category. We took a screenshot. It really happened!

To get your own copy of the book, please refer to this URL – and if you find it valuable, feel free to share your feedback with me.

Happy Reading!

Recent Web Browser Quality-Related Innovations

Yes, I know that my blog is called mobiletestingblog, but the title of this post is not a mistake 🙂

When it comes to web apps, there is no longer a real distinction between the platforms used to consume content, whether it is a smartphone, a tablet, or a desktop browser.

If your company is developing a web app or a responsive website, it ought to be tested thoroughly against all of the above platforms. By the way, the majority of web traffic today comes from mobile devices.

In terms of global browser market share, it is good to know that less familiar players such as UC Browser (by Alibaba) and Samsung Internet hold a nice chunk of the market, so leaving them out of your test coverage matrix might not be a good strategy.

Source: http://gs.statcounter.com/browser-market-share

In general, the visual below shows the formula for web testing that I would recommend these days; however, if your web traffic analysis and supported geographies require you to target China, Europe, or other regions, then the market-share data above should be added to the mix, either in addition to this formula or as an adjustment to it.

With that in mind, I want to highlight in this post some recent web-specific tools that are out there for free and can be extremely useful for both developers and testers.

In Google Chrome 59 (the Beta is already available today!), Google is introducing a new built-in code coverage tool that allows both developers and testers to record their on-screen activity and report back, in a nice dashboard, how much of the site's content (JavaScript and more) was actually executed, with the aim of optimizing website quality, performance, and much more.

From a user perspective, you only need to enable the Code Coverage option from within the developer tools in Chrome, so that it is added under the Sources panel, as seen below.

Once that is done, simply start capturing code coverage by clicking the Record button to get an output like the one below. It is simple and valuable, and unfortunately only available as a free, built-in solution in this browser; Firefox, Safari, and the others do not offer an equivalent yet 😦

I used this new tool on the Geico.com responsive site and nearly completed its most common transaction, getting a quote for new car insurance. At the end of the recording, I received the chart below, which shows that only ~60% of the site's JavaScript code was used during this journey.

When drilling down into a specific .js source file, you can see the source highlighted in green and red for the code that was actually used and unused, respectively. This is exactly what your web developers need to see in order to optimize wisely.
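As a side note, if you want to collect similar JavaScript coverage data from an automated run rather than from the DevTools UI, one possible approach (this is my own sketch, not part of the Chrome 59 announcement, and it assumes a Selenium version with Chrome DevTools Protocol support) is to drive the journey with Selenium and query the Profiler domain:

    from selenium import webdriver

    driver = webdriver.Chrome()

    # Start precise JS coverage collection through the Chrome DevTools Protocol
    driver.execute_cdp_cmd("Profiler.enable", {})
    driver.execute_cdp_cmd(
        "Profiler.startPreciseCoverage", {"callCount": False, "detailed": True}
    )

    driver.get("https://www.geico.com")
    # ... drive the insurance-quote journey with your usual page objects ...

    # Collect per-script coverage and print a rough count of executed bytes
    coverage = driver.execute_cdp_cmd("Profiler.takePreciseCoverage", {})
    for script in coverage["result"]:
        executed = sum(
            r["endOffset"] - r["startOffset"]
            for fn in script["functions"]
            for r in fn["ranges"]
            if r["count"] > 0
        )
        print(script.get("url", "<inline>"), executed, "bytes executed")

    driver.quit()

The byte count is only an approximation (nested ranges can overlap), but it is usually enough to spot scripts that are barely exercised by a given user journey.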

Now let's look at a key release that recently landed in Firefox as well, and can be useful for both developers and testers.

Two weeks ago, Mozilla released Firefox 53, the first step in a new project called Quantum, which aims to enhance performance, stability, and more.

Among the innovations in that release are compact themes, usability features such as an estimated reading time for the page, a new permission model (see below), faster performance, and a few other bug fixes for stability.


Detailed release notes on FF 53 can be found here: https://www.mozilla.org/en-US/firefox/53.0/releasenotes/.

In addition to the newly introduced features, and in case you are not aware of them, Firefox offers quite useful developer tools, including an object inspector, a performance monitor, a debugger, and a network monitor, which can also enhance your overall web dev and test activities (see the examples below).

Performance Monitoring Tools From Within the Firefox Developer Tools

Network Monitoring Options From Within the Firefox Developer Tools

Bottom Line

With Chrome and Firefox being the leading desktop and mobile browsers, it is very important for web teams to continuously monitor early releases from Google and Mozilla, and to validate against the first Beta or Dev branches as soon as they are available. Doing so can not only reveal regressions earlier but, as shown in this post, can also surface new productivity tools that add value to your overall dev and test activities.