A Frontend Web Developer’s Guide To Testing – Book Summary

In April 2022, I published my 4th book, which is 100% focused on how frontend web application developers can leverage the wide array of test automation frameworks that continuously evolve and provide more and more capabilities.

The book is available on the Packt website (my publisher) as well as on Amazon and other book stores globally. I have received great feedback so far from the community, both on the importance of this book to practitioners and on the specific content. The book was reviewed by Bruno Bosshard, with the foreword written by Gleb Bahmutov, one of the core leaders of Cypress in the marketplace.

Book Structure

The book consists of 3 main sections:

Section 1: Fundamentals of Web App Testing

This section has the following main chapters, and it offers a structured approach to building a solid testing strategy across all methodologies – Exploratory, Functional, Performance, API, Accessibility, and more.

  1. Cross-Browser Testing Methodologies
  2. Challenges Faced by Frontend Web Application Developers
  3. Top Web Test Automation Frameworks
  4. Matching Personas and Use Cases to Testing Frameworks
  5. Introducing the Leading Frontend Web Development Frameworks

Section 2: Continuous Testing Strategy for Web App Developers

This section provides an overview of the criteria that frontend web application developers should look for when choosing a test automation framework, and looks specifically into a test coverage strategy for web apps.

  1. Map the Pillars of a Dev Testing Strategy for Web Applications
  2. Core Capabilities of the Leading JavaScript Test Automation Frameworks
  3. Measuring Test Coverage of the Web Application

Section 3: Frontend JavaScript Web Test Automation Framework Guides

The final section of the book dives deeper into the features of, and differences across, the 4 leading web application testing frameworks, and concludes with an overview of some low-code testing tools that are derived from these testing frameworks.

  1. Working with the Selenium Framework
  2. Working with the Cypress Framework
  3. Working with the Playwright Framework
  4. Working with the Puppeteer Framework
  5. Complementing Code-Based Testing with Low-Code Test Automation

An Overview of What’s Changing within Web Application Testing

While writing the book, I became aware of a few main trends that, in my opinion, will shape the future of the testing practice over the next few years.

  • Leveraging the Chrome DevTools Protocol (CDP) to enhance test automation coverage and to audit performance, network traffic, and accessibility (a short sketch follows this list).
    • Selenium 4 added support for this rich protocol.
    • Playwright and Puppeteer are built on top of CDP.
    • Cypress integrates with CDP to benefit from its core features.
  • Introduction of the modern concept of Component Testing!
    • Cypress version 10 officially supports component testing, which allows isolating a web application component for more rigorous and focused testing.
    • The latest Playwright release has also started referring to component testing.
[Image: Cypress Component Testing of a React App (source: Cypress Blog)]
  • Built-in low-code capabilities within code-based testing frameworks
    1. Selenium 4 introduces a revamped version of the Selenium IDE
    2. Playwright offers its Codegen test recorder
    3. Cypress integrates with the Chrome browser recorder
  • Community contributions and plugins!
    • Open-source software can only grow through its communities and the level of engagement and support that such tools receive. Around all of the above frameworks we see tremendous communities that provide real-time support through Slack, Gitter, Discord, and Git code samples, along with a lot of customized plugins. With these plugins, test frameworks like Cypress, Selenium, Playwright, and Puppeteer enhance their features to cover visual testing, CI server integration, accessibility testing, code coverage, API testing, CDP integration, and much more.
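To make the CDP trend concrete, here is a minimal sketch of mine (not taken from any framework’s docs) that opens a raw CDP session from Puppeteer, which is built on the protocol, and audits network traffic while a test flow runs:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // open a raw CDP session for this page and enable the Network domain
  const client = await page.target().createCDPSession();
  await client.send('Network.enable');
  client.on('Network.responseReceived', ({ response }) => {
    console.log(response.status, response.url); // audit traffic as the flow runs
  });
  await page.goto('https://example.com');
  await browser.close();
})();

The same session object can enable other CDP domains (performance metrics, accessibility snapshots, and so on), which is exactly the coverage enhancement this trend is about.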

Bottom line

I do hope that this book will provide value to frontend web application developers and test automation engineers, and serve them for the coming years. The digital transformation continues to evolve with modern web apps such as progressive web apps (PWAs), responsive web design (RWD), Flutter, and others. With such mature testing tools, practitioners are in a great place today to cover many of the sophisticated use cases and eliminate bugs earlier in their software iterations.

Happy Testing!

Resolving The Quality Visibility of Continuous Testing Across The DevOps Pipeline Environments

Guest Blog Post by: Tzvika Shahaf, Director of Product Management at Perfecto & A Digital Reporting and Analysis Expert

Intro

One of the DevOps challenges in today’s journey toward high product release velocity and quality is keeping track of CI job/test stability and health across different environments.

This is one of the top reasons bugs slip into production: lack of visibility into the DevOps delivery pipeline.

Real Life Example

Recently, I met a director of DevOps at a big US-based enterprise who shared with me one of his challenges in this respect.

At the beginning of our meeting, he indicated that his organization’s testing activity lacks a view of the feature branches that are under each team’s responsibility. This gap creates blind spots, where even the release manager struggles to assemble a reliable picture of the quality status.

The release manager and the QA lead are responsible for verifying, after each and every build cycle execution, that the failures that occurred are not critical, and accordingly for approving the merge to the master branch (while also issuing a defect in Jira for bugs/issues that weren’t fixed). The most relevant view for these personas is a suite-level list report, though the QA lead is still busy with drill-down activity to the individual test report level, as he is also interested in understanding the failure root cause analysis (RCA).

As part of the triage process, the team is looking to understand the trending history of a job, to see what the overall test result status was under each build. The team is mainly interested in understanding whether an issue is an existing defect or a new one. In addition, they look for an option to comment during the triage process (think of it as an offline action taken after the execution).

Focusing on the Problem

So far so good, right? But here’s the problem: the work conducted by each team is siloed in that team’s CI view. There’s no data aggregation to display a holistic overview of the CI pipeline’s health and trends.

Each team works on a different CI branch, and throughout the SDLC the other teams have no visibility into what happened before or what is happening now.

Even when there’s a bug, the teams are required to issue the defect to a different Jira project – so the bug-fixing process adds more inefficiency, and hence time, to the release process.

When the process is broken as described above, each new piece of functionality at the system test level or stage is merged while not all failures are inspected (lack of visibility from within the CI).

Jenkins will ignore the system test results and merge even if there’s a failure.

The Right Approach to Solving The Problem

The desired visibility from a DevOps perspective is to cover the Jenkins job/build directly from the testing dashboard, across all branches, in order to understand what changed in the specific build that failed the tests: which changes were made in the source code? Etc.

What if these teams had a CI overview that captures all testing data, including:

  1. Project name & version
  2. Job name/Build number
  3. Feature branch name or environment description (Dev, Staging, Production, etc.)
  4. Scrum team name – optional
  5. Product type – optional

Obviously, items 1-3 are a MUST in such a solution: when displayed in the dashboard UI, they give teams maximum visibility into the relevant module the user is referring to when a bug is found, and they are part of the standard DevOps process as well. Double-clicking on the visibility and efficiency point, such options can significantly narrow the view of the entire dashboard for each team lead/member, and help them focus only on their relevant feature branches.

When the QA lead reviews the CI dashboard, he/she can mark a group of tests and name the actual failure reason, which can be a bug, a device/environment issue, or something else.

Feel free to reach out to me if you run into such issues, or if you have any insights or comments on this point of view – Tzvika’s Twitter Handle

Thank You

Tzvika Shahaf

Eliminating Mobile Test Automation Flakiness and More

Mobile testing by definition is an unstable, flaky and unpredictable activity.

Even when you think you have covered all corners and created a “stable” environment, your test cycle still often gets stuck due to one or a few items.

In this post, I’ll try to identify some of the key root causes of test automation flakiness and suggest some preventive actions to eliminate them.

What Can Block Your Test Automation Flow?

From ongoing experience, the key items that most often block test automation of mobile apps are the following:

  • Popups – security, available OS upgrades, login issues, etc.
  • Ready state of DUTs – the test meets the device in the wrong state
  • Environment – device battery level, network connectivity, etc.
  • Tools and test framework fit – are you using the right tool for the job?
  • Use of the “right” object identifiers, POM, and automation best practices
  • Automation at scale – what to automate, and on what platforms?

All of the above contribute in one way or another to the end-to-end test automation execution.

We can divide the above 6 bullets into 2 sections:

  1. Environment
  2. Best Practices

Solving The Environment Factor in Mobile Test Automation

In order to address the test environment’s contribution to test flakiness, engineers need to have full control over the environment they operate in.

If the test environment and the devices under test (DUT) are not fully managed, controlled, and secured, the entire operation is at risk. In addition, the term “test environment readiness” should reflect the following:

  1. Devices are always cleaned up prior to test execution, or are in a “state”/baseline known to the developers and testers.
  2. If there are repetitive known popups, such as security permissions, install/uninstall popups, OS upgrades, or other app-specific popups, they should be accounted for in the pre-requisites of the test or prevented proactively prior to the execution.
  3. Network instability is often a key cause of flaky testing – engineers need to make sure that the devices are connected to a Wi-Fi or cellular network before test execution starts. This can be done either as a pre-requisite validation of the network (a minimal pre-flight sketch follows this list) or through more generic environment monitoring.
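As an illustration of such a pre-requisite validation, here is a minimal pre-flight sketch in Node.js; it assumes adb is on the PATH, and the helper names and thresholds are mine, for illustration only:

const { execSync } = require('child_process');

function adbShell(serial, cmd) {
  return execSync(`adb -s ${serial} shell ${cmd}`).toString().trim();
}

function deviceReady(serial) {
  // dumpsys battery prints a line such as "level: 87"
  const level = Number(/level: (\d+)/.exec(adbShell(serial, 'dumpsys battery'))[1]);
  // "1" means Wi-Fi is enabled on the device
  const wifiOn = adbShell(serial, 'settings get global wifi_on') === '1';
  return level >= 30 && wifiOn; // 30% is an illustrative threshold
}

A runner can call deviceReady() for every device in the lab before kicking off the suite, and skip or re-provision any device that fails the check.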


Following Best Practices

In previous blogs, I addressed the importance of selecting the right testing frameworks and IDEs, as well as leveraging the cloud as part of test automation at scale. After eliminating the test environment risks in the section above, it is important to make sure that both developers and test automation engineers follow proper guidelines and practices in their test automation workflow.

Since testing has shifted left toward the development team, it is important that both dev and test align on a few things:

  1. What to automate?
  2. On what platforms to test?
  3. How to automate (best practices)?
  4. Which tools should be used to automate?
  5. What goes into CI, and what is left outside?
  6. What is the role of Manual and Exploratory testing in the overall cycle?
  7. What is the role of Non-Functional testing in the cycle?

The above points (a partial list) cover some fundamental questions that each individual should keep asking continuously to ensure that their team is heading in the right direction.

Each of the above bullets can be attributed to at least one, if not many, best practices.

  1. To address the key question of what to automate, here’s a great tool (see screenshot below) provided by Angie Jones; a toy scoring sketch in the same spirit follows this list. In her suggested tool, each test scenario is validated through a set of metrics that add up to a score. The highest-scoring test cases are great candidates for automation, while the lowest-scoring ones can obviously be skipped.
  2. To address the 2nd question, on platform selection, teams should monitor their ongoing web and mobile traffic, perform market research, and learn from existing market reports/guides that address test coverage.
  3. Answering the question “how to automate” is a big one :). There is more than one thing to keep in mind, but in general, automation should be repetitive, stable, and time- and cost-efficient – if something prevents one or more of these objectives, it’s a sign that you’re not following best practices. Some best practices revolve around using proper object identifiers; others involve building the automation framework with proper tags and catches, so that when something breaks it is easy to address; and more.
  4. The question around tools and test frameworks is again a big one. Answering it right depends on the project requirements and complexity, the application type (native, web, responsive, PWA), and the test types (functional, non-functional, unit). In the end, it is important to have a mix of tools that can “play” nicely together and provide unique value without stepping on each other.
  5. Tests that enter CI should be picked very carefully. Here, it is not about the quantity of the tests but about the quality, stability, and value these tests can bring, with a focus on fast feedback. If a test requires heavy environment setup, is flaky by nature, or takes too much time to run, it might not be wise to include it in CI.
  6. Addressing the various testing types in questions 6 and 7 depends on the project objectives; however, there is clear value in each of the following test types for mobile app quality assurance:
    1. Accessibility and performance tests provide key UX indicators about your app and should be automated as much as possible.
    2. Security testing is a common oversight by many teams and should be covered through code analysis, OWASP validations, and more.
    3. Exploratory, manual, and crowd testing provide another layer of test coverage and insights into your overall app quality and, hence, should be in your test plan and divided throughout the DevOps cycle.
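To illustrate the first point, here is a toy JavaScript scoring sketch in the spirit of the tool described above (the criteria and weights are illustrative, not Angie Jones’s actual metrics):

function automationScore(test) {
  // higher business impact, run frequency, and stability favor automation;
  // higher effort-to-automate counts against it
  const weights = { businessImpact: 3, frequency: 2, stability: 2, effort: -1 };
  return Object.entries(weights)
    .reduce((score, [criterion, w]) => score + w * (test[criterion] || 0), 0);
}

// 5*3 + 4*2 + 3*2 + 2*(-1) = 27 -> a strong automation candidate
console.log(automationScore({ businessImpact: 5, frequency: 4, stability: 3, effort: 2 }));

The highest-scoring scenarios enter the automation backlog first; the lowest-scoring ones stay manual or exploratory.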

Happy Testing

Getting Started With Headless Browser Testing

[Guest Blog by Uzi Eilon, CTO, Perfecto]

The “shift left” trend is actually happening: developers, as part of the DevOps pipeline, need to test more and add more automated testing in order to release faster.

In addition, those tests are almost the last barrier before production, because traditional testing is going away.

In such a case, standard unit tests are not good enough, and E2E tests are complicated and require longer setup and preparation time.

This is the reason both Google and Mozilla released new JS-driven headless browser tools to help their developers execute automated tests.

The same happened in the mobile area, where Apple and Google released XCUITest and Espresso.

Headless browsers provide the following capabilities for developers:

  • Same language, same IDE, same working environment:
    Most web developers work with JS, and these tools are JS platforms; to add a new test, you open a new class and write standard JS code.
  • Fast feedback and execution:
    These tests need to be executed fast (sometimes on every commit); these browsers cut out the UI and rendering “noise”, connect to elements directly, and run very fast.
  • Easy to set up:
    Developers’ time is expensive, and developers will not adopt complicated processes for testing; the setup of these tools is a simple npm installation.
  • Access to all the DevTools capabilities:
    Developers need more details, and these tools give access to all the DevTools data, including accessibility, network, logs, security, and more.
    Smart tests can be very powerful and cover not only functionality but also efficiency.

In order to understand more, I played with Puppeteer, and I’m happy to share my thoughts with you.

Installation

Very simple:

npm i --save puppeteer

Documentation

There are not a lot of examples or discussions about specific issues, but I did find the API documentation, which contains everything I was looking for.

Object identification

Intuitive – the same way I connect to objects in any JS code.

Example:

  • by id: page.type('#firstName', 'Uzi');
  • by class: page.type('.className', 'Uzi');

Sync and waiting for elements

In this case, I have to admit, I struggled with the standard wait-for-navigation command; it was not stable:

await page.waitForNavigation({waitUntil: 'load'})

In the end, I used the following:

await page.waitForSelector('#firstName', {visible: true}).then(() => {
  // do the actions per page
  page.screenshot({path: 'then.png', fullPage: false});
});


UI

As part of my test, I tried to verify the screen by taking a screenshot. I liked the way I could change the browser UI capabilities and configure my page:

const fullScreen = {
  deviceScaleFactor: 1,
  hasTouch: false,
  height: 2400,
  isLandscape: false,
  isMobile: false,
  width: 1800
};
await page.setViewport(fullScreen);
// note: fullPage belongs to page.screenshot(), not to the viewport
await page.screenshot({path: 'full.png', fullPage: true});


Other DevTools options:

It is very easy to use. For example, if I want to dump the tree of frames nested under my page:

function dumpFrameTree(frame, indent) {
  console.log(indent + frame.url());
  for (const child of frame.childFrames()) {
    dumpFrameTree(child, indent + '  ');
  }
}
dumpFrameTree(page.mainFrame(), '');


Summary:

Using a headless browser tool like Puppeteer was very easy and intuitive; it felt natural to add it as part of my testing code.

In addition, setting up the headless browser environment and executing tests was very simple and fast.

On the less convenient side, I found that to get the results directly into the CI, one should add more scripting code or use other execution methods.

Lastly, this method is still ramping up, hence it has some small bugs in a few features and also lacks documentation and samples; however, as an early testing tool for white-box/unit testing, it is very promising and well positioned to complement tools such as Selenium. As a matter of fact, other browser vendors, such as Mozilla and Microsoft, are taking the same approach and investing in headless browsers.


P.S.: If you want to learn more about the growing technologies and trends in the market, I encourage you to follow my podcast with Uzi Eilon, called Testium (episode 6 is fully dedicated to this subject).

Continuous Testing Principles for Cross Browser Testing and Mobile Apps

The majority of organizations are already deep into agile practices, with the goal of becoming DevOps and continuous delivery (CD) compliant.

While some may say that a maximal percentage of test automation would bring these organizations toward DevOps, it takes more than just test automation.

To mature DevOps practices, a continuous testing approach needs to be in place, and it is more than automating functional and non-functional testing. Test automation is obviously a key enabler for being agile, releasing software faster, and addressing market events; however, continuous testing (CT) requires some additional considerations.

Tricentis defines CT as follows:

CT is the process of executing automated tests as part of the software delivery pipeline in order to obtain feedback on the business risks associated with a software release candidate as rapidly as possible. It evolves and extends test automation to address the increased complexity and pace of modern application development and delivery.

The above suggests that a CT process includes a high degree of test automation, a risk-based approach, and a fast feedback loop back to developers upon each product iteration.

How to Implement CT?

  • A risk-based approach means sufficient coverage of the right platforms (browsers and mobile devices) – such platform coverage eliminates business risks and assures a high-quality user experience. It also carries a continuous maintenance requirement as the market changes.
  • Continuous testing needs automated end-to-end testing that integrates with existing development processes while excluding errors and enabling continuity throughout the SDLC. That principle can be broken down as follows:
    • Implement the “right” tests and shift them into the build process, to be executed upon each code commit. Only reliable, stable, and high-value tests would qualify to enter this CT test bucket (a minimal tagging sketch follows this list).
    • Assure the CT test bucket runs within only one CI – in CT, there is no room for multiple CI channels.
    • Leverage reporting and analytics dashboards to reach “smart” testing decisions and actionable feedback that support a continuous testing workflow. As the product matures, tests need maintenance, and some may be retired and replaced with newer ones.
  • A stable lab and test environment is key to ongoing CT processes. The lab should be at the heart of your CT, and should support the above platform coverage requirements, as well as the CT test suite with the test frameworks that were used to develop those tests.
  • Where possible, utilize artificial intelligence (AI) and machine learning (ML)/deep learning (DL) solutions to better optimize your CT test suite and shorten the overall release activities.
  • Continuous testing is seamlessly integrated into the software delivery pipeline and DevOps toolchain – as mentioned above, regardless of the test frameworks, IDEs, and environments (front-end, back-end, etc.) used within the DevOps pipeline, CT should pick up all relevant testing (unit, functional, regression, performance, monitoring, accessibility, and more) and execute it in parallel and in an unattended fashion, to provide a “single voice” for a go/no-go release – one that happens every 2-3 weeks.
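As one concrete way to realize the CT test bucket above, here is a minimal sketch assuming a Mocha-style JavaScript suite (the @ct tag is illustrative): qualifying tests carry a tag in their titles, and the single CI channel runs only that bucket on each commit.

// only reliable, stable, high-value tests carry the @ct tag
describe('checkout @ct', function () {
  it('completes a purchase with a saved card @ct', async function () {
    // ...drive the app with the team's framework of choice...
  });
});

// the CI job then executes only the tagged bucket, e.g.:
//   mocha --grep "@ct"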

Lastly, for a CT practice to work time after time, the above principles need to be continuously optimized, maintained, and adjusted as things change, whether within the product roadmap or in the market.

Happy CT!

Mobile, Cross Browser Testing, DevOps and Continuous Testing Trends and Projections for 2018

As we are about to wrap up 2017, it’s the right time to get ready for what’s expected next year in the mobile, cross-browser testing, and DevOps landscape.

To categorize this post, I will divide the trends into the following buckets (there may be a few more points, but I believe the ones below are the most significant):

  • DevOps and Test Automation on Steroids Will Become Key for Digital Winners
  • Artificial Intelligence (AI) and Machine Learning (ML)/Tools Alignment as Part of Smarter Testing throughout the Pipeline
  • IoT and Digital Transformation Moving to Prime Time


DevOps and Automation on Steroids

In 2017, we saw tremendous adoption of more agile methods, ATDD, and BDD, with organizations leaving legacy tools behind in favor of faster, more reliable, agile-ready testing tools – ones that can fit the entire continuous testing effort, whether carried out by Dev, BA, Test, or Ops.

In 2018, we will see the above grow to a higher scale, with more manual and legacy-tool skills transforming into modern ones. The growth in continuous testing (CT), continuous integration (CI), and DevOps will also translate into a much shorter release cadence as a bridge toward real continuous delivery (CD).


Related to the above, to be ready for the DevOps and CT trend, engineers need to become more deeply familiar with tools like Espresso, XCUITest, EarlGrey, and Appium on the mobile front, and with open-source web frameworks like Google’s headless browser project Puppeteer, Protractor, and other WebDriver-based frameworks.

In addition, teams should optimize their test automation suites to include more API and non-functional testing, as the UX aspect becomes more and more important.

Shifting as many tests as possible left and right is not a new trend, requirement, or buzzword – nothing has changed in my mind about the importance of this practice: the more you can automate and cover earlier, the easier it will be for the entire team to overcome the issues, regressions, and unexpected events that occur in the project life cycle.

AI, ML, and Smarter Test Automation

While many vendors are searching for tools that can optimize their test automation suites and shorten overall execution time on the “right” platforms, the two terms AI and ML (or deep learning) are still unclear to many tool vendors and are used from varying perspectives that do not always mean AI or ML 🙂

The end goal of such solutions is very clear, and the problem they aim to solve is real: long testing cycles on plenty of mobile devices, desktop browsers, IoT devices, and more generate a lot of data to analyze, and as a result slow down the DevOps engine. Efficient mechanisms and tools that can crawl through the entire test code, understand which tests are the most valuable, and identify which platforms are the most critical to test on – due to customer usage, history of issues, etc. – can clearly address such pain.

Another angle, or goal, of such tools is to continuously provide more reliable and faster test code generation. Coding takes time, requires skills, and varies across platforms. Having a “working” ML/AI tool that can scan through the app under test and generate a robust page object model and functional test code that runs on all platforms, and that “responds” to changes in the UI, can really speed up TTM for many organizations and focus teams on the important SDLC activities, as opposed to forcing Dev and Test to spend precious time on test code maintenance.

IoT and the Digital Transformation

In 2017, Google, Apple, Amazon, and other technology giants announced a few innovations around digital engagement. To name a few: better digital payments, better digital TV, AR and VR development APIs, and new secure authentication through Face ID. IoT hasn’t shown a huge leap forward this year; however, what I did notice was that for specific verticals like healthcare and retail, IoT started serving a key role in digital user engagement and digital strategy.

In 2018, I believe the market will see an even more advanced wave in the overall digital landscape, where Android and Apple TV, IoT devices, smart watches, and other digital interfaces become more standard in the industry, requiring enterprises to re-think and re-build their entire test labs to fit these new devices.

Such a trend will also force test engineers to adapt to the new platforms and re-architect their test frameworks to support more of these screens, whether in one script or several.

Some insights on testing IoT, specifically in the healthcare vertical, were recently presented by my colleague Amir Rozenberg – I recommend reviewing the slides below.

https://www.slideshare.net/AmirRozenberg/starwest-2017-iot-testing/ 


Bottom Line

Do not immediately change whatever you do today, but validate whether what you have right now is future-ready and can sustain what’s coming in the near future, as outlined above.

If DevOps is already in practice in your organization, fine – make sure you can scale it, shorten release times, increase test and platform automation coverage, and optimize your overall pipeline through smarter techniques.

The AI and ML buzz is really happening; however, the market needs to properly define what it means to introduce these into the SDLC, and what success would look like for teams that consider leveraging them. From a landscape perspective, these tools are not yet mature and ready for prime time, which leaves more time to properly get ready for them.

Happy New 2018 to My Followers.

Enabling Mobile Testing In a Fast Growing DevOps Reality

6 months ago, I launched my 1st book, called “The Digital Quality Handbook”.

The book aims to address the key challenges in assuring high mobile (as well as web) app quality by avoiding pitfalls that are commonly practiced in the industry.

I have also recently joined the ISTQB working group to influence the material in the mobile certification course, where I plan to include insights from the book as well.

In this book, I host top leaders from the industry, touching on the most important aspects of assuring quality in a DevOps reality.

The above image is taken from Amazon, which recommends my book next to the leading DevOps practitioner books – another strong validation of the book’s relevancy and value.

A few highlights from the book are below:

  1. Shifting quality left and right, to cover as many tests as possible automatically throughout the release pipeline, is key to moving faster and identifying issues earlier in the process (Angie Jones from Twitter, Manish Maturia from InfoStretch, and others provide practitioner-level insights and tips).
  2. Testing on the right platforms and OS versions is key to assuring high quality across different devices (new, legacy, popular) in various locations and environments.
    1. In the book I refer to this magazine, which I author on a quarterly basis, and I highly recommend subscribing to receive this free asset upon each release: http://info.perfectomobile.com/factors-magazine.html
  3. Robust automation is achieved through best practices such as building a page object model (POM) and using unique object locators rather than flaky XPaths (see the short sketch after this list). I refer to a free online tool that can help score your objects as part of your test automation development: http://xpathvalidator.projectquantum.io/
  4. Testing not only via the UI is another key to success: complementing UI testing with API-level testing can reduce testing time, provide faster feedback, and deliver other value. This chapter was actually developed by my twin brother Lior Kinbruner 🙂 – worth checking out!
  5. Performance testing and UX are another challenge and key to success. A full section of the book is dedicated to wind tunnel testing and user experience testing (JeanAnn Harrison contributes a lot here, together with Amir Rozenberg).
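To illustrate the third point, here is a short page object model sketch using Puppeteer’s page API (the class and selectors are illustrative, not taken from the book):

// the page object hides the locators, so tests stay readable and a UI
// change is fixed in one place
class LoginPage {
  constructor(page) {
    this.page = page;
    this.username = '#username'; // unique IDs beat flaky XPaths
    this.password = '#password';
    this.submit = 'button[type="submit"]';
  }

  async login(user, pass) {
    await this.page.type(this.username, user);
    await this.page.type(this.password, pass);
    await this.page.click(this.submit);
  }
}

A test then reads as await new LoginPage(page).login('user', 'secret'); with no raw selectors in the test body.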

The book was the #1 new best seller in its category on Amazon, and it is still rocking today, more than 6 months later. As of today it is #43 in the overall software testing book category, which is a great validation and honor for me and the contributors.


If you still haven’t got a copy of the book, I really encourage you to do so – I am already planning my next journey, so stay tuned 🙂

Optimizing Mobile Test Automation Across The Pipeline

With the massive innovation that drives the digital market these days, organizations continue to develop new features, as well as new test code to cover those features.

What I’ve learned is that test code developers do not always stop, look back into their existing test suites, and validate whether the new tests being developed are somehow a superset of existing ones. In addition, legacy tests are a continuous load and overhead on the length of your SDLC cycles if they are not maintained over time.


Many Owners To The Same Problem

Since we live in an agile/DevQAOps world, test code development is not a QA-only problem, but rather everyone’s. Tests are executed throughout the pipeline, from development through integration to pre/post-production testing.

Use of a smart tagging mechanism for your test scenarios (login), suites (App A), and types (unit, regression) can be a good step toward gaining control over your tests, as sketched below.
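Here is a small JavaScript sketch of that tagging idea (the tag names are illustrative): every test carries scenario, suite, and type tags, so both runs and reports can later be filtered down to one functional area.

const tests = [
  { name: 'valid login',    tags: ['login', 'AppA', 'regression'] },
  { name: 'password reset', tags: ['login', 'AppA', 'unit'] },
];

// e.g., pull only the login regression subset out of a large suite
const loginRegression = tests.filter(
  (t) => t.tags.includes('login') && t.tags.includes('regression')
);

Most test frameworks offer an equivalent built-in mechanism (groups, annotations, grep on titles), so the same taxonomy can drive both execution filters and report views.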

Without some context, discipline, and continuous structured validation of the tests, it will become harder as your SDLC progresses to debug, analyze, and solve defects (it would be like finding the key in the visual mess below).

[Image: Find the Key in the Picture]

Recommended Practices

  • Develop the tests with context, tags, and proper annotations that will make sense to you and your team even 12 months from the development day. Make sure that your execution reports give you a way to filter by these annotations, so you can view only a given functional area, platform, etc.
  • Match the capabilities of your devices under test to the test code and application under test. Make sure that you focus, e.g., your fingerprint-based tests only on the devices that support them (API XX and above); see the sketch after this list.
  • Perform a test code review at an agreed-upon cadence – in such a review, group your feature-specific test suites and try to optimize, merge, eliminate flakiness, identify missing coverage areas, etc. It gets harder to do as time progresses, so depending on your release cadence and test development maturity, set the right goals – more reviews are better than fewer; each review will also be shorter and more efficient that way, since the delta between reviews will be smaller.
  • Drive joint Dev, Test, Product, and Marketing decisions based on data – when you have the ability to get quality analysis from your entire test suites, it is recommended to gather all counterparts and brainstorm on the findings: which tests are most effective, can we shrink the release cycles based on the data, are we missing tests for specific areas, are there platforms that are buggier than others, which tests take longer than others to finish, etc.
  • Optimize your CI and build-acceptance testing – based on the above intelligence, teams can reach data-driven decisions about what to include in their CI as well. Testing in the build cycle via CI should be fast and reliable, with zero false positives. With quality insights on your tests, you can decide on and certify the most valuable and fastest tests for this CI bucket, and thereby shrink the overall process without risking coverage.
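For the device-capability point above, a hedged sketch (the lab list and capability check are illustrative; Android’s fingerprint API arrived in API level 23):

function supportsFingerprint(device) {
  return device.os === 'Android' && device.apiLevel >= 23;
}

const lab = [
  { name: 'Galaxy S9', os: 'Android', apiLevel: 28 },
  { name: 'Nexus 5', os: 'Android', apiLevel: 21 },
];

// run the fingerprint suite only where it can actually pass
const fingerprintTargets = lab.filter(supportsFingerprint);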


Bottom Line

A test is code, and just as you refactor, maintain, retire, and improve your code, you should do the same with your tests. Make sure to always be in control of your tests, and thereby gain control over the quality of your app in a continuous manner.

Happy Testing!

Criteria for Choosing The Right Open-Source Test Automation Tools

Last night, together with my colleague Amir Rozenberg, I presented a session at a local Boston meetup hosted by BlazeMeter.

The subject was the shift from legacy to open-source frameworks: the motivations behind it, and also the challenges of adopting open source without a clear strategy, especially in the digital space, which includes 3 layers:

  1. Open-source connectivity to a lab
  2. Open-source test coverage capabilities (e.g., can an open-source framework support system-level testing, visual analysis, real environment settings, and more?)
  3. Open-source reporting and analysis capabilities

During the session, Amir also presented an open-source BDD/Cucumber-based test framework called Quantum (http://projectquantum.io)

Full presentation slides can be found here:

Happy Reading

Eran & Amir

How to Efficiently Test Your Mobile App for Battery Drain?

With my two decades of experience in the mobile space, I have rarely run across efficient mobile app testing that validates the app’s resource usage as part of the overall test strategy and test plan.

Teams often focus on the app’s usability, functionality, performance, and security, and as long as the app does what it was designed to do, it gets pushed to production as is.

Resource Consumption As an App Quality Priority

Let’s have a look at one of 2016’s most popular native mobile apps: Pokemon Go. This mobile app alone requires constant GPS location services, keeps the screen fully lit when in the foreground, operates the camera, plays sounds, and renders 3D graphics content.

If we translate the above resource consumption into running this app on a fully charged Android device, research shows that in 2 hours and 40 minutes the phone will drop from 100% to 0% battery.

The thing is, of course, that the end user will typically have at least 10 other apps running in the background at the same time, so the device’s battery will drain even faster.

From recent research done by Avast, you can see two sets of the greediest apps in the market in Q3 2016. The two visuals below, taken from the report, show two sets of apps – one of apps usually launched at device startup, and a second of apps mostly launched by users.

[Image: Avast Q3 2016 – greediest apps launched at device startup]

[Image: Avast Q3 2016 – greediest apps launched by users]

How to Test the App for Battery Drain?

Teams need to come as close as possible to their end users; this is a clear requirement in today’s market. From a battery drain testing perspective, this means the test environment needs to mimic the real user in terms of the device, OS, network conditions (2G, 3G, Wi-Fi, roaming), the popular background apps installed and running on the device, and, of course, a varying set of devices in the lab with different battery states.

  • Test against multiple devices 

Device hardware differs across models and manufacturers, and each battery will obviously have a different capacity. After a while, each device will also have degraded battery chemistry that impacts performance, how long the battery lasts, and more. This is why a variety of new and legacy devices with different battery capacities needs to be a consideration in any mobile device lab. This is a general requirement for mobile app quality, but in the context of battery testing it takes on a different angle that ought to be leveraged by the teams.

  • Listen to the market and end users

Since the market constantly changes, the “known state” and quality of your app, including battery and other resource consumption, may change as well. This can happen due to the app performing differently on a new device that you have no experience with, or due to a new OS version released to the market by Google or Apple – we have seen plenty of examples of this, including the recent iOS 10.2 release.

It is very hard to monitor these things in production, so one piece of advice is to start testing the app on OS beta versions and measure the app’s battery consumption before the OS is released as GA – this can eliminate issues around new OS versions. Another method commonly used by mobile teams is to monitor the app store and get notified by end users about such issues (less preferred). Continuously including such tests on a refreshed device lab will reduce the risks and identify issues earlier in the cycle, prior to production. Make these tests, or a subset of them, part of your CI cycle to enhance test coverage and reduce risks.


Summary

In today’s market, there is no good automation method for testing app battery drain. Therefore, my recommendation is to create a plethora of devices in the lab with varying conditions, as mentioned above, and measure the battery drain through native apps on the devices, as well as with timer measurements. The tests should run first against the app on a clean device and then on a device loaded like a real end user’s.