Mobile Testing: Difference Between BDD, ATDD/TDD

Last week I presented at Joe Colantonio's Automation Guild online conference – kudos to Joe for a great event!

[Image: Automation Guild logo]

Among the many interesting questions I received after my session – such as what is the best test coverage for mobile projects, and how to design effective non-functional and performance testing for mobile and RWD – I also got a question about the differences between BDD and ATDD.

My session was about an open-source test automation framework called Quantum that supports Cucumber BDD (Behavior Driven Development), which naturally triggered the question.

Definition: BDD and ATDD

ATDD: Acceptance Test Driven Development

Based on Wikipedia's definition, ATDD is a development methodology based on communication between the business customers, the developers, and the testers. ATDD encompasses many of the same practices as specification by example, behavior-driven development (BDD), example-driven development (EDD), and support-driven development, also called story test-driven development (SDD).

All these processes aid developers and testers in understanding the customer's needs prior to implementation, and allow customers to converse in their own domain language.

ATDD is closely related to test-driven development (TDD), but differs in its emphasis on developer-tester-business customer collaboration. ATDD encompasses acceptance testing, but highlights writing the acceptance tests before developers begin coding.

BDD: Behavior Driven Development

Again, based on Wikipedia's definition, BDD is a software development process that emerged from test-driven development (TDD). It combines the general techniques and principles of TDD with ideas from domain-driven design and object-oriented analysis and design, to provide software development and management teams with shared tools and a shared process for collaborating on software development.

Mobile Testing In the Context of BDD and ATDD

The way to look at these three agile practices – BDD, ATDD and TDD – is in the context of today's higher velocity and quality requirements.

Organizations aim to release to market faster, with great quality and sufficient test coverage, while at the same time, of course, meeting business goals and customer satisfaction. To achieve these goals, teams ought to collaborate closely from the very first app design and development stages.

Once organizations have the customer's product requirements and can start developing both the product and the tests through user stories, acceptance criteria and the like, several goals can be met:

  • High customer-vendor alignment == customer satisfaction
  • Faster time to market, since the app is tested throughout the SDLC
  • Quality that is in sync with customer needs, with far fewer redundant tests
  • No communication gaps or barriers between Dev, Test, Marketing and Management

 

Looking at the example below of BDD-based test automation code, it is very easy to understand the functionality and use cases under test, as well as the desired test outcome.

[Screenshot: Quantum BDD test scenario]

As can be seen in the screenshot above, the script installs and launches the TestApp.APK file on an available Samsung device, performs a successful login and presses a menu item. As a final step, it performs a mobile visual validation to assert, as an automation anchor, that the test reached the expected screen.
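To make this concrete, here is a minimal sketch of what the step definitions behind such a Gherkin scenario could look like in plain Java with Cucumber and an Appium Android driver. The step wording, element locators and APK path are hypothetical examples, not Quantum's actual API, which wraps much of this boilerplate:

```java
import static org.junit.Assert.assertTrue;

import io.appium.java_client.MobileBy;
import io.appium.java_client.android.AndroidDriver;
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class LoginSteps {

    // In a real project the driver is created by the framework (e.g. in a @Before hook).
    private AndroidDriver driver;

    @Given("the TestApp is installed and launched on an available Samsung device")
    public void installAndLaunchApp() {
        driver.installApp("/builds/TestApp.apk"); // hypothetical build path
        driver.launchApp();
    }

    @When("I log in with valid credentials")
    public void logInWithValidCredentials() {
        driver.findElement(MobileBy.AccessibilityId("username")).sendKeys("demo-user");
        driver.findElement(MobileBy.AccessibilityId("password")).sendKeys("demo-pass");
        driver.findElement(MobileBy.AccessibilityId("login")).click();
    }

    @When("I press the {string} menu item")
    public void pressMenuItem(String item) {
        driver.findElement(MobileBy.AccessibilityId(item)).click();
    }

    @Then("I should land on the expected screen")
    public void verifyExpectedScreen() {
        // Automation anchor: assert that a unique element of the target screen is visible.
        assertTrue(driver.findElement(MobileBy.AccessibilityId("home-header")).isDisplayed());
    }
}
```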

It is important to mention that the test frameworks and tools that support TDD, ATDD and BDD are in many cases similar. In our case above, one can develop and test from either a BDD or an ATDD standpoint using a Cucumber-based test automation framework (Cucumber, Quantum).

If we compare the above functional use case – or, in the Cucumber language, "Scenario" – to a scenario that would fit an ATDD-based approach, we would most likely need to introduce the well-known "3 amigos" approach: the three perspectives of customer (what problem are we trying to solve?), development (how might we solve this problem?) and testing (what about…?).

 

Since true ATDD best practice is to define Gherkin-like app scenarios before development even starts, the above BDD example would serve as a precondition test for the app development team, making sure they develop against acceptance criteria – in our example, a successful app install and login.
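This ordering shows up naturally in Cucumber-based tooling: the acceptance scenario and its step stubs are written with the 3 amigos first, and the steps stay pending until the feature exists. A minimal, hypothetical sketch:

```java
import io.cucumber.java.PendingException;
import io.cucumber.java.en.Given;

public class AcceptanceSteps {

    // Defined with the "3 amigos" before any app code exists; the scenario
    // runs as "pending" until developers implement the feature and this step.
    @Given("a registered user can install the app and log in successfully")
    public void userCanInstallAndLogIn() {
        throw new PendingException();
    }
}
```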

An additional example of an acceptance test, one that also involves a login/register layer, would look like this:

[Image: sample acceptance test for a login/register flow]

I can understand the confusion between BDD and ATDD since, as mentioned above, they can look very much alike.

Bottom line, and as I responded at the event last week: BDD, ATDD and TDD are all methods for better syncing the various counterparts involved in shipping a working product to market faster, with higher quality and with the right functionality to meet the customer requirements. Implementing them using the Gherkin method makes a lot of sense due to the easy alignment and common language these counterparts use during the SDLC workflow.

Happy Testing!

How to Efficiently Test Your Mobile App for Battery Drain?

In my two decades in the mobile space, I have rarely run across mobile app testing that efficiently covers the app's resource usage as part of the overall test strategy and test plan.

Teams often focus on the app's usability, functionality, performance and security, and as long as the app performs what it was designed to do, it gets pushed to production as is.

Resource Consumption As an App Quality Priority

Let's look at one of 2016's most popular native mobile apps: Pokemon Go. This app alone requires constant GPS location services, keeps the screen fully lit when in the foreground, operates the camera, plays sounds and renders 3D graphics content.

If we translate the above resource consumption into running this app on a fully charged Android device, research shows that in 2 hours and 40 minutes the phone will drop from 100% to 0% battery.

The thing is, of course, that the end user will typically have at least 10 other apps running in the background at the same time, so the device's battery drain will be even faster.

Recent research by AVAST identifies two sets of the greediest apps in the market in Q3 2016. The two visuals below, taken from the report, show apps that usually launch at device startup and apps mostly launched by users.

[Chart: AVAST Q3 2016 – greediest apps launched at device startup]

[Chart: AVAST Q3 2016 – greediest apps launched by users]

How to Test the App for Battery Drain?

Teams need to get as close as possible to their end users; this is a clear requirement in today's market. From a battery drain testing perspective, this means the test environment needs to mimic the real user in terms of the device, OS, network conditions (2G, 3G, Wi-Fi, roaming), popular background apps installed and running on the device and, of course, a varying set of devices in the lab with different battery states.

  • Test against multiple devices 

Device hardware differs across models and manufacturers, and each battery has a different capacity. After a while, every device's battery chemistry degrades, which impacts performance, how long a charge lasts and more. This is why a variety of new and legacy devices with different battery capacities needs to be a consideration in any mobile device lab. This is a general requirement for mobile app quality, but in the context of battery testing it takes on a different angle that teams ought to leverage.

  • Listen to the market and the end users

Since the market constantly changes, the "known state" and quality of your app, including battery and other resource consumption, may change as well. This can happen because the app performs differently on a new device you have no experience with, or because Google or Apple release a new OS version to the market – we have seen plenty of examples of that, including the recent iOS 10.2 release.

It is very hard to monitor these things in production, so one piece of advice is to start testing the app on OS beta versions and measure the app's battery consumption before the OS is released as GA – this can eliminate issues around new OS versions. Another method commonly used by mobile teams is to monitor the app store and get notified by end users about such issues (less preferred). Continuously running such tests on a refreshed device lab will reduce the risks and identify issues earlier in the cycle, prior to production. Make these tests, or a subset of them, part of your CI cycle to enhance test coverage and reduce risks.


Summary

In today's market there is no good automation method for testing app battery drain, so my recommendation is to build a plethora of devices in the lab with the varying conditions mentioned above and measure battery drain through the devices' native battery apps as well as timer measurements. The tests should run first against the app on a clean device and then on a real end-user device.
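That said, a rough drain measurement can be scripted around the lab. Below is a minimal Java sketch, assuming adb is installed and an Android device is attached with USB debugging enabled, that polls the battery level via `adb shell dumpsys battery` while you exercise the app, so you can compare drain rates across devices and app builds:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class BatteryDrainProbe {

    // Reads the current battery level (0-100) from "adb shell dumpsys battery".
    static int batteryLevel() throws Exception {
        Process p = new ProcessBuilder("adb", "shell", "dumpsys", "battery").start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                line = line.trim();
                if (line.startsWith("level:")) {
                    return Integer.parseInt(line.substring("level:".length()).trim());
                }
            }
        }
        throw new IllegalStateException("No battery level found - is a device connected?");
    }

    public static void main(String[] args) throws Exception {
        int start = batteryLevel();
        long t0 = System.currentTimeMillis();
        // Sample every 5 minutes while the app scenario runs on the device.
        while (true) {
            Thread.sleep(5 * 60 * 1000);
            int now = batteryLevel();
            double hours = (System.currentTimeMillis() - t0) / 3_600_000.0;
            System.out.printf("level=%d%% drained=%d%% rate=%.1f%%/hour%n",
                    now, start - now, (start - now) / hours);
        }
    }
}
```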

How to Adapt to Increasing Mobile Testing Complexity Due to External App Context Features?

If you have followed my blogs, white papers and webinars over the past years, you are already familiar with the most common challenges around mobile app testing, such as:

  • Device/OS proliferation and market fragmentation
  • Ability to test the real end-user environment within and outside of the app
  • Testing both the visual aspects/UI as well as native elements of the app
  • Keeping up with the agile release cadence while maintaining high app quality
  • Testing for a full digital experience across mobile, web, IoT, etc.

 

While the above are addressed to some degree by various tools, techniques and guidelines, there is a growing trend in the industry, on both the iOS and Android platforms, that adds another layer of complexity for testers and developers. With iOS 10 and Android 7 as the latest OS releases, but also with earlier versions, we are starting to see more ways to engage with the app from outside of the app.

[Image: iMessage apps in iOS 10]

If we look at the recent change Apple made in iOS 10 around iMessage, it is clear that Apple is trying to enable mobile app developers to better engage with their end users even outside of the app itself. Heavy messaging users can remain in the app/screen they are using and respond quickly to external app notifications in various ways.

This innovation is a clear continuation of the Force Touch (3D Touch) functionality introduced with iOS 9 and the iPhone 6S/6S Plus, which allows users to press the app icon without opening the full app and perform a quick action such as writing a new Facebook status, uploading an image to Facebook or other app-related activities.

Add to the above the recent Android 7.1 App Shortcuts support, which allows users to create shortcuts on the device screen for app capabilities they commonly use. Another example you can refer to is the Android 7.0 split-window feature, which allows an app to consume 1/2 or 1/3 of the device screen while the remaining screen is allocated to a different app that might compete with yours for HW/system resources.

So What Has Changed?

Quick answer – A lot 🙂

As I recently wrote in my blog on mobile test optimization, test planning for different mobile OS versions is becoming more and more complex and requires a solid methodology, so teams can direct the right tests (manual/automated) to the right platforms based on the app's supported features and the devices' capabilities. Testing app shortcuts (see the example below), for instance, is obviously irrelevant on Android 7.0 and below, so the test matrix/decision tree needs to accommodate this.

[Screenshot: Android 7.1 app shortcuts]

To be able to test these different app contexts, you need to make sure you have the following capabilities in place from a tool perspective, and also include the following test scenarios in your test plan.

  1. Testing tools must now support not only the app under test but also the full device system, in order to engage with system popups, iMessage apps, the device screen for force-touch-based testing, etc.
  2. The test plan, in whatever tree or tool it is managed, ought to accommodate the variance between platforms and devices and allow relevant mapping of apps–>features–>devices (see my referenced blog above for more insights).
  3. New test scenarios should be considered if your app leverages such capabilities:
    1. What happens when incoming events such as calls or text messages occur while the app is interacting within an iMessage/split-screen/shortcut context? And what happens when these apps receive other notifications (on the lock screen or within the unlocked device screen)?
    2. What happens to the app under degraded environment conditions such as loss of network connection or flight mode being turned on? Note that apps like iMessage rely on network availability (see the sketch after this list for one way to automate such network changes on Android).
    3. If your app engages with a 3rd-party app, take into account that these apps (Facebook, iMessage and others) are also exposed to defects that are not under your control. If they misbehave or crash, you need to simulate such a scenario early in your testing activities and understand the impact on your app and business.
    4. Apps that work with iMessage, as an example, might require a different app submission process and might be part of a separate binary build that needs to be tested properly – take this into account.
    5. Since the above complexities all depend on the market and OS releases, make sure that any beta OS version that is released gets proper testing by your teams to ensure no regressions occur.
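For scenario 3.2 above, parts of the degraded environment can be automated, at least on the Android side. Here is a minimal Java sketch, assuming adb is available and a device is attached (the `svc` commands are standard Android shell commands, though some devices restrict them), that drops and restores connectivity around a test step:

```java
public class ConnectivityToggler {

    // Runs an adb shell command against the attached device and waits for it to finish.
    static void adb(String... shellArgs) throws Exception {
        String[] cmd = new String[shellArgs.length + 2];
        cmd[0] = "adb";
        cmd[1] = "shell";
        System.arraycopy(shellArgs, 0, cmd, 2, shellArgs.length);
        new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }

    public static void main(String[] args) throws Exception {
        // Simulate a network loss while the app is mid-flow.
        adb("svc", "wifi", "disable");
        adb("svc", "data", "disable");

        // ... drive the app here and assert it degrades gracefully ...
        Thread.sleep(30_000);

        // Restore connectivity and verify the app recovers.
        adb("svc", "wifi", "enable");
        adb("svc", "data", "enable");
    }
}
```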

I hope these insights help you plan for a trend that I see growing in the mobile space and that, in my opinion, adds an extra layer of challenges to existing test plans.

Comments are always welcome.

Happy Testing!

3 Ways to Make Mobile Manual Testing Less Painful

With 60% of the industry still operating at only 30% mobile test automation, it's clear that manual testing takes up a major chunk of a testing team's time. Acknowledging the need for both manual and automated testing, and without drilling down into the caveats of manual testing, let's look at how teams can reduce the time it takes and even transition toward a more automated approach.

1. Manual and Automation Testing: Analyze Your Existing Test Suite

You and your testing team should be well-positioned to optimize your test suite in the following ways:
– Scope out irrelevant manual tests per specific test cycles (e.g. not all manual tests are required for each sanity or regression test)
– Eliminate tests that consistently pass and don’t add value
– Identify and consolidate duplicate tests
– Suggest manual tests that are better-suited for automation (e.g. data driven tests or tests that rely on timely setup of environments)

The result should be a mixture of both manual and automation testing approaches, with the goal of shifting more of the testing toward automation.

[Image: manual and automation testing mix]

2. Consider a Smooth Transition to “Easier” Test Frameworks

In most cases, the blocker for increasing test automation lies inside the QA team, and is often related to a skills gap. Today's common mobile test automation tools are open-source, and require medium to high-level development skills in languages such as Java, C#, Python and JavaScript. These skills are hard to find in traditional testing teams. On the other hand, if QA teams utilize alternatives such as Behavior Driven Development (BDD) solutions like Cucumber, they create an easier path toward automation by virtue of a common language that is easy to start with and to scale.

[Image: Cucumber behavior-driven development example]

3. Shift More Test Automation Inside the Dev Cycle

When thinking about your existing test automation code and the level of coverage it provides, there may be functional overlap between the automated and manual testing. If the automation scripts are shared across the SDLC and are also executed post-commit on every build, this can shrink some of the manual validation work the testing team needs to do. Also – and not coincidentally – by joining forces with your development and test automation teams and having them help with test automation, you will decrease the workload and create shorter cycles, resulting in happier manual testers.

Bottom Line:

Most businesses will continue to have a mix of manual and automation testing. Manual testing will never go away, and in some cases it is even a product requirement. But as you optimize your overall testing strategy, investing in techniques like BDD can make things much easier for everyone involved in both manual and automation testing throughout the lifecycle.


4 Benefits of Using the Espresso Test Automation Tool

If you’re an Android developer, you’re probably familiar with Google’s Espresso test automation framework. As an open-source tool, it’s very easy for developers to use and extend within their working environment (Android Studio IDE).

But before discussing the benefits of Espresso, let’s understand the motivations and pains developers and test automation engineers face today while trying to validate their Android application (APK) throughout the build/dev/test workflow.

  • Each build needs to be validated after code changes are made.
  • Dependencies on remote servers and other workstations for testing slow down the process.
  • Unit and functional tests need to be easy to execute from both an IDE and continuous integration perspective.
  • Apps need to be tested using the latest Android OS APIs that support new platform features and OS versions.
  • Testing needs to occur on both emulators and real devices.

In light of these challenges, it's clear why adoption of the Espresso automation framework is high. Even though Espresso is an instrumentation-based test framework, it has many benefits for both developers and test automation engineers. It uses JUnit under the hood, so Espresso is easy to use within leading IDEs and provides useful testing annotations and assertions. It's also fully integrated into the leading Google Android IDE – Android Studio.

Here are four main benefits of using Espresso:

1. Espresso workflow is simple to use

Espresso works by allowing developers to build a test suite as a stand-alone APK that can be installed on target devices alongside the application under test and executed very quickly.

2. Fast and reliable feedback to developers

As developers try to accelerate deployment, Espresso gives them fast feedback on their code changes so they can move on to the next feature or defect fix; a robust and fast test framework plays a key role here.

Espresso does not require any server (like Selenium Remote WebDriver) to communicate with; instead it runs side-by-side with the app and delivers very fast (minutes) test results to the developer.

3. Less mobile testing flakiness

Because Espresso offers a synchronized method of execution, the stability of the test cycle is very high. A built-in mechanism in Espresso validates that an element or object is actually displayed on the screen before moving on to the next step in the test. This prevents test executions from breaking on "object not detected" and similar errors.
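Espresso's automatic synchronization covers the UI thread and AsyncTask work out of the box. For background work it cannot see, the app can expose an IdlingResource that Espresso will wait on. A minimal sketch, assuming a hypothetical app-side hook class:

```java
import android.support.test.espresso.idling.CountingIdlingResource;

// Hypothetical app-side hook: wraps background work so Espresso can wait on it.
public class BusyTracker {

    public static final CountingIdlingResource BUSY =
            new CountingIdlingResource("background-work");

    public static void begin() { BUSY.increment(); } // call before async work starts
    public static void end()   { BUSY.decrement(); } // call when async work completes
}
```

In the test's setup, a single call such as `Espresso.registerIdlingResources(BusyTracker.BUSY);` then tells Espresso to block until the app reports that it is idle.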

4. Developing Espresso test automation isn’t hard

Developing Espresso test automation is quite easy. It is based on Java and JUnit, a core skillset for any Android app developer. And because Espresso works seamlessly within the Android Studio IDE, there's no setup or ramp-up, and no "excuses" not to shift quality into the in-cycle stage of the app SDLC.
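For illustration, here is a minimal login test as it would look in the support-library era of Espresso. The activity and view IDs (MainActivity, R.id.username and friends) are hypothetical placeholders for your own app:

```java
import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.action.ViewActions.closeSoftKeyboard;
import static android.support.test.espresso.action.ViewActions.typeText;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class LoginTest {

    // Launches the (hypothetical) MainActivity before each test.
    @Rule
    public ActivityTestRule<MainActivity> activity =
            new ActivityTestRule<>(MainActivity.class);

    @Test
    public void successfulLoginShowsWelcomeScreen() {
        onView(withId(R.id.username)).perform(typeText("demo-user"), closeSoftKeyboard());
        onView(withId(R.id.password)).perform(typeText("demo-pass"), closeSoftKeyboard());
        onView(withId(R.id.login_button)).perform(click());

        // Espresso waits for the UI to be idle before checking the result.
        onView(withText("Welcome")).check(matches(isDisplayed()));
    }
}
```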

In addition to the above, there is of course the large community, backed by Google, that pushes the Espresso test automation framework forward and allows easy and fast ramp-up for newcomers.

Learn more using the Espresso Cheat sheet below:

[Image: Espresso cheat sheet]

Perfecto offers support for the Android Studio IDE as well as the ability to install and launch an Espresso test suite (APK) on real devices in the cloud, across various locations and user conditions. For more information, please refer to the Perfecto Community and search for "Android Studio" or "Espresso."

Joe Colantonio’s Test Talk: Mobile Testing Coverage Optimization

How does a company nowadays put together a comprehensive test strategy for delivering high-quality experiences for its applications on any device? This is the question I get asked most frequently, and it is the biggest challenge in today's market: how to tackle mobile and responsive web testing. The solution can be the difference between an app rated 1 star and an app rated 5 stars.


I had a lot of fun talking to Joe Colantonio from Test Talks about how to create a successful app, starting with my Digital Test Coverage Optimizer. Listen to the full talk to hear my ideas on moving from manual testing to automation, tracking the mobile market, the difference between testing on simulators and emulators versus real devices, and more.

https://joecolantonio.com/testtalks/110-mobile-testing-coverage-optimization-eran-kinsbruner/

 


#30DaysofTesting – Day 8 Reporting

As the #30daysoftesting challenge continues, today I decided to put the famous native iOS LinkedIn mobile app through some exploratory testing, using my iPhone 6 Plus running iOS 9.X.

[Screenshot: LinkedIn app on iPhone 6 Plus]

Today’s challenge was about finding up to 5 different defects and reporting them back to the app vendor.

Here are my findings:

  • Searching through the contact list (mine exceeds 2,500 connections) is simply unusable, since the A–Z side bar is not proportional to the page size, so trying to filter by letter (e.g. "K") is very hard

[Screenshot: A–Z contact search side bar]

  • The app crashed twice when entering a long string of characters into the search bars, whether searching for contacts/groups or for messages
  • Sharing a message from the app DOES NOT WORK. You can only share an update from the app's main screen, not a message.

[Screenshot: message sharing defect]

  • A redundant "Close Page" link appears in various EULA/privacy web pages accessible through the settings screen (Privacy Policy, User Agreement, EULA)

[Screenshot: redundant "Close Page" link]

 

I've reported these defects back to LinkedIn; below is their confirmation email – I'm still pending their response.

[Screenshot: LinkedIn confirmation email]

Happy #30DaysofTesting to all of you!

 

Eran

Responsive Web: Test for the Real User Experience

One of the great benefits of building a responsive web design (RWD) site is that it can give the user a consistent web experience across any digital device, in any location.

Related Post: Responsive Web and Adaptive Web: Pros and Cons

When it comes to RWD testing, it's important to test navigation and functionality on desktop web browsers and mobile devices, but that alone is not enough to guarantee a consistent user experience at all times. The end user is constantly moving between environments throughout the day, and these environments have various attributes, including:

  1. Network conditions (Poor, good, no network)
  2. Locations
  3. App context based on platform and location
  4. Background activities (apps running and consuming resources)
  5. Ads and other popups that block your site content (see image below)

[Screenshot: ad popup blocking site content]

With so many real user environments to consider for both mobile and desktop web, testing teams should include user conditions in their RWD test plan, on top of the traditional testing for UI, navigation, functionality and client-side performance. It will give your DevTest team peace of mind and reduce quality risks significantly.

To learn more, check out our new Responsive Web Testing Guide.


Responsive Web: Five Testing Considerations

With more and more consumers expecting to shop, bank, work and socialize across different devices, organizations are embracing responsive web design (RWD) as a tool to help them deliver a consistent digital experience on every screen.

[Chart: Growth of cross-device transactions (Source: Criteo's State of Mobile Commerce Report)]

But due to the complexity of digital environments and user experiences, responsive web is easier said than done. Organizations that develop RWD sites often face challenges when testing to assure smooth website navigation and a great user experience across multiple devices and platforms.

For more information, read our new Comprehensive Guide to Building a Responsive Web Testing Strategy

To get there, we recommend including the following five building blocks as part of your RWD test plan.
[Image: the five building blocks of an RWD test plan]

Testing for these five areas will help achieve sufficient test coverage, a great user experience and higher traffic to your site.

To download the complete guide to testing RWD sites, go here.


The Importance of User Conditions in Web App Testing

Usually when talking about web testing, QA teams tend to stick to functional, layout and performance testing. In this blog we highlight the importance of testing web apps from the end-user perspective, and of ensuring that this experience is also covered in the test plans.

In today's competitive landscape, many web apps are exposed to a lot of interruptions, whether from web add-ons or, in the majority of cases, adware popups, which often interrupt the user flows within the web page or even cover existing and important content on your page.

[Screenshot: adware popup covering web content (Source: Mashable)]


In cases like those mentioned above, dev and test teams need to assess the impact of popups, from address-bar security prompts and other add-on interruptions to larger ad popups covering your web content. As seen above, sometimes even blocking the ads will not solve the problem, since the ad vendor will sense that a blocker is on and pop up a different screen.

As a supporting fact for the pain ads cause website owners, reports show more than 5.5 million downloads of the Adblock Plus tool for the Firefox browser over just the last 30 days.

As a quick tip from this blog: enhance your web test plan beyond functionality and cover environment conditions as well.