The Importance of Software Testing for Businesses

Guest Blog from: Maverick Jones
Maverick Jones is a full-time geek and tech enthusiast. He likes to share his bylines and loves to gain audience attention.

Table of Contents

  • Introduction
  • What is Software Testing?
  • Types of Software Testing
  • 7 Reasons Why Software Testing Is Important for Businesses
  • Conclusion

In today's digital world, software reaches a larger audience than ever before. It enhances business prospects, raises living standards, delivers services to large audiences in less time, and powers a smarter world. Software also has an industrial impact and plays an important role in nearly every walk of life. All of this, however, depends on the software functioning smoothly and without failure, because software failures can shatter a business. Testing any software is therefore a must.

Software testing ensures that the software performs adequately and satisfies its users. Testers make sure that performance is not sacrificed; to that end, they run various tests using both manual and automated testing approaches.

In this blog, we will learn more about software testing and see how this concept enables business software to work smoothly.

What is Software Testing?

Software testing is a critical process in which testers analyze and evaluate an application to determine whether it meets its Business Requirement Specifications. It is a continuous process, popularly known as the Software Testing Life Cycle, that runs alongside the software development life cycle.

Software testing verifies each piece of functionality at every stage and validates the app's performance against the requirements. When it comes to checking software thoroughly and delivering it bug-free, the testing approach helps improve an app's functionality and usability. Using different types of testing approaches, software development companies test applications and make them safer for their clients.

Types of Software Testing

Software applications can be verified and validated in two ways, by applying two different types of testing: automation testing and manual testing.

Automation testing is an approach in which testers write scripts and rely on external software to run the tests. Even manually tested software is often run through automation tools when testers want to double-check the product's quality. This type of testing offers a high level of accuracy and cost-efficiency, and it also saves a great deal of time.

Manual testing, as the name suggests, is quite different: it is the simpler process of testing the software by hand. It doesn't require any automated testing tool; instead, the tester works through the software directly and performs the testing process, detecting errors and bugs by stepping through the different levels of the application.
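As a minimal illustration of the automated approach (the page URL and element locators below are hypothetical), an automated check in Java with Selenium might look like this:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Navigate to the (hypothetical) application under test
            driver.get("https://example.com/login");

            // Perform the same steps a manual tester would execute by hand
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();

            // Verify the expected outcome instead of eyeballing the screen
            String title = driver.getTitle();
            if (!title.contains("Dashboard")) {
                throw new AssertionError("Login failed, unexpected title: " + title);
            }
            System.out.println("Login test passed");
        } finally {
            driver.quit();
        }
    }
}
```

The same steps, scripted once, can then be repeated on every build at no extra effort, which is exactly where automation earns its accuracy and time savings.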

7 Reasons Why Software Testing Is Important for Businesses

Here are some of the top reasons that show the importance of software testing –

1.   Security

Security is one of the most important reasons for software testing, and addressing it may be the smartest choice of all. It is a crucial and sensitive area, because a vulnerable application can allow a company's important details and information to be stolen. Thorough security testing gives people a reason to believe in the quality of the software and ensures that the product is safe to use.

2.   Customer Satisfaction

The main objective of product owners is to provide the best services and the highest customer satisfaction. Adopting a software testing process is essential to delivering that quality, as it ensures a polished user experience. When a software development company builds software for its clients with a solid testing approach, it earns a reputation with clients for reliability.

Therefore, to earn praise from clients and offer them a system that delivers long-term benefits, a sound software testing approach is very important. A properly tested and secured system earns the client's trust easily and is also safer for transactions.

3.   Product Quality

To bring a product to life in a secure manner, it must be built to its requirements. The product you offer clients must serve them in one way or another, and it must always deliver the value promised. To make sure it functions correctly and offers an effective customer experience, testing the product before its launch is necessary.

Every testing team must check device compatibility before launching a product or application. For example, when you plan to launch software, verifying that it is compatible with different devices and operating systems is a must; this compatibility reflects the quality of the software.
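As a sketch of what such a compatibility check can look like (with two browsers standing in for the broader device/OS matrix; the URL is an assumption), a TestNG data provider can run the same test across several targets:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class CompatibilityTest {

    // Each row represents one target platform for the same scenario
    @DataProvider(name = "browsers")
    public Object[][] browsers() {
        return new Object[][] {{"chrome"}, {"firefox"}};
    }

    @Test(dataProvider = "browsers")
    public void homePageLoads(String browser) {
        WebDriver driver = browser.equals("chrome") ? new ChromeDriver() : new FirefoxDriver();
        try {
            driver.get("https://example.com"); // hypothetical app under test
            Assert.assertTrue(driver.getTitle().length() > 0,
                    "Page failed to load on " + browser);
        } finally {
            driver.quit();
        }
    }
}
```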

4.   Saves Money

Software testing comes with various benefits, and one of them is cost-effectiveness; this is a main reason organizations opt for software testing services. A software project runs through many stages, and when testers find bugs at an early stage, fixing them doesn't cost much. Early testing has therefore become a prerequisite.

That is why business owners hire testers or quality analysts with strong experience and thorough knowledge of their projects.

5.   Determines Software Performance

When an application or piece of software performs poorly, its reputation in the market suffers. Users stop using the application and even stop trusting the company that offers such products, and eventually the company's reputation suffers as well. Therefore, no application or software should ever be introduced to the market without testing.

Only once the software has been tested and its performance verified should it be launched. Even after launch, continuous testing is necessary to keep the product's reputation intact, because if performance degrades, everything else goes to waste. Software testing is therefore the best way to determine and safeguard software performance.

6.   Easy to Add New Features

When code is old and tightly interconnected, it becomes difficult to update. Tests counteract this calcification and enable developers to add new features with confidence. For a developer who is new to the codebase, changing older parts of the code can be frightening; this is where test results come in, offering detailed information about which parts of the code are broken.

This process enables the business owners to make their products stand out in the market.

7.   Enhances Development Process

With the help of Quality Assurance, software testing service providers can uncover a wide array of errors and scenarios. Once they have detailed information about the bugs, developers can fix them in very little time. To keep the development process this smooth, software testers should work in parallel with the development team to accelerate the procedure.

Conclusion

As seen in this blog, software testing is of high value in the world of software development. It is a thorough process that ensures the quality of the product, and it also delivers better usability, enhanced functionality, and reduced maintenance costs. Therefore, whenever you hire a software development company for your project, make sure the team includes expert software testers who can use the latest tools and techniques to secure your product.


Code-Based Test Automation vs. Codeless Automation


As more advanced technologies powered by AI/ML enter the continuous testing landscape, organizations, and especially practitioners, are debating which approach is better, and why, if at all, they should adopt codeless test authoring solutions.

In this blog, I will walk through the various considerations for switching between, and/or combining, the two test automation methods.

TL;DR –> There isn't a magic answer to this debate, and neither method is simply good or bad.

Top Considerations

To better address the question of when and why to use either method, here are the top items to consider. They are not listed by importance, since each team will relate to different objectives and priorities:

  • What are the application use cases and flows to automate?
  • Which persona is going to create and maintain these scenarios?
  • What is the skill set within the team/individuals for the job?
  • Is the app under test mobile/web/responsive/desktop?
  • What are the time constraints for the project, and what is the release cadence going forward (weekly/monthly)?
  • Is the test suite meant to be integrated with other tools (CI/CD/frameworks)?
  • Are there any advanced automation scenarios (chatbots, IoT, location, audio, etc.)?
  • What are the cost boundaries (tools, project, lab, etc.)?
  • Is the test suite meant to be executed at large scale?
  • Is the project a new one, or an additional layer of coverage on top of an existing code-based suite (Selenium/Appium, etc.)?

Diving Deeper

Now that we've listed some important considerations, let's explore them a bit more deeply.

For teams that are already working on a project (web/mobile) and have a solid body of code-based test scenarios embedded into their processes, CI/CD, and other triggers, the question deserves heavy consideration: what is the incentive to change? Are there coverage gaps in the code-based suite? Is there too much noise and flakiness attached to the existing test code? Only if there is a good incentive like these should teams consider adding codeless test scenarios to their pipeline.

On the other hand, for teams that are just starting a new project, it is the perfect time to look at the entire team's skills and decide, based on the technology the project is built on, which tools to use. It might make sense, for example, for a newly created website to combine a Selenium framework, led by SDETs who are Java/JavaScript developers, with business testers who can take some of the load off them via ML-driven codeless Selenium tools.

After covering use cases, the quality of the existing test suite, and new vs. existing projects, let's also consider the time frames and budget allocated to the project. Recording a codeless script is, on average, 6-10 times faster than coding the same scenario in Java or another development language, which involves setting up the platform and test environment, coding, debugging, executing at scale, adding assertions, and more. This obviously also translates into cost savings. On the other hand, not all test scenarios can be easily recorded; for some advanced flows, coding may be the better approach and the easier one to maintain over time. This is why it is often better to look at the job to be done before rushing into scripting.

Code and Codeless Creation is Key for High Test Automation Coverage

Next in the overall debate is the ecosystem and tools landscape within the organization. Introducing a new technology is not easy, is often not well accepted, and is not always justified. In today's reality, where squad teams work together and consist of a variety of resources with varying skills, objectives, and preferences, integrating a new technology should be done with proper consideration and with proper validation that it "plays nice" with the existing tools. Codeless tools, in that context, should fill an important gap within the team, integrate well into existing CI/CD and other processes, and not cause duplication of effort or additional overhead.

Lastly for this blog (not for the entire debate), I would touch on the cost of maintaining test automation scripts. This is perhaps one of the most problematic items for any test automation team. Writing a script once and keeping it running across platforms over time is easier said than done. Applications are constantly changing, and so are the platforms under test (mobile devices, OS versions, browsers); therefore, scripts need to be properly maintained to ensure a clean, noise-free pipeline. Codeless tools in many ways address this challenge through self-healing of elements, test steps, and more. The same can be achieved within code-based projects through advanced reporting and analysis, automated root-cause analysis, and other methods, but this is where codeless shines the most.

Perfecto/TestCraft Object Self-Healing and Weight Mechanism (ML)
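To illustrate the core idea behind self-healing (a deliberately simplified sketch of my own, not how any specific vendor implements it), a code-based suite can approximate it by trying an ordered list of alternative locators:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

import java.util.List;

public class HealingLocator {
    /**
     * Tries each candidate locator in order and returns the first match.
     * Real codeless/ML tools go much further, scoring and re-weighting
     * locators based on historical runs; this shows only the fallback idea.
     */
    public static WebElement find(WebDriver driver, List<By> candidates) {
        for (By locator : candidates) {
            try {
                return driver.findElement(locator);
            } catch (NoSuchElementException ignored) {
                // Element moved or changed; fall through to the next strategy
            }
        }
        throw new NoSuchElementException("No candidate locator matched: " + candidates);
    }
}
```

A call like `find(driver, List.of(By.id("login"), By.name("login"), By.cssSelector("button[type=submit]")))` then survives minor UI changes that would break a single hard-coded locator.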

Bottom Line

I tried to keep this blog short and to the point, and to leave the decision between the two methods to practitioners. As written in this post, there are plenty of questions to address before adopting a codeless tool and deciding how to combine it with existing code-based suites. Combining both methods is, in my honest opinion, the way forward, and the way to maximize overall test automation coverage with greater efficiency across teams. Make the decision that fits the project now, and also in the long run.

I will be running a live webinar covering the exact blog topic hosted by Shani Shoham from 21. Feel free to register here

Mobile Testing Coverage Strategy: OS Families vs. Minor OS Versions?

As the author of the Factors coverage magazine, I am often asked about the operating system (OS) testing strategy.

Some teams state that testing on the latest, latest-1, and latest-2 versions for Android is sufficient, and that for iOS, testing on the latest and latest-1 satisfies their coverage risks. I would argue it is not about covering specific minor OS versions, but about sufficiently covering the representative OS families. In this post, I will explain my recommended strategy.

Mobile OS Versions Market Share

As of late September 2017, the Android platform is led by five top OS families (4.x, 5.x, 6.x, 7.x, and the latest, 8.x).

In fact, Android 4.x includes two families, Jelly Bean and KitKat, with the latter being the more widely used and popular.

OEMs are very late in updating their smartphones to the latest OS version, and when they do, they update only the most recent models, not the legacy ones that, in many cases, still cover a large user base.

For Android, as we can see above, any app developer that supports the global market cannot disregard a market share like the 15.1% that KitKat holds, and KitKat is far from being a latest-2 or even latest-4 OS platform. Testing an Android app therefore needs to go quite far back in the history of Android OS versions to provide sufficient, low-risk coverage.

iOS isn't that simple either. Especially after the recent iOS 11 release, the platform is now split into three major OS families: iOS 9.3.5, iOS 10.3.3, and iOS 11.0. Each family covers quite important devices that were left behind and did not receive the latest iOS update. For example, the iPhone 4S, iPad 2, and iPad Mini (1st generation) are stuck on iOS 9.3.5, while the iPhone 5C, iPhone 5, and iPad 4th generation are stuck on iOS 10.3.3.
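In practice, covering OS families rather than single minor versions means parameterizing the lab by family. Here is a hedged sketch of what that can look like with Appium in Java (the device matrix mirrors the families above; the app path and Appium server URL are assumptions):

```java
import io.appium.java_client.ios.IOSDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;

public class OsFamilyCoverage {
    public static void main(String[] args) throws Exception {
        // One representative device per iOS family discussed above
        String[][] matrix = {
            {"iPhone 4S", "9.3.5"},
            {"iPhone 5",  "10.3.3"},
            {"iPhone 7",  "11.0"},
        };

        for (String[] target : matrix) {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", "iOS");
            caps.setCapability("deviceName", target[0]);
            caps.setCapability("platformVersion", target[1]);
            caps.setCapability("app", "/path/to/app.ipa"); // hypothetical app path

            IOSDriver driver = new IOSDriver(new URL("http://localhost:4723/wd/hub"), caps);
            try {
                // Run the same smoke scenario against each OS family here
            } finally {
                driver.quit();
            }
        }
    }
}
```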


While the market-share chart above does not yet reflect iOS 11 adoption, there are still 9% of iOS 9.x users out there, and there will be at least a similar share on iOS 10.x once most users shift to iOS 11. As with Android, here too we see three major OS families that need to be included in the testing strategy.

Beta Testing

Many organizations are naive about market innovation and do not take advantage of the public beta releases and developer previews coming from both Apple and Google. Android Oreo (8.0) and iOS 11 were available for testing before their general-availability releases; however, many teams didn't leverage this, and are finding defects late in the game, in many cases hearing about them from their customers.

Above is a reported iOS 11 GA bug, while below is an Android Oreo-specific defect that impacted many end users.


Recommendations

Each customer has its own unique user base, target geographies, and, in many cases, access to user analytics.

Customers should follow these guidelines:

  • Monitor your user base and build a test lab that addresses your top users' device/OS permutations (use your own analytics)
  • Learn and monitor market trends like those above, so that in addition to your own analytics you cover the major OS families (a 5-10% or greater market share should be strongly considered)
  • Testing on public OS betas is a must; Google Pixel devices, iOS simulators, and desktop web beta versions are always available for testing from day one
  • Mix device manufacturers with the selected OS families to maximize coverage (e.g., Samsung XX/Android 6.x, LG XX/Android 5.1.x, etc.)
  • Cover real-environment testing to identify real-life glitches like those mentioned above, which we often see in the market


Happy Testing!


Complementing Cross-Browser Testing with Headless Unit Testing Solutions

Nothing is new in the land of cross-browser testing: Selenium as the underlying API layer serves leading frameworks including WebdriverIO, Protractor (Angular-based testing), NightwatchJS, RobotJS, and many others.

For web application developers who need fast feedback after a code commit or bug fix, there are various testing options. Some will quickly test manually on a set of local or cloud-based VMs, some will develop unit tests (QUnit, etc.), but there are also very mature cross-browser testing solutions that add more layers of coverage and insight in an automated, easy way.

In a recent eBook that I developed, I cover 10 emerging cross-browser testing tools, with a set of considerations around how to choose the right one, or the right mix of them.

As can be seen in the 10 tools shown above, there is a mix of unit as well as E2E functional testing tools, mostly JavaScript-based.

Developers who would like their quick post-commit sanity to include validation of site load time can easily add a PhantomJS-based test to their CI post-build acceptance stage, get that visibility after each successful build, match the result against a benchmark, and make decisions accordingly.
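The original PhantomJS snippet isn't reproduced here; as a rough equivalent of the same load-time check, here is a sketch in Java with Selenium and headless Chrome (a substitute technique, not the original script; the 10-second budget is an assumption taken from the test described below):

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class LoadTimeCheck {
    public static void main(String[] args) {
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless"); // run without a visible browser, CI-friendly

        WebDriver driver = new ChromeDriver(options);
        try {
            long start = System.currentTimeMillis();
            driver.get("https://www.nfl.com"); // the site used in the test below
            long elapsed = System.currentTimeMillis() - start;

            System.out.println("Page loaded in " + elapsed + " ms");
            // Compare against a benchmark and fail the build when exceeded
            if (elapsed > 10_000) {
                throw new AssertionError("Load time exceeded 10s budget: " + elapsed + " ms");
            }
        } finally {
            driver.quit();
        }
    }
}
```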

In a quick test that I ran on the NFL.com website, I was able not only to detect a slow load of 10 seconds, but also to identify a long list of errors thrown while the page loads.

Another powerful capability that tools like PhantomJS offer is the ability to capture the rendering of a web page at a pre-defined viewport, as well as to generate a page HAR file for network traffic analysis. (I am aware that PhantomJS is not the newest tool and that Google already provides a newer alternative, but it is still a valuable, free open-source tool that can add coverage capabilities to any web development team.)

So if, for example, the load time and errors above turn on a red light for that site, then with two simple tests, which, by the way, PhantomJS provides in its Git starter kit, the developer can address the two use cases above: HAR file generation and page-rendering screenshots.

The result of the above snippet is the screenshot below:

The HAR file creation, based on the following Git code sample, produces the result below (I am using the HTTP Archive Viewer extension for Chrome, but it can be done just as easily with other HAR viewers):

Bottom line

You can download my latest eBook to learn more, but in general: leverage both powerful unit testing tools and traditional E2E tests, since they complement each other and each adds its own unique value. And it's free!

Happy Testing!

Recent Web Browser Quality Related Innovations

Yea, I know that my blog title is mobiletestingblog, but that’s not a mistake in the title 🙂

When it comes to web apps, there is no longer any distinction around which platform is used to consume content, whether it's a smartphone, a tablet, or a desktop browser.

If your company develops a web app or responsive website, it ought to be tested thoroughly against all of the above platforms. The majority of web traffic today, by the way, comes from mobile devices.

In general, it is good to know that in desktop browser market share, less familiar players such as the UC Browser by Alibaba and the Samsung Internet browser hold a nice chunk of the market (globally), so leaving them out of your test coverage matrix might not be a good strategy.

Source: http://gs.statcounter.com/browser-market-share

In general, the below is the formula for web testing that I would recommend these days; however, if your web traffic analysis and supported geographies require you to target China, Europe, or elsewhere, then the metrics above should be added to the mix, either in addition to the below or as an adjustment.

With that in mind, I want to highlight in this post some recent web-specific tools that are out there, free, and extremely useful for both developers and testers.

In Google Chrome 59 (the beta is already available today!), Google introduces a new built-in code-coverage tool that allows both developers and testers to record on-screen activity and report back, in a nice dashboard, how much of the site's content (JavaScript and more) was actually executed, all in the aim of optimizing website quality, performance, and much more.

From a user perspective, you only need to enable the Code Coverage option within Chrome's developer tools, after which it is added under the Sources menu, as seen below.

Once that is done, simply start capturing code coverage by clicking the Record button to get an output like the one below – simple, valuable, and unfortunately only available as a free built-in solution in this browser, unlike Firefox/Safari and others 😦

I used this new tool on the Geico.com responsive site and nearly completed the most common transaction of requesting a new car insurance quote. At the end of the recording, I received the chart below, which, as you'll see, shows that only ~60% of the site's JavaScript code was used in this journey.

When drilling down into a specific .js source file, you can see the source highlighted in green/red where it is actually used and unused – this is what your web developers need to see so they can optimize wisely.

Let's also look at a key feature that was recently introduced in Firefox and can be useful for both developers and testers.

Two weeks ago, Mozilla released Firefox 53, the first step in a new project called Quantum that aims to enhance performance, stability, and more.

Among the innovations in that release are compact themes, usability features like estimated reading time for a page, a new permissions model (see below), faster performance, and a few other bug fixes for stability.


Detailed release notes on FF 53 can be found here: https://www.mozilla.org/en-US/firefox/53.0/releasenotes/.

In addition to the newly introduced features, and in case you're not aware of them, Firefox offers quite useful developer tools, including an object inspector, performance monitor, debugger, and network monitor, that can also enhance your overall web dev and test activities (see examples below).

Performance Monitoring Tools From Within FireFox Developer Tools

Network Monitoring Options From Within FireFox Developer Tools

Bottom Line

With Chrome and Firefox being the leading desktop and mobile browsers, it is very important for web teams to continuously monitor early releases from Google and Mozilla, and to validate against the first Beta or Dev branches as soon as they are available. Doing so can not only reveal regressions earlier but, as mentioned in this post, may also offer you new productivity tools that add value to your overall dev and test activities.

Introducing Reporting Test Driven Development (RTDD)

In the era of "[...] Driven Development" trends like BDD, TDD, and ATDD, it is also important to keep in mind the end goal of testing: the quality analysis phase.

In many of my engagements with customers, and from my own practitioner experience, I constantly hear the following pains:

  1. Test executions are not broken down by context, and are therefore too long to analyze and triage
  2. Planning test executions based on trends, experience, and insights is a challenge – e.g., which tests find more bugs than others?
  3. Dealing with flaky tests is an ongoing pain, especially around mobile apps and platforms
  4. On-demand quality dashboards that reflect app quality per CI job, per app build, per functional area tested, etc.


Introducing Reporting Test Driven Development (RTDD)

In an attempt to address the pains above (and I'm sure they are not the only ones), I came to the understanding that if Agile/DevOps teams start thinking about their test authoring and implementation with the end in mind – that is, the test reports – they can collect the value at the end of each test cycle, as well as earlier, during the test planning phase.

When teams leverage a test design pattern that assigns their tests custom contextual tags – wrapping an entire test execution or a single test scenario with annotations like "Regression", "Login", "Search", and so forth – the test suites suddenly become better structured and more easily maintained, and can be included, excluded, and filtered at the end of each execution.

In addition, when the entire suite is customized by tags and annotations, management teams can easily retrieve on-demand quality dashboards and stay up to date with any given software iteration.

Finally, developers who get the defect reports after execution can filter and drill down to the root cause in an easier, more efficient manner.

If you think about it, using annotations as a method to manage test executions and filter them is not a new concept.

TestNG Annotations with Selenium Example (source: Guru99)
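The referenced example isn't reproduced here; a minimal sketch of the kind of TestNG tagging it shows (the group names are my own) could be:

```java
import org.testng.annotations.Test;

public class CheckoutTests {

    // Groups act as contextual tags: runs can include or exclude by tag
    @Test(groups = {"Regression", "Login"}, priority = 1)
    public void validLogin() {
        // ... drive the login flow with Selenium
    }

    @Test(groups = {"Regression", "Search"}, priority = 2)
    public void searchReturnsResults() {
        // ... search for a product and assert on the result list
    }

    @Test(groups = {"Sanity"})
    public void homePageLoads() {
        // ... lightweight smoke check, suitable for every build
    }
}
```

A testng.xml suite can then pull in only the `Regression` group for nightly runs, or only `Sanity` for per-commit runs.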

As the example shows, there are supported ways to tag specific tests by priority and group; it is just a matter of thinking about such tags from the beginning.

Reverse-engineering tags into a large existing test suite is painful, hard to justify, and most often too late, since by then the product is already out there and the teams are left struggling with the four consequences mentioned above.

RTDD is all about putting structure, governance, and advanced capabilities into your test automation factory.

Consider the following table, which divides the various tags into three levels; it can serve as a reference that can be used immediately, either through the built-in tagging and annotations of TestNG or through other reporting solutions.

With that table in mind, think about an existing test suite that you recently developed. Now picture the same suite, tag-based according to these three categories (a code sketch follows the list):

  1. Execution level tags
    1. This tag can encapsulate the entire build or CI-job-related testing activity, or differentiate tests by the framework in which the scripts were developed. This is the highest classification level of tags you would use.
  2. Test suite level tags
    1. This is where you start breaking your test factory down by more specific identifiers, such as the mobile environment or the high-level functionality under test.
  3. Logical test level tags
    1. These are the most granular tag identifiers, defined per logical test step, making it easy to filter results, triage failures, and plan ongoing regressions based on code changes.
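As a sketch of how the three levels can map onto TestNG (the tag names are illustrative), class-level groups can carry the execution- and suite-level tags, while method-level groups carry the logical ones:

```java
import org.testng.annotations.Test;

// Execution-level and suite-level tags applied to every test in the class
@Test(groups = {"ci-nightly-build", "mobile-android", "payments"})
public class PaymentsRegressionTests {

    // Logical test-level tags narrow down to the specific flow and variant
    @Test(groups = {"login", "happy-path"})
    public void payWithStoredCard() {
        // ... scenario steps
    }

    @Test(groups = {"login", "negative"})
    public void declinedCardShowsError() {
        // ... scenario steps
    }
}
```

In TestNG, method-level groups are merged with the class-level ones, so each test ends up carrying its full three-level context.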

As a reference implementation of an RTDD solution – in addition to a basic TestNG implementation, which can be very powerful when used correctly with its listeners, predefined tags, and more – I would refer you to an open-source reporting SDK that enables you to do exactly what is described in this post.

When using such an SDK with your mobile or responsive web test suites, you get both the dashboards seen below and faster defect resolution that drills down by both test case and platform under test.

Code Sample: Using the Geico RWD Site with the Reporting TDD SDK (Source: My Personal Git)
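The code sample itself isn't reproduced here; as a rough sketch of the pattern, based on my reading of Perfecto's open-source Reportium SDK (treat the class and method names as assumptions and check the SDK docs), it looks something like this:

```java
import com.perfecto.reportium.client.ReportiumClient;
import com.perfecto.reportium.client.ReportiumClientFactory;
import com.perfecto.reportium.model.PerfectoExecutionContext;
import com.perfecto.reportium.model.Project;
import com.perfecto.reportium.test.TestContext;
import com.perfecto.reportium.test.result.TestResultFactory;
import org.openqa.selenium.WebDriver;

public class ReportingExample {
    public void run(WebDriver driver) {
        // Execution-level context: project plus execution-wide tags
        PerfectoExecutionContext context = new PerfectoExecutionContext.PerfectoExecutionContextBuilder()
                .withProject(new Project("Geico RWD", "1.0"))
                .withContextTags("Regression")
                .withWebDriver(driver)
                .build();
        ReportiumClient reportium = new ReportiumClientFactory()
                .createPerfectoReportiumClient(context);

        // Test-level tags travel with this specific scenario
        reportium.testStart("Get auto insurance quote", new TestContext("login", "quote"));
        try {
            reportium.stepStart("Open Geico home page");
            driver.get("https://www.geico.com");
            reportium.stepEnd();

            // ... further tagged steps ...

            reportium.testStop(TestResultFactory.createSuccess());
        } catch (RuntimeException e) {
            reportium.testStop(TestResultFactory.createFailure("Flow failed", e));
            throw e;
        }
    }
}
```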


Digital Dashboard Example With Predefined ContextTags (source: Perfecto)


Bottom Line

What I have documented above should allow managers, test automation engineers, and developers of UI/unit and other CI-related tests to extend a legacy test report, a TestNG report, or another format into a more customizable one that, as demonstrated above, helps them achieve the following outcomes:

  • Better-structured test scenarios and test suites
  • Use tagging from early test authoring as a method for faster triage and fix prioritization
  • Shift tag-based tests into planned test activities (CI, regression, specific functional-area testing, etc.)
  • Easily filter big test data and drill down into specific failures per test, per platform, per test result, or by group
  • Eliminate flaky tests through high-quality visibility into failures

The result of the above is a methodological RTDD workflow that can be maintained far more easily than before.

Happy Testing (as always)!

Criteria for Choosing the Right Open-Source Test Automation Tools

Last night, together with my colleague Amir Rozenberg, I presented a session at a local Boston meetup hosted by BlazeMeter.

The subject was the shift from legacy to open-source frameworks, the motivations behind it, and the challenges of adopting open source without a clear strategy, especially in the digital space, which spans three layers:

  1. Open-source connectivity to a lab
  2. Open-source test coverage capabilities (e.g., can an open-source framework support system-level testing, visual analysis, real environment settings, and more?)
  3. Open-source reporting and analysis capabilities

During the session, Amir also presented Quantum (http://projectquantom.io), an open-source BDD/Cucumber-based test framework.

Full presentation slides can be found here:

Happy Reading

Eran & Amir

Model-Based Testing and Test Impact Analysis

In my previous blogs and over the years, I have already described how complicated, demanding, and challenging the mobile space is. It therefore seems obvious that there needs to be a structured method for building test automation and meeting test coverage goals for mobile apps.

While there are various tools and techniques, in this blog I would like to focus on a methodology that has been around for a while but has never been adopted in a serious, scalable way by organizations, both because it is extremely hard to accomplish and because there are not enough non-proprietary, open-source tools that support it.

First things first: let's define what Model-Based Testing is.

Model-based testing (MBT) is an application of model-based design for designing, and optionally also executing, artifacts to perform software testing or system testing. Models can be used to represent the desired behavior of a System Under Test (SUT), or to represent testing strategies and a test environment.


Fig 1: MBT workflow example. Source: Wikipedia

In the context of mobile, an end-user workflow in an application will usually start with logging in to the app, performing an action, going back, performing a secondary action, and often even a third action based on the output of the second. Complex, huh?
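To make this concrete, here is a toy sketch (entirely my own, not taken from any MBT product) of how such a flow can be modeled as a directed graph and traversed to enumerate candidate test paths:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AppFlowModel {
    // Each screen/state maps to the screens reachable from it (the model)
    private static final Map<String, List<String>> MODEL = new LinkedHashMap<>();
    static {
        MODEL.put("Login", List.of("Home"));
        MODEL.put("Home", List.of("Search", "Profile"));
        MODEL.put("Search", List.of("Home", "ItemDetails"));
        MODEL.put("Profile", List.of("Home"));
        MODEL.put("ItemDetails", List.of());
    }

    // Depth-first enumeration of simple paths = candidate test cases
    static void walk(String state, List<String> path, List<List<String>> out) {
        path.add(state);
        boolean extended = false;
        for (String next : MODEL.getOrDefault(state, List.of())) {
            if (!path.contains(next)) { // avoid revisiting screens (cycles)
                walk(next, new ArrayList<>(path), out);
                extended = true;
            }
        }
        if (!extended) out.add(path); // dead end: a complete user journey
    }

    public static void main(String[] args) {
        List<List<String>> journeys = new ArrayList<>();
        walk("Login", new ArrayList<>(), journeys);
        journeys.forEach(j -> System.out.println(String.join(" -> ", j)));
    }
}
```

Real MBT tools do far more (they learn the model from the app and optimize the generated paths), but the output is conceptually this: a set of journeys like `Login -> Home -> Search -> ItemDetails` that become test scenarios.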

Modeling a mobile application serves a few business goals:

  1. App release velocity
  2. App testing coverage (use cases)
  3. App test automation coverage (%)
  4. Overall app quality
  5. Cross-team synchronization (Dev, Test, Business)


As covered in an older blog I wrote, mobile apps behave differently and support different functionality depending on the platform they run on (e.g., not every iOS device supports Touch ID and/or the 3D Touch gesture). Therefore, being able not only to model the app and generate the right test cases, but also to match these tests across the different platforms, can be key to achieving many of goals 1-5 above.

In the market today there are various commercial tools that can assist with MBT, like CA Agile Requirements Designer, Tricentis Tosca, and others.

An example provided by one of the commercial vendors in the market (Tricentis) shows a common MBT workflow. A team aims to test an application; they scan it with the MBT tool to "learn" its use cases, capabilities, and other artifacts, and stack these into a common repository that the team can use to build test automation.


Fig 2: Tricentis Tosca MBTA tool

In Fig 2, Tricentis examines a web page to learn all of its options, objects, and other data-related items. Once the app is scanned, it can easily be converted into a flow diagram that serves as the basis for test automation scenarios.

With the above goals in mind, it is important to understand that an MBT tool serving the automation team is a good thing only if it increases the team's efficiency, its release velocity, and the overall test automation coverage. If the output of such a tool is a long list of test cases that either fails to cover the most important user flows or includes many duplicates, it does not serve the purpose of MBT; rather, it delays existing processes and adds unnecessary work for teams that are already under pressure.

In addition to the commercial tools above, there is an older but free tool called MobiGuitar that enables Android MBT with Robotium. This tool offers not just MBT capabilities but also code coverage driven by the generated test scripts.

A best practice in that regard would be to use an MBT tool that can generate all the application-related artifacts – the app object repository and the full set of use cases – and allow all of that to be exported to leading open-source test automation frameworks like Selenium, Appium, and others.

Mobile Specific MBT – Best Practices and Examples

Drilling down into the workflow CA would recommend around MBT, it looks as follows (in reality, the steps below are easier said than done for a mobile app compared to web and desktop):

  1. The business analysts create the story using tools like CA Agile Requirements Designer or similar (see more examples below)
  2. The story is then passed to an ALM tool (e.g.: CA Agile Central [formerly Rally], Jira, etc.) for project tracking
  3. Teams use the MBT tools to collaborate
    1. The automation engineer adds the automation code snippets to the nodes where needed or adds additional nodes for automation.
    2. The programmer updates the model for technical specs or more technical details.
    3. The Test Data engineer assigns test data to the model
  4. Changes to the story are synchronized with the ALM Tool
  5. Test cases are synchronized with the ALM Tool
  6. The programmer completes coding
  7. The code is promoted from Dev to QA
  8. Testing begins
    1. The tester uses the test cases with test data from MBT tools for manual test case execution
    2. The automation scripts with test data are executed for functional and regression testing

To learn more about efficient MBT solutions, practices please refer to these sources:

Mobile Testing: Difference Between BDD, ATDD/TDD

Last week I presented at Joe Colantonio's AutomationGuild online conference – kudos to Joe for a great event!


Among the multiple interesting questions I got after my session – like "what is the best test coverage for mobile projects?" and "how do you design effective non-functional and performance testing for mobile and RWD?" – I also got a question about the differences between BDD and ATDD.

My session was about an open-source test automation framework called Quantum that supports Cucumber BDD (Behavior Driven Development), which obviously triggered the question.

Definition: BDD and ATDD

ATDD – Acceptance Test Driven Development

Based on Wikipedia's definition (referenced above), ATDD is a development methodology based on communication between the business customers, the developers, and the testers. ATDD encompasses many of the same practices as specification by example, behavior-driven development (BDD), example-driven development (EDD), and support-driven development, also called story test-driven development (SDD).

All these processes aid developers and testers in understanding the customer's needs prior to implementation and allow customers to converse in their own domain language.

ATDD is closely related to test-driven development (TDD). It differs by the emphasis on developer-tester-business customer collaboration. ATDD encompasses acceptance testing, but highlights writing acceptance tests before developers begin coding.

BDD – Behavior Driven Development

Again based on Wikipedia's definition (referenced above), BDD is a software development process that emerged from test-driven development (TDD). Behavior-driven development combines the general techniques and principles of TDD with ideas from domain-driven design and object-oriented analysis and design, to provide software development and management teams with shared tools and a shared process for collaborating on software development.

Mobile Testing In the Context of BDD and ATDD

The way to look at the agile-like practices of BDD, ATDD, and TDD is from the context of higher velocity and quality requirements.

Organizations aim to release to market faster, with great quality and sufficient test coverage, while at the same time meeting business goals and customer satisfaction. To achieve these goals, teams ought to collaborate closely from the very beginning of the app design and development stages.

Once organizations have the customer's product requirements, and can start developing the product and the tests through user stories, acceptance criteria, and the like, several goals can be met:

  • High customer-vendor alignment == customer satisfaction
  • Faster time to market, with the app tested throughout the SDLC
  • Quality that is in sync with customer needs, with far fewer redundant tests
  • No communication gaps or barriers between Dev, Test, Marketing, and Management


Looking at the example below of BDD-based test automation code, it is very easy to understand the functionality and use cases under test, and the desired test outcome.

[Screenshot: Quantum BDD test scenario]

As can be seen in the screenshot above, the script installs and launches the TestApp.apk file on an available Samsung device, performs a successful login, and presses a menu item. As a final step, it performs a mobile visual validation to assert that the test reached the expected screen, which also serves as the automation anchor for the pass.
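Since the screenshot doesn't reproduce well here, here is a rough sketch of what such a Cucumber scenario and its Java step definitions can look like (the step wording and implementation hooks are my own, not Quantum's actual DSL):

```java
// Feature file (login.feature), paraphrasing the scenario described above:
//
//   Scenario: Install, launch, and log in on a Samsung device
//     Given the app "TestApp.apk" is installed and launched
//     When I log in as "testuser"
//     And I press the "Settings" menu item
//     Then I should see the "Settings" screen

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class LoginSteps {

    @Given("the app {string} is installed and launched")
    public void installAndLaunch(String apk) {
        // ... install the APK on the allocated device and launch it (e.g., via Appium)
    }

    @When("I log in as {string}")
    public void login(String user) {
        // ... fill in credentials and submit
    }

    @When("I press the {string} menu item")
    public void pressMenuItem(String item) {
        // ... tap the named menu entry
    }

    @Then("I should see the {string} screen")
    public void verifyScreen(String screen) {
        // ... visual or object-based validation that the expected screen is shown
    }
}
```

The Gherkin scenario stays readable to business stakeholders, while the step definitions hide the automation plumbing.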

It is important to mention that the frameworks and tools that support TDD, ATDD, and BDD can in many cases be the same; in our case above, one can develop and test from either a BDD or an ATDD standpoint using a Cucumber-based test automation framework (Cucumber, Quantum).

If we compare the functional use case above – a "Scenario," in Cucumber language – to one that fits an ATDD-based approach, we would most likely need to introduce the well-known "3 amigos" approach: the three perspectives of customer (what problem are we trying to solve?), development (how might we solve this problem?), and testing (what about...?).


Since a true ATDD best practice defines Gherkin-like app scenarios before development even starts, the BDD example above would serve as a precondition test for the app development team, ensuring they develop against acceptance criteria – in our example, a successful app install and login.

An additional example of an acceptance test, one that also involves a login/register layer, would look like this:

[Slide: acceptance test example with login/register – "Effective Testing Practices in an Agile Environment"]

I can understand the confusion between BDD and ATDD since, as mentioned above, they can look very much alike.

Bottom line, and as I answered at the event last week: BDD, ATDD, and TDD are all methods to better sync the various counterparts involved in shipping a working product to market faster, with higher quality, and with the right functionality to meet customer requirements. Implementing them using the Gherkin method makes a lot of sense, thanks to the easy alignment and common language it gives these counterparts throughout the SDLC workflow.

Happy Testing!

What You Need To Know When Planning Your Test Lab in 2017

As we kick off 2017, I am thrilled to release the 6th and most up-to-date edition of the Digital Test Coverage Index report, a guide to help you decide how to build your test lab. 2016 was an exciting year in the digital space, and as usual, Q4 market movement is sure to impact 2017 development and testing plans. The market shows no signs of slowing down, with continued innovation expected this year. In this post, I will summarize the key insights from last quarter, as well as a few important projections for 2017 that should inform how you build your test lab.


Key Takeaways

  • Beta OS versions remain an important part of your test coverage strategy. With Apple releasing 5 different minor versions of iOS 10 since its release in September 2016, an iPhone on the iOS 10 beta is a "must-include" device/OS combination for your test lab. On the browser side, Chrome and Firefox beta versions are also critical test targets for sustaining the quality of your mobile web/responsive websites.
  • The Android fragmentation trend is changing, with Google putting pressure on device manufacturers to keep pace with the latest OS versions. As evidence, Android 6.x already has the greatest market share as of Q4 2016, with roughly 27%, followed by Android Lollipop. With Google releasing its first Pixel devices, the market is already starting to see a boost in Android 7 Nougat adoption, which is expected to grow to a 2-5% market share within Q1 2017.
  • Galaxy S7 and S7 Edge were a turning point for Samsung: Over the last year, Samsung saw a revenue slowdown due, in part, to competition from both Apple and the emerging Android manufacturers OnePlus, Xiaomi, and Huawei. With the launch of the S7 and S7 Edge, the company is regaining its position. We can see in this edition of the Index (and the previous one) that Samsung is the leading brand in many countries, which should shape test coverage plans in Brazil, India, the Netherlands, the UK, Germany, and the U.S.
  • Mobile app engagement methods are evolving, with various enterprises counting on the mobile platform to drive more revenue and attract more users. We are seeing greater adoption of external application integration, either through dedicated OS-level applications like iOS iMessage or through solutions like the Google app shortcuts recently introduced in Android 7.1. These changes represent a testing challenge, since there are now additional outside-of-app dependencies that Dev and QA teams need to manage.
  • Test lab size is expected to grow slightly YoY as the market matures: Looking at the annual growth projection below, we see slight growth in the need for 10-, 25-, and 32-device labs, based on new devices being introduced into the market faster than old devices are retired. We see around 15 leading devices introduced per year, with an average of 5-7 retired per year (due to decreased usage, terminated vendor support, etc.). Integrating these numbers into the 30%-80% model yields the annual growth demonstrated in the following graph.

[Graph: projected annual test lab growth]


2017 Trends

As this is the first Index of 2017, here are the most important market events that will impact Dev and QA teams in the digital space, in the categories of mobile, web, or both.

New Players

The most significant player joining the mobile space in 2017 is Nokia. After struggling for many years to stay relevant, and failing under the Windows Phone brand, Nokia is now back in the game with a new series of Android-based devices expected to be introduced at MWC 2017. A second player set to penetrate the mobile market is Microsoft, which is expected to introduce the first Microsoft Surface Phone during H1 2017.

Innovative Technologies

During 2017 we will certainly continue to see more IoT devices, smartwatches, and additional features from both Google and Apple in the mobile, automotive, and smart home markets. We might also see the first foldable touch smartphone released by Samsung under the name "Samsung X". In addition, we should see a growing trend of external app interfaces in various forms, such as bots, iMessage apps, app shortcuts, and voice-based features. The market attributes these trends to "app fatigue," which is pushing organizations to innovate and change the way their end users interact with apps and consume data. From a testing perspective, this is obviously a change from existing methods and will require re-thinking and new test case development. I addressed this in a recent blog – feel free to read more about it here.

Key Device Launches to Consider for an Updated Test Lab

Most of the below can be seen in the market calendar for 2017, but the highlights are listed here as well:

  • Samsung S8/S8 Edge, the flagship devices from Samsung, are due by February 2017 and should be the successors to the highly successful S7/S7 Edge.
  • iPhone 8/iPhone 8 Plus, together with the iOS 11 launch in mid-September 2017, will mark the 10th anniversary of the Apple iPhone series. This launch is expected to be a groundbreaking one for iOS users.
  • Huawei Mate 9/Mate 9 Pro – and the Huawei smartphone portfolio in general – continues its global growth. 2017 should extend this trend in China and India, but also, as seen in this Index report, in many European countries where devices like the Huawei P8, P9, and others are already in use.

From a web perspective, we do not expect any major surprises from the leading browsers like Chrome, Firefox, and Safari. For the Microsoft Edge browser, however, we expect a significant market-share uptick as more and more users adopt Windows 10 and abandon legacy Windows machines.

[Image: 2017 mobile and web market calendar]


In the Index report, you will find all the information necessary to plan better for 2017, market calendars for both mobile and web, plus a rich collection of insights and takeaways. DOWNLOAD HERE.

Happy Testing in 2017!