In the era of “[…] Driven Development” trends like BDD, TDD, and ATDD, it is also important to keep sight of the end goal of testing: the quality analysis phase.
In many of my engagements with customers, and from my own experience as a practitioner, I constantly hear the following pains:
- Test executions are not broken down by context, and are therefore too long to analyze and triage
- Planning test executions based on trends, experience, and insights is a challenge – e.g., which tests find more bugs than others?
- Dealing with flaky tests is an ongoing pain, especially around mobile apps and platforms
- On-demand quality dashboards that reflect app quality per CI job, per app build, per tested functional area, etc. are hard to come by
Introducing Reporting Test Driven Development (RTDD)
To address the above pains (and I am sure they are not the only ones), I came to the understanding that if Agile/DevOps teams start thinking about their test authoring and implementation with the end in mind (that is, the test reports), they can collect the value both at the end of each test cycle and earlier, during the test planning phase.
When teams leverage a test design pattern that assigns custom contextual tags to their tests, wrapping an entire test execution or a single test scenario with annotations like “Regression”, “Login”, “Search”, and so forth, the test suites suddenly become better structured, easier to maintain, and easy to include, exclude, and filter through at the end of an execution.
In addition, when the entire suite is customized by tags and annotations, management teams can easily retrieve on-demand quality dashboards and stay up to date with any given software iteration.
Finally, developers who receive the defect reports after execution can filter and drill down to the root cause in an easier and more efficient manner.
If you think about the above, using annotations to manage and filter test executions is not a new concept.
TestNG Annotations with Selenium Example (source: Guru99)
As seen above, there are supported ways to tag specific tests by their priority; it is just a matter of thinking about such tags from the beginning.
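As a minimal sketch of this style of tagging (the class, test names, and group names here are hypothetical, not taken from the Guru99 source), TestNG lets you attach priority and groups at authoring time:

```java
import org.testng.annotations.Test;

public class LoginTests {

    // Tagged for both the "regression" and "login" groups; priority
    // orders it ahead of the other tests in this class.
    @Test(priority = 1, groups = {"regression", "login"})
    public void validLoginShowsDashboard() {
        // ... drive the app and assert the dashboard is displayed
    }

    // Same regression group, different functional tag, so a smoke run
    // can exclude it while a full regression still picks it up.
    @Test(priority = 2, groups = {"regression", "search"})
    public void searchReturnsResults() {
        // ... perform a search and assert results are rendered
    }
}
```

A testng.xml suite file (or the Surefire `groups` property in a Maven build) can then include or exclude these groups per run, which is exactly the include/exclude filtering described above.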
Reverse engineering a large test suite is painful, hard to justify, and most often too late, since by then the product is already out there and the teams are left struggling with the four consequences mentioned above.
RTDD is all about putting structure, governance, and advanced capabilities into your test automation factory.
The following breakdown divides the various tags into three levels. It can serve as a reference that can be used immediately, either through the built-in tagging and annotations of TestNG or through other reporting solutions.
With these categories in mind, think about an existing test suite that you recently developed. Now, imagine the same test suite tagged according to the three categories below (a code sketch follows the list):
- Execution level tags
- This tag can encapsulate the entire build or CI job-related testing activities, or it can differentiate the tests by the test framework in which you developed the scripts. This is the highest classification level of tags that you would use.
- Test suite level tags
- This is where you start breaking down your test factory according to more specific identifiers, like your mobile environment, the high-level functionality under test, etc.
- Logical test level tags
- These are the most granular test tag identifiers, which you would define for each of your logical test steps to make it easy to filter, triage failures, and plan ongoing regressions based on code changes.
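To make the three levels concrete, here is a hedged sketch using plain TestNG groups; all group and test names are illustrative assumptions, not a prescribed taxonomy:

```java
import org.testng.annotations.Test;

public class CheckoutTests {

    // Level 1 (execution): "ci-nightly" ties the test to a CI job.
    // Level 2 (test suite): "mobile-android" / "checkout" identify the
    // environment and the functionality under test.
    // Level 3 (logical test): "guest-checkout" names the scenario itself.
    @Test(groups = {"ci-nightly", "mobile-android", "checkout", "guest-checkout"})
    public void guestCheckoutCompletesOrder() {
        // ... exercise the guest checkout flow and assert order confirmation
    }

    @Test(groups = {"ci-nightly", "mobile-android", "checkout", "saved-card"})
    public void savedCardCheckoutCompletesOrder() {
        // ... pay with a stored card and assert order confirmation
    }
}
```

Narrowing a nightly CI run to just the checkout area, or triaging failures by scenario, then becomes a group include/exclude rather than a code change.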
As a reference implementation of an RTDD solution, in addition to the basic TestNG implementation (which can be very powerful when used correctly with its listeners, predefined groups, and more), I would like to refer you to an open-source reporting SDK that enables you to do exactly what is described in this post.
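To illustrate the listener point, here is a minimal sketch (the listener class and output format are my own assumptions) of a TestNG listener that surfaces a failed test’s group tags for triage:

```java
import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

// Register via <listeners> in testng.xml or @Listeners on a test class.
public class GroupAwareListener extends TestListenerAdapter {

    @Override
    public void onTestFailure(ITestResult result) {
        // The groups attached to the failed test act as report tags,
        // so failures can be bucketed by functional area during triage.
        String[] groups = result.getMethod().getGroups();
        System.out.println("FAILED: " + result.getName()
                + " [tags: " + String.join(", ", groups) + "]");
    }
}
```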
Coming back to the reporting SDK: when using it with your mobile or responsive web test suites, you get both the dashboards seen below and faster defect resolution, with drill-downs by both test case and platform under test.
Code Sample: Using Geico RWD Site with Reporting TDD SDK (Source: My Personal GIT)
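The original sample lives in that repository; as a stand-in, here is a hedged sketch of what such a test looks like with Perfecto’s open-source Reportium client (assuming its publicly documented Java API; the URL, project name, and tag values are illustrative):

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import com.perfecto.reportium.client.ReportiumClient;
import com.perfecto.reportium.client.ReportiumClientFactory;
import com.perfecto.reportium.model.PerfectoExecutionContext;
import com.perfecto.reportium.model.Project;
import com.perfecto.reportium.test.TestContext;
import com.perfecto.reportium.test.result.TestResultFactory;

public class GeicoQuoteTest {

    public static void main(String[] args) {
        // In real Perfecto usage this would be a RemoteWebDriver pointed
        // at the Perfecto cloud; a local ChromeDriver keeps the sketch
        // self-contained.
        WebDriver driver = new ChromeDriver();

        // Execution-level context: project coordinates plus CI-level tags.
        PerfectoExecutionContext context =
                new PerfectoExecutionContext.PerfectoExecutionContextBuilder()
                        .withProject(new Project("RTDD Demo", "1.0"))
                        .withContextTags("regression", "responsive-web")
                        .withWebDriver(driver)
                        .build();
        ReportiumClient reportium =
                new ReportiumClientFactory().createPerfectoReportiumClient(context);

        try {
            // Test-level tags wrap this single scenario for later filtering.
            reportium.testStart("Geico - start insurance quote",
                    new TestContext("quote", "login"));

            reportium.stepStart("Navigate to geico.com");
            driver.get("https://www.geico.com");
            reportium.stepEnd();

            reportium.testStop(TestResultFactory.createSuccess());
        } catch (Exception e) {
            reportium.testStop(TestResultFactory.createFailure(e.getMessage(), e));
        } finally {
            driver.quit();
        }
    }
}
```

The context tags and test-level tags in this sketch are what feed the dashboard filtering shown next.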
Digital Dashboard Example With Predefined ContextTags (source: Perfecto)
Bottom Line
What I have documented above should allow managers, test automation engineers, and developers of UI, unit, and other CI-related tests to extend a legacy test report, a TestNG report, or any other format into a more customizable test report that, as demonstrated above, allows them to achieve the following outcomes:
- Better structured test scenarios and test suites
- Use tagging from early test authoring as a method for faster triaging and prioritizing fixes
- Turn tag-based tests into planned test activities (CI, regression, specific functional-area testing, etc.)
- Easily filter large volumes of test data and drill down into specific failures per test, per platform, per test result, or by group
- Eliminate flaky tests through high-quality visibility into failures
The result of the above is a methodology-driven RTDD workflow that is much easier to maintain than before.
Happy Testing (as always)!