Most Recommended Software Testing Books to Read in 2020 and Beyond

As we kick off a new decade of software development and testing, and as digital continues to challenge test automation engineers who are trying to fit testing into shorter-than-ever cycles, here are the top recommended software testing books to consider. The order isn't a priority – they are all awesome books and equally recommended:

#1 Agile Testing – Lisa Crispin and Janet Gregory

Readers will come away from this book understanding

  • How to get testers engaged in agile development
  • Where testers and QA managers fit on an agile team
  • What to look for when hiring an agile tester
  • How to transition from a traditional cycle to agile development
  • How to complete testing activities in short iterations
  • How to use tests to successfully guide development
  • How to overcome barriers to test automation

#2 More Agile Testing – Lisa Crispin and Janet Gregory

Main learning objectives from the book:

  • How to clarify testing activities within the team
  • Ways to collaborate with business experts to identify valuable features and deliver the right capabilities
  • How to design automated tests for superior reliability and easier maintenance
  • How agile team members can improve and expand their testing skills
  • How to plan “just enough,” balancing small increments with larger feature sets and the entire system
  • How to use testing to identify and mitigate risks associated with your current agile processes and to prevent defects
  • How to address challenges within your product or organizational context
  • How to perform exploratory testing using “personas” and “tours”
  • Exploratory testing approaches that engage the whole team, using test charters with session- and thread-based techniques
  • How to bring new agile testers up to speed quickly–without overwhelming them

#3 Hands-On Mobile App Testing – Daniel Knott

Readers will learn how to

  • Establish your optimal mobile test and launch strategy
  • Create tests that reflect your customers, data networks, devices, and business models
  • Choose and implement the best Android and iOS testing tools
  • Automate testing while ensuring comprehensive coverage
  • Master both functional and nonfunctional approaches to testing
  • Address mobile’s rapid release cycles
  • Test on emulators, simulators, and actual devices
  • Test native, hybrid, and Web mobile apps
  • Gain value from crowd and cloud testing (and understand their limitations)
  • Test database access and local storage
  • Drive value from testing throughout your app life-cycle
  • Start testing wearables, connected homes/cars, and Internet of Things devices

#4 Enterprise Continuous Testing – Wolfgang Platz

Enterprise Continuous Testing: Transforming Testing for Agile and DevOps introduces a Continuous Testing strategy that helps enterprises accelerate and prioritize testing to meet the needs of fast-paced Agile and DevOps initiatives. Software testing has traditionally been the enemy of speed and innovation—a slow, costly process that delays releases while delivering questionable business value. This new strategy helps you test smarter, so testing provides rapid insight into what matters most to the business.

#5 Continuous Testing for DevOps Professionals – Eran Kinsbruner and Leading Market Experts

Continuous Testing for DevOps Professionals is the definitive guide for DevOps teams and covers the best practices required to excel at Continuous Testing (CT) at each step of the DevOps pipeline. It was developed in collaboration with top industry experts from across the DevOps domain from leading companies such as CloudBees, Tricentis, Testim.io, Test.ai, Perfecto, and many more. The book is aimed at all DevOps practitioners, including software developers, testers, operations managers, and IT/business executives.

#6 Accelerate: The Science of Lean Software and DevOps, Nicole Forsgren

How can we apply technology to drive business value? For years, we’ve been told that the performance of software delivery teams doesn’t matter―that it can’t provide a competitive advantage to our companies. Through four years of groundbreaking research, including data collected from the State of DevOps reports conducted with Puppet, Dr. Nicole Forsgren, Jez Humble, and Gene Kim set out to find a way to measure software delivery performance―and what drives it―using rigorous statistical methods. This book presents both the findings and the science behind that research, making the information accessible for readers to apply in their own organizations.

#7 Complete Guide to Test Automation, Arnon Axelrod

  • Know the real value to be expected from test automation
  • Discover the key traits that will make your test automation project succeed
  • Be aware of the different considerations to take into account when planning automated tests vs. manual tests
  • Determine who should implement the tests and the implications of this decision
  • Architect the test project and fit it to the architecture of the tested application
  • Design and implement highly reliable automated tests
  • Begin gaining value from test automation earlier
  • Integrate test automation into the business processes of the development team
  • Leverage test automation to improve your organization’s performance and quality, even without formal authority
  • Understand how different types of automated tests will fit into your testing strategy, including unit testing, load and performance testing, visual testing, and more

#8 The DevOps Handbook, Gene Kim, Jez Humble, Patrick Debois, John Willis

More than ever, the effective management of technology is critical for business competitiveness. For decades, technology leaders have struggled to balance agility, reliability, and security. The consequences of failure have never been greater―whether it’s the healthcare.gov debacle, cardholder data breaches, or missing the boat with Big Data in the cloud. And yet, high performers using DevOps principles, such as Google, Amazon, Facebook, Etsy, and Netflix, are routinely and reliably deploying code into production hundreds, or even thousands, of times per day. Following in the footsteps of The Phoenix Project, The DevOps Handbook shows leaders how to replicate these incredible outcomes, by showing how to integrate Product Management, Development, QA, IT Operations, and Information Security to elevate your company and win in the marketplace.

#9 The Digital Quality Handbook – Eran Kinsbruner and Industry Experts

As mobile and web technologies continue to expand and drive large organizational business in virtually every vertical or industry, it is critical to understand how to take existing release practices for mobile and web apps to the next level, including the software development life cycle (SDLC), tools, quality, etc. Organizations that are already enjoying the power of digital are still struggling with various challenges that can be related to many factors, such as:

  • SDLC and processes maturity
  • Expanding test coverage to include more non-functional testing, user condition testing, etc.
  • Coping with existing limitations of open source tools and frameworks
  • Sustaining correctly sized and up-to-date mobile test labs
  • Getting proper quality insights for each test cycle, prior to and post production
  • Branching wisely cross-platform and cross-feature test suites

#10 Specification by Example: How Successful Teams Deliver the Right Software, Gojko Adzic

Gojko has written a list of great books; this is one of the great ones, but check out his other books as well.

Specification by Example is an emerging practice for creating software based on realistic examples, bridging the communication gap between business stakeholders and the dev teams building the software. In this book, author Gojko Adzic distills interviews with successful teams worldwide, sharing how they specify, develop, and deliver software, without defects, in short iterative delivery cycles.

#11 Clean Code: A Handbook of Agile Software Craftsmanship, Robert C. Martin

Readers will come away from this book understanding

  • How to tell the difference between good and bad code
  • How to write good code and how to transform bad code into good code
  • How to create good names, good functions, good objects, and good classes
  • How to format code for maximum readability
  • How to implement complete error handling without obscuring code logic
  • How to unit test and practice test-driven development

#12 Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, Jez Humble and David Farley

The authors introduce state-of-the-art techniques, including automated infrastructure management and data migration, and the use of virtualization. For each, they review key issues, identify best practices, and demonstrate how to mitigate risks. Coverage includes

  • Automating all facets of building, integrating, testing, and deploying software
  • Implementing deployment pipelines at team and organizational levels
  • Improving collaboration between developers, testers, and operations
  • Developing features incrementally on large and distributed teams
  • Implementing an effective configuration management strategy
  • Automating acceptance testing, from analysis to implementation
  • Testing capacity and other non-functional requirements
  • Implementing continuous deployment and zero-downtime releases
  • Managing infrastructure, data, components and dependencies
  • Navigating risk management, compliance, and auditing

#13 The Guide to Software Testability, Ash Winter and Rob Meaney

Learn practical insights on how testability can help bring teams together to observe, control, and understand the systems they build, enabling them to better meet customer needs, achieve a transparent level of quality, and deliver predictably.

#14 A Practical Guide to Testing in DevOps, Katrina Clokie

A Practical Guide to Testing in DevOps offers direction and advice to anyone involved in testing in a DevOps environment.

#15 Test Automation University, Angie Jones and Applitools

While not a book, a great shout-out to my colleague, friend, and co-author of one of the above-mentioned books, Angie Jones, for launching, leading, and building the Test Automation University. This is obviously an additional online resource that complements books and other written material.

Summary

I am sure that there are plenty of other great and practical books, but I went with this list as a start. If you feel that an additional up-to-date book belongs here, reach out to me, and I will be more than happy to add it to this list.

Happy Reading!

 

 

Core Pillars of Stable Test Automation Suites

For those who follow me, you already know that to build continuous testing suites that scale, and that last for more than a few software iterations, teams must follow rigorous processes.

When I say teams, I mean developers, test engineers, and business testers.

 

 

This post isn’t about balancing the workload between personas, but rather about continuously sharpening the test suites’ value to the entire team.

As we close a decade and enter a new one, and with some cool advancements in technologies that include AI/ML but also challenging platforms like PWAs, foldables, 5G, and IoT – it’s the perfect time to re-evaluate our test management and maintenance processes.

What Causes Test Instabilities?

Especially across the digital landscape, there are quite a few recurring patterns that cause test instabilities or flakiness.

  • There are the scripts and frameworks that we use – in this category we can list object locator strategy, popups (security, system), coding skills, timing and step synchronization, and more.
  • There are backend issues around services, networking, and up-to-date test data and environments.
  • There are lab-related issues that include platforms being unavailable or poorly set up for testing (screen is locked, network is disconnected, app isn’t installed, device isn’t in a ready state).
  • Lastly, orchestration – this is a tricky root cause, since there are not many early warnings for these kinds of issues until you execute and scale up. Not having sufficient platforms to test against, or knowing that a platform is in use, is often hard to detect (but not impossible :))

In addition to the above four major categories, there are of course also software development dynamics, which include advancements in the product, market changes that impact your test scripts and infrastructure, and external dependencies that you cannot always control (network carriers, 3rd-party services that you depend on, etc.).

Test Automation Certification for Value vs. ROI

While there is no 100% safety net for the above-mentioned root causes (RCAs), there are a few steps that teams can take and enforce to minimize the risks.

My few recommended tips that should be examined every 2-3 software releases/iterations are:

  1. Think with the end in mind when you write a test automation script. Will the test provide value more than just once? Which test suites/types will such a test fit (regression, functional, unit, etc.)?
  2. Consider the maintainability of such a test script over time: how difficult will that be, and will it be worth the hassle?
  3. Certify the test script prior to integrating it into any pipeline. Certification should address the stability of the test across multiple platforms ("only works on my machine" is not an option) and the test execution duration (less than 5 minutes, please). If it is a test that has detected defects, it should somehow be scored as a higher-value test than others (a rough sketch of such a gate follows this list).
  4. Monitor and measure the impact of your test suites and course-correct no less often than once a quarter!
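
To make tip #3 more concrete, here is a minimal sketch of what such a certification gate could look like, assuming your reporting tool can export per-test history (pass rate, platforms passed, duration, defects found). The function name, fields, and thresholds are all hypothetical and should be tuned per team:

// certifyTest.js – hypothetical helper, not a specific vendor API.
// Decides whether a test script has earned its place in the pipeline.
function certifyTest(history) {
  const passRate = history.passes / (history.passes + history.failures);
  const stableAcrossPlatforms = history.platformsPassed >= 3; // "works only on my machine" is not an option
  const fastEnough = history.avgDurationSec < 300;            // less than 5 minutes
  const score =
    (passRate >= 0.95 ? 40 : 0) +
    (stableAcrossPlatforms ? 30 : 0) +
    (fastEnough ? 20 : 0) +
    (history.defectsFound > 0 ? 10 : 0);                      // defect-detecting tests score higher
  return { certified: score >= 70, score };
}

// Example usage with exported execution history (made-up numbers):
console.log(certifyTest({ passes: 48, failures: 2, platformsPassed: 4, avgDurationSec: 180, defectsFound: 1 }));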

The above image is an example of an external dependency that can impact the performance of your app. Assuming you test your mobile app against real cellular networks, you should acknowledge that not every carrier behaves and performs the same, and this may impact your overall results. Knowing, measuring, and taking this into account can prevent false negatives and other wrong conclusions.

Lastly, and as this section implies – it is time we stop measuring test automation in ROI and dollars, and instead measure the clear value and risk mitigation it brings to the business. Test automation value is far wider than the legacy ROI metrics, which simply covered the cost of test creation and execution regardless of what the test executions actually result in.

I have recently delivered a session about test automation stability for the OnlineTestConf. You can view the recording and slides here

Power Of Analytics

While a lot of the above might be clear to many practitioners, what I often see in the market is that the time element prevents teams from actually seeing what’s wrong with their testing infrastructure. Teams invest a lot of money, resources, and of course time into building a cool set of testing suites; however, once they’re done, the autopilot steps in, and until something is broken, these suites remain blind spots. This is, again, a legacy practice, and in the modern DevOps/Agile/Continuous Testing era, teams ought to monitor and gain maximum visibility into their test code continuously!

As the above visual suggests – only when you look into the reports in a smart way can you improve quality, deliver fast and up-to-date feedback, and yes – get some ROI (in the form of value) from your investment.

Imagine you could, on demand, answer questions like:

  • What are my most problematic test scenarios?
  • What are my most flaky or buggy platforms (mobile/web)?
  • Which job within my CI is the longest?
  • Which test cases must be retired or be placed in a “blocked” bucket for maintenance (due to popups, objects, other)?

Knowing these answers at, and perhaps before, the moment you click the RUN button could be a major productivity boost for your entire cycle, wouldn’t it?

Below are two real-life examples from the Perfecto tool, but you can think about similar implementations as well – based on AI and smart algorithms, the tool scans the test reports for pre-defined patterns like popups, elements not found, platforms in use, and others, and also allows teams to customize these and add their own unique and relevant RCAs to better slice and dice the test data.
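
As an illustration only (this is not the vendor’s actual API), a home-grown version of such classification could be as simple as matching the failure messages in your exported test reports against known patterns; the categories and regular expressions below are examples to extend with your own RCAs:

// classifyFailure.js – a naive sketch of report-based root cause classification.
const knownPatterns = [
  { rca: 'Popup / system dialog', regex: /permission|alert|popup/i },
  { rca: 'Element not found',     regex: /no such element|element not found|locator/i },
  { rca: 'Platform unavailable',  regex: /device (busy|offline|in use)|session not created/i },
];

function classifyFailure(message) {
  const match = knownPatterns.find(p => p.regex.test(message));
  return match ? match.rca : 'Unclassified – needs manual triage';
}

// Example: run over an exported report (array of { test, status, message } records):
const report = [{ test: 'login', status: 'failed', message: 'NoSuchElementError: locator #user not found' }];
report.filter(r => r.status === 'failed')
      .forEach(r => console.log(r.test, '->', classifyFailure(r.message)));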

Bottom Line

As we wrap up a decade of software development and testing, it’s perhaps time to take our test automation up a level and inject smart algorithms, processes, and more, so we are always on top of what we build and can improve whenever there is a need.

Happy Testing!

Resolving The Quality Visibility of Continuous Testing Across The DevOps Pipeline Environments

Guest Blog Post by: Tzvika Shahaf, Director of Product Management at Perfecto & A Digital Reporting and Analysis Expert

Intro

One of the DevOps challenges in today’s journey for high product release velocity and quality, is to keep track of the CI Jobs/Test stability and health over different environments.

Basically, this is one of the top reasons bugs slip to production – in other words, a lack of visibility into the DevOps delivery pipeline.

Real Life Example

Recently, I met a director of DevOps at a big US-based enterprise who shared with me one of his challenges in this respect.

At the beginning of our meeting he indicated that his organization’s testing activity lacks a view of the feature branches that are under each team’s responsibility. This gap creates blind spots, where even the release manager struggles to assemble a reliable picture of the quality status.

The release manager and the QA lead are responsible for verifying that, after each and every build cycle execution, the failures that occurred are not critical, and accordingly for approving the merge to the master branch (while also issuing a defect in Jira for bugs/issues that weren’t fixed). The most relevant view for these personas is a suite-level list report, while the QA lead is still busy drilling down to the individual test report level, as he is also interested in understanding the failure root cause analysis (RCA).

As part of the triage process, the team is looking to understand the trending history of a job, to see what the overall test result status was under each build. The team is mainly interested in understanding whether an issue is an existing defect or a new one. In addition, they look for an option to comment during the triage process (think about it as an offline action taken post-execution).

Focusing on the Problem

So far so good, right? But here’s the problem: the work conducted by each team is siloed in that team’s CI view. There’s no data aggregation to display a holistic overview of the CI pipeline health/trend.

Each team is working on a different CI branch, and during the SDLC the other teams have no visibility into what happened before or what is happening now.

Even when there’s a bug, the teams are required to issue the defect to a different Jira project – so the bug-fixing process adds more inefficiency, and hence time, to the release process.

When the process is broken as described above, each new piece of functionality at the system test level or stage is merged while not all failures are inspected (lack of visibility from within the CI).

Jenkins will ignore the system test results and merge even if there’s a failure.

The Right Approach to Solving The problem

The desired visibility from a DevOps perspective is to cover the Jenkins job/build directly from the testing dashboard, across all branches, in order to understand what was changed in the specific build that failed the tests: which changes in the source code were made? Etc.

What if these teams had a CI overview that captured all testing data, including:

  1. Project name & version
  2. Job name /Build number
  3. Feature Branch name or Environment description (Dev, Staging, Production etc.)
  4. Scrum Team name  – Optional
  5. Product type – optional

Obviously, items 1-3 are a MUST in such a solution, since when displayed in the dashboard UI, teams gain maximum visibility into the relevant module the user is referring to when a bug is found, and this is also part of the standard DevOps process. Double-clicking on the visibility and efficiency point, such options can significantly narrow the view of the entire dashboard for each team lead/member and help them focus only on their relevant feature branches (see the sketch below for how such context could be attached to each run).
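
A minimal sketch of how such context could be attached to every execution is shown below. It assumes a Jenkins-style environment where JOB_NAME and BUILD_NUMBER are exposed as environment variables; the field names and the way the context is reported are illustrative, not a specific vendor’s reporting SDK:

// executionContext.js – attach CI/branch context to every test run (illustrative only).
const executionContext = {
  project: { name: 'MyBankingApp', version: '2.4.0' },                    // 1. project name & version
  job: { name: process.env.JOB_NAME, build: process.env.BUILD_NUMBER },   // 2. job name / build number
  branch: process.env.GIT_BRANCH || 'feature/login-flow',                 // 3. feature branch or environment
  team: 'Team Phoenix',                                                   // 4. scrum team (optional)
  product: 'iOS',                                                         // 5. product type (optional)
};

// Hand the context to whatever reporting mechanism you use, for example as
// structured output that the dashboard ingests:
console.log(JSON.stringify({ event: 'testRunStarted', ...executionContext }));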

When the QA lead reviews the CI dashboard, he/she can mark a group of tests and name the actual failure reason, which can sometimes be a bug, a device/environment issue, or something else.

Feel free to reach out to me if you run into such issues, or have any insights or comment on this point of view – Tzvika’s Twitter Handle

Thank You

Tzvika Shahaf

Eliminating Mobile Test Automation Flakiness and More

Mobile testing by definition is an unstable, flaky and unpredictable activity.

When you think you have covered all corners and created a “stable” environment, your test cycle still often gets stuck due to one or a few items.

In this post, I’ll try to identify some of the key root causes for test automation flakiness, and suggest some preventive actions to eliminate them.

What Can Block Your Test Automation Flow?

From ongoing experience, the key items that often block test automation of mobile apps are the following:

  • Popups – security, available OS upgrades, login issues, etc.
  • Ready state of DUTs – the test meets the device in a wrong state
  • Environment – device battery level, network connectivity, etc.
  • Tools and test framework fit – are you using the right tool for the job?
  • Use of the “right” object identifiers, POM, and automation best practices
  • Automation at scale – what to automate, and on what platforms?

All of the above contribute in one way or another to the end-to-end test automation execution.

We can divide the above six bullets into two categories:

  1. Environment
  2. Best Practices

Solving The Environment Factor in Mobile Test Automation

In order to address the test environment contribution to test flakiness, engineers need to have full control over the environment they operate in.

If the test environment and the devices under test (DUT) are not fully managed, controlled, and secured, the entire operation is at risk. In addition, when we use the term “test environment readiness”, it should reflect the following:

  1. Devices are always cleaned up prior to test execution, or are in a known “state”/baseline familiar to the developers and testers.
  2. If there are repetitive known popups such as security permissions, install/uninstall popups, OS upgrades, or other app-specific popups, they should be accounted for either in the prerequisites of the test or prevented proactively prior to the execution.
  3. Network stability is often key – unstable networks mean unstable tests. Engineers need to make sure that the devices are connected to a WiFi or cellular network before test execution starts. This can be done either as a prerequisite validation of the network or through more generic environment monitoring (see the readiness-gate sketch after this list).
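
A rough sketch of such a pre-run readiness gate is shown below. The three helper functions are stubs to be wired to whatever your device lab, cloud vendor, or Appium client actually exposes; everything here is an assumption for illustration:

// readinessGate.js – run before the suite and fail fast if the device isn't ready.
async function checkBattery(device)  { return device.batteryLevel; }     // stub: return 0-100 from your lab API
async function checkNetwork(device)  { return device.online === true; }  // stub: WiFi/cellular connectivity check
async function resetAppState(device) { /* reinstall app / clear data via your lab API */ }

async function ensureDeviceReady(device) {
  if (await checkBattery(device) < 20) {
    throw new Error(`Device ${device.id}: battery too low for a stable run`);
  }
  if (!(await checkNetwork(device))) {
    throw new Error(`Device ${device.id}: no WiFi/cellular connectivity`);
  }
  await resetAppState(device); // known baseline / clean install before every run
}

// Example usage in a global setup hook (made-up device record):
ensureDeviceReady({ id: 'Pixel-2-lab-03', batteryLevel: 85, online: true })
  .then(() => console.log('Device ready'))
  .catch(err => { console.error(err.message); process.exit(1); });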

 

Following Best Practices

In previous blogs, I addressed the importance of selecting the right testing frameworks, IDEs as well as leveraging the cloud as part of test automation at scale. After eliminating the risks of the test environment in the above section, it is important to make sure that both developers and test automation engineers follow proper guidelines and practices for their test automation workflow.

Since testing shifted left towards the development team, it is important that both dev and test align on few things:

  1. What to automate?
  2. On what platforms to test?
  3. How to automate (best practices)?
  4. Which tools should be used to automate?
  5. What goes into CI, and what is left outside?
  6. What is the role of Manual and Exploratory testing in the overall cycle?
  7. What is the role of Non-Functional testing in the cycle?

The above points (a partial list) cover some fundamental questions that each individual should be asking continuously to ensure that their team is heading in the right direction.

Each of the above bullets can be attributed to at least one, if not many, best practices.

  1. To address the key question of what to automate, here’s a great tool (see screenshot below) provided by Angie Jones. In her suggested tool, each test scenario is validated through a set of metrics that add up to a score. The highest-scoring test cases are great candidates for automation, while the lowest ones can obviously be skipped.
  2. To address the second question on platform selection, teams should monitor their web and mobile traffic on an ongoing basis, perform market research, and learn from existing market reports/guides that address test coverage.
  3. Answering the question “how to automate” is a big one :). There is more than one thing to keep in mind, but in general – automation should be repetitive, stable, and time- and cost-efficient; if something prevents one or more of these objectives, it’s a sign that you’re not following best practices. Some best practices revolve around using proper object identifiers (see the page object sketch after this list), others around building the automation framework with proper tags and catches so that when something breaks it is easy to address, and more.
  4. The question around tools and test frameworks is again a big one. Answering it right depends on the project requirements and complexity, the application type (native, web, responsive, PWA), and the test types (functional, non-functional, unit). In the end, it is important to have a mix of tools that can “play” nicely together and provide unique value without stepping on each other.
  5. Tests that enter CI should be picked very carefully. Here, it is not about the quantity of the tests but about the quality, stability, and value these tests can bring, focusing on fast feedback. If a test requires heavy environment setup, is flaky by nature, or takes too much time to run, it might not be wise to include it in the CI.
  6. Addressing the various testing types in questions 6 and 7 depends on the project objectives; however, there is clear value in each of the following tests in mobile app quality assurance:
    1. Accessibility and performance provide key UX indicators about your app and should be automated as much as possible
    2. Security testing is a common oversight by many teams and should be covered through various code analysis, OWASP validations and more.
    3. Exploratory, manual and crowd testing provide another layer of test coverage and insights into your overall app quality, hence, should be in your test plan and divided throughout the DevOps cycle.
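
As a small illustration of the “proper object identifiers” point above, here is a minimal page object sketch using Puppeteer-style calls; the page, selectors, and credentials are made up for the example:

// loginPage.js – a minimal page object: selectors live in one place,
// so a UI change means one fix instead of hunting through every test.
class LoginPage {
  constructor(page) {
    this.page = page;
    this.selectors = {          // prefer stable ids over brittle XPath chains
      user: '#username',
      pass: '#password',
      submit: '#login-button',
    };
  }

  async login(user, pass) {
    await this.page.type(this.selectors.user, user);
    await this.page.type(this.selectors.pass, pass);
    await this.page.click(this.selectors.submit);
  }
}

module.exports = LoginPage;
// In a test: const loginPage = new LoginPage(page); await loginPage.login('demo', 'secret');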

Happy Testing

The Rise of Automation: Why Coding is Becoming a Job for Everyone

Guest Blog by Wendy Dessler

Photo by Alex Knight on Unsplash

According to USA Today, as many as 73 million jobs may not exist by 2030; however, we could see benefits like an increase in productivity and economic growth “counteract” job losses. But, will it? Will it not? That is a question several economists mull over.

What we can say is that, given our increasing reliance on and incorporation of technology, we predict that coding will be a skill job seekers list on their applications—along with the usual: typing, Excel savviness, multitasking, you name it.

But does everyone have to be proficient in coding? Are all of our jobs reliant on how well we know how to code? Read on to find out!

Automation Affects the Job Market   

Because of automation, we are seeing more self-checkouts in grocery stores and convenience stores. Even some fast food chains—like Wendy’s—are planning to install not one but more than 1,000 self-ordering kiosks in their stores.

The Why

The truth is, automation cuts market costs and saves companies a lot of money. Think about it: unlike an employee, self-ordering kiosks and self-checkouts (you name it) aren’t going to get tired; they won’t come in late to work, and the chances of getting an order wrong are a lot lower. In other words, it’s safe to say automation minimizes human error.

More Automation Threatens Certain Industries

According to CNN Money, cashiers and toll booth operators, drivers, and fast food jobs are all at risk of becoming fully automated; however, other jobs like nurses, doctors, youth sports coaches, hairstylists and cosmetologists, songwriters, and social workers are far from being at risk.

Pretty much any job that incorporates a high degree of “humanness”—like empathizing with the client or using emotional nuance to do a better job—is in the clear.

 

But how are tech industries faring? How is automation affecting the way developers and quality assurance professionals do their jobs?

A Shift in the Tech Sector

Thanks to automation, we even see a shift in the tech sector. DevOps teams, for instance, are responsible for deploying automation (not to mention making sure the site is up and the server doesn’t crash, etc.).

Meanwhile, those in the quality assurance sector are relying more and more on coding to test products before they are distributed.

 

In some instances, they even do some degree of auto-testing, and some may use error monitoring software to rule out bugs, which you can learn more about at https://stackify.com/error-monitoring/.  

But Where Does Coding Factor In?

The age of automation is upon us, and with it, employees and job seekers can strengthen their skill sets by learning how to code. However—and this is a big however—unless you work in the tech sector, you probably don’t have to be a coding expert.

 

Even Low-Risk Jobs Can Benefit from Knowing Some Coding Basics

As we mentioned, songwriters aren’t at risk of automation (as of yet), but that doesn’t mean they can’t benefit from learning some code and using automation in their song recording processes.

Long story short, learning how to code can only make you a more in-demand employee. But, like we said, unless your job is tech-related, it isn’t an end-all-be-all.

Photo by John Schnobrich on Unsplash

Final Thoughts

Automation doesn’t have to be a cause for concern. Remember, we’ve gone through several economic transitions—such as the Industrial Revolution or even the shift from letter writing to email—that have changed how we do business (for the better).

What are your thoughts on automation? Will AI take little, some, or all of our jobs? Do you think it’s a positive or a negative? We want to know your thoughts; be sure to leave a comment in the comments section below.

Getting Started With Headless Browser Testing

[Guest Blog by Uzi Eilon, CTO, Perfecto]

The “shift left” trend is actually happening: developers, as part of the DevOps pipeline, need to test more and add more test automation in order to release faster.

In addition, those tests are almost the last barrier before production, because traditional testing is going away.

In such cases, standard unit tests are not good enough, and E2E tests are complicated and require a longer time to set up and prepare.

This is the reason both Google and Mozilla released new JS headless browsers to help their developers execute automated tests.

The same happened in the mobile area, where Apple and Google released XCUITest and Espresso.

Headless browsers provide developers with the following capabilities:

  • Same language, same IDE, same working environment:
    Most web developers work with JS, so these browsers are JS platforms; to add a new test you open a new class and write standard JS code.
  • Fast feedback and execution:
    These tests need to be executed fast (sometimes on every commit); these browsers reduce the UI and rendering “noise”, connect to the elements directly, and run very fast.
  • Easy to set up:
    Developers’ time is expensive, and they will not adopt complicated processes for testing; the setup of these tools is a simple npm installation.
  • Access to all the DevTools capabilities:
    Developers need more details; these tools give access to all the DevTools data, including accessibility, network, logs, security, and more.
    Smart tests can be very powerful and cover not only functionality but also efficiency.

In order to understand more, I played with Puppeteer, and I’m happy to share my thoughts with you.

Installation

Very simple

npm i --save puppeteer

Documentation

There are not a lot of examples or discussions about specific issues, but I did find the API documentation, which contains everything I was looking for.

Objects identification

Intuitive – the same way I connect to my objects via any JS.

Example:

  • by id: page.type('#firstName', 'Uzi');
  • by class: page.type('.class', 'Uzi');

Sync and waiting for elements

In this case, I have to admit I struggled with the standard wait-for-navigation command; it was not stable:

await page.waitForNavigation({waitUntil: 'load'})

In the end, I used the following:

await page.waitForSelector('#firstName', {visible: true}).then(() => {
  // do the actions per page
  page.screenshot({path: 'then.png', fullPage: false});
});

 

UI

As part of my test, I tried to verify the screen by taking a screenshot. I liked the way I could change the browser UI capabilities and configure my page:

const fullScreen = {
  deviceScaleFactor: 1,
  hasTouch: false,
  height: 2400,
  isLandscape: false,
  isMobile: false,
  width: 1800,
  fullPage: true  // note: fullPage is a screenshot option; setViewport ignores it
};
page.setViewport(fullScreen)

 

Other DevTools options:

It is very easy to use; for example, I would like to see all the links in a frame:

for (let child of frame.childFrames()) {
  dumpFrameTree(child, indent + '  ');
}

 

Summary:

Using a headless browser like Puppeteer was very easy and intuitive; it felt natural to add it as part of my testing code.

In addition, setting up the headless browser environment and executing was very simple and fast.

On the less convenient side, what I found was that to get the results directly into the CI, one should add more scripting code or use other execution methods.

Lastly, this approach is still ramping up, hence it has some small bugs in a few features and also lacks documentation and samples; however, as an early testing tool for white-box/unit testing, it is very promising and well-positioned to complement tools such as Selenium. As a matter of fact, what I also saw is that other browser vendors, such as Mozilla and Microsoft, are taking the same approach and investing in headless browsers.

 

P.S.: If you want to learn more about the growing technologies and trends in the market, I encourage you to follow my podcast with Uzi Eilon, called Testium (Episode 6 is fully dedicated to this subject).

The Rise of Progressive Web Apps and The Impact on Cross-Browser Testing

If we all thought we had figured out the digital market from an application-type perspective, having seen the rise of mobile and the transformation of web to responsive web – now we should all start getting used to a whole new type of application that should change the entire user experience and offer new web functionality – meet PWAs.

Google.com is a very clear example of such an app, and Apple is about to introduce PWA capabilities in its upcoming WebKit engine.

What are Progressive Web Apps?

Referring to Google’s official website dedicated to PWAs, Google defines a PWA as “a new way to deliver amazing user experiences on the web”.

David Rubinstein from SD Times actually adds even more insight into these new app types:

PWAs can use device features like cameras, data storage, GPS and motion sensors, face detection, Push notifications, and more. This will pave the way for AR and VR experiences, right on the web. Imagine being able to redecorate your home virtually using nothing but your phone and a PWA. Pan your camera around a room, then use tools on a website to change wall colors, try out furniture, hang new artwork, and more. It may feel like a futuristic fantasy, but it’s close to reality.

The key idea behind PWAs is to provide a rich end-user alternative to native apps. These apps can be launched from the device home screen, adding layers of performance, reliability, and functionality to a web application without the need to install anything from the app store. In addition, these apps, which are still JavaScript-based but use additional specific APIs, can work even when there’s no internet connection, and that’s a huge advantage.

PWAs leverage two main architectural features:

  • Service Workers – give developers the ability to manually manage the caching of assets and control the experience when there is no network connectivity.
  • Web App Manifest – the file within the PWA that describes the app and provides app-specific metadata like icons, splash screens, and more. See below an example Google offers for such a descriptor file (JSON):
{
  "short_name": "AirHorner",
  "name": "Kinlan's AirHorner of Infamy",
  "icons": [
    {
      "src": "launcher-icon-1x.png",
      "type": "image/png",
      "sizes": "48x48"
    },
    {
      "src": "launcher-icon-2x.png",
      "type": "image/png",
      "sizes": "96x96"
    },
    {
      "src": "launcher-icon-4x.png",
      "type": "image/png",
      "sizes": "192x192"
    }
  ],
  "start_url": "index.html?launcher=true"
}
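
To complement the manifest example, below is a minimal service worker sketch (registration plus a cache-first fetch handler) using the standard browser APIs; the file names and the cached asset list are illustrative:

// In the page (e.g. app.js): register the service worker if the browser supports it.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then(reg => console.log('Service worker registered, scope:', reg.scope))
    .catch(err => console.error('Service worker registration failed:', err));
}

// In sw.js: pre-cache the app shell and serve it cache-first, so the PWA
// keeps working when there is no network connectivity.
const CACHE = 'app-shell-v1';
self.addEventListener('install', event => {
  event.waitUntil(caches.open(CACHE).then(cache =>
    cache.addAll(['/', '/index.html', '/app.js', '/styles.css'])));
});
self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request)));
});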


In order to check the correctness of your PWA against the checklist and validate the entire app, Google offers some tools as part of its documentation, like the Progressive Web Apps checklist and the Chrome built-in DevTools (see the visual below).

As deeply covered in this great DZone article, good PWAs also implement the PRPL pattern recommended by Google to enhance performance.
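
If you prefer the command line over the DevTools UI, the same kind of audit can be run with the Lighthouse CLI (flag names may vary by version, so treat this as a sketch):

npm install -g lighthouse
lighthouse https://example.com --only-categories=pwa --view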

What Are the Implications of PWAs for Cross-Browser Sites and Mobile Apps?

To understand the implications, I recommend dividing the question by the impacted personas:

  1. Developers
  2. Testers
  3. Business
  4. End-Users

Each of the above personas will see different benefits and implications when adopting this kind of app.

Developers

For existing web developers, this new app type should present a whole new world of innovative opportunities. Since PWAs are still JavaScript-based apps, developers do not need to gain new skills, but rather learn the new APIs offered through Service Workers and see how they can be leveraged by their websites. Since a PWA runs on a mobile device and can be launched without a network connection and without any installation, it obviously needs to be validated by developers through unit and integration tests.

Going forward, the market envisions these apps impacting the native app architecture in such a way that there will be only one type of app that can seamlessly run on both browsers and mobile devices with a single implementation – that will require a heavier lift and re-work.

 

Testers

For testers, as with every new implementation, new tests (manual and automated) need to be developed, executed, and fit into the overall pipeline.

PWAs, in particular, introduce some unique use cases (a sketch automating the offline case follows the list below), such as:

  • No network operation
  • High performance 
  • Sensors based functionality (Location, Camera for AR/VR and more)
  • Cross-device functionality (like in Responsive, the experience should be the same regardless of the screen size/HW etc.)
  • Adhering to the design and checklist required by Google and soon Apple
  • Accessibility is always a need
  • Security of these apps (with and without being connected to the network)
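
For example, the “no network operation” case above can be automated with a headless browser. The sketch below uses Puppeteer’s setOfflineMode; the URL and the checked selector are assumptions for illustration:

// offline.test.js – verify the app shell still renders with no network (sketch).
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // First visit online, so the service worker can cache the app shell.
  await page.goto('https://example-pwa.app', { waitUntil: 'networkidle0' });

  // Now simulate losing connectivity and reload.
  await page.setOfflineMode(true);
  await page.reload({ waitUntil: 'networkidle0' });

  const title = await page.$eval('h1', el => el.textContent); // assumed element on the cached shell
  console.log(title ? 'App shell served offline: ' + title : 'Offline shell missing!');

  await browser.close();
})();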

Business

For the business, the new app type should help increase end-user engagement with the business. Having a web application that is richer in functionality, performs fast, and can be “always on” through an easy launch from the customer’s device home screen should, by definition, increase usage and move the needle for the business. My assumption is that large enterprises are already looking into these types of apps as the next-gen RWD apps.

End Users

At the end of the day, all products aim to get greater engagement with customers and beat the competition. Obviously, if end users understand the value of these apps and can “feel” it in their day-to-day activities, this will be a clear win-win situation for both the organization and the customer.

To assure the end-user experience Google envisioned when first launching this technology three years ago (2015), dev and test teams should continue their continuous testing activities and make sure they are covering sufficient platforms, features, and use cases between each release and each new release of a platform or device.

To conclude this blog, I highly recommend watching the short video and reading the blog from Mozilla on how PWAs live within Firefox and how different an experience users get from such apps (see below: the Wego app within the Firefox browser in the background and the Wego PWA in the foreground).

Happy PWA Testing!

Could The Galaxy Note Be The Gadget of the Year?

A Guest Blog Post by Alli Davis.

The Samsung Galaxy Note 8 is one of the best phones that the smartphone manufacturer has in its lineup. The Note 8 is revered for its large and gorgeous-looking display, fast performance, and improved S-Pen, among other features. Despite its release in the latter half of 2017, the Samsung Galaxy Note 8 has already received gadget of the year awards and other accolades. Here’s what the awarding bodies have to say about the Samsung Galaxy Note 8.

Exhibit Tech Awards

The Note 8 received the Flagship Smartphone of the Year award at the Exhibit Tech Awards in India. It received the award for its screen size and battery life, as well as a great camera. The award is the most recent one for the Note 8, having come in late December of 2017.

Considering the place Samsung was with the Note 7, the Note 8 is nothing short of an amazing comeback. After the previous iteration’s battery issues, some reports suggested that the company might be considering ending the Galaxy line with the 7. But coming out with the next generation 8 proved to be the right move.

India Mobile Congress

Just over one month after the Samsung Galaxy Note 8 came out in August, the India Mobile Congress awarded the device its Gadget of the Year honor. The inaugural event’s awards honor innovations and initiatives designed to better the ICT ecosystem. The ICT ecosystem refers to everything that goes into making a technological environment for a business or government. This includes things like policies, processes, applications, and of course technologies.

Samsung introduced the Galaxy Note 8 in the Indian marketplace as an effort to give consumers the opportunity to do more with their smartphones. Upon receiving the award, Asim Warsi, the Senior VP of Mobile Business at Samsung India confirmed the effort to improve lives in India. Introducing the phone into the premium sector of the market has only strengthened the company as a leader in that category. It leads the way not only with a great screen and camera system but also with Samsung Pay and Samsung Knox, the security feature of the phone.

MySmartPrice Mobile Of The Year Awards

The MySmartPrice awards also honored Samsung’s Galaxy Note 8 in December 2017. The device received MySmartPrice’s Flagship Smartphone of the Year 2017 award as the most feature-packed phone in this category. Its 6.3-inch OLED screen and camera features captured the attention of people for the award. It was slightly edged out by the iPhone X and Google Pixel 2 in areas such as the screen and camera, but the phone manages to hold its own among the stiff competition in the marketplace.

The Galaxy Note’s Snapdragon processor makes it a very good option for day-to-day use, according to reviews, and it performs relatively well in low-light conditions. The reduction in price also makes it stand out amongst the competition. In the higher-end phone market, Samsung prices itself quite well for the features it offers.

At a time when Samsung could have ended the Galaxy Note line forever after the Note 7’s battery issue, the company persevered with its Note 8 and the risk paid off. In a highly competitive marketplace, Samsung still produces a Note that is productive, powerful, and innovative with features and a price that a wide variety of customers can enjoy. With all of the Accolades that the Samsung Galaxy Note 8 has received, the future looks bright for future iterations of the phone to come.

Alissa B. Davis is a freelance writer who has a passion for writing about technology and stories about her community. Read her latest stories at http://alissabdavis.com/

Continuous Testing Principles for Cross Browser Testing and Mobile Apps

The majority of organizations are already deep into Agile practices, with a goal of becoming DevOps and continuous delivery (CD) compliant.

While some may say that a maximum percentage of test automation would bring these organizations toward DevOps, it takes more than just test automation.

To mature DevOps practices, a continuous testing approach needs to be in place, and it is more than automating functional and non-functional testing. Test automation is obviously a key enabler to be agile, release software faster and address market events, however, continuous testing (CT) requires some additional considerations.

Tricentis defines CT accordingly:

CT is the process of executing automated tests as part of the software delivery pipeline in order to obtain feedback on the business risks associated with a software release candidate as rapidly as possible. It evolves and extends test automation to address the increased complexity and pace of modern application development and delivery.

The above suggests that a CT process would include a high degree of test automation, with a risk-based approach and a fast feedback loop back to developers upon each product iteration.

How to Implement CT?

  • A risk-based approach means sufficient coverage of the right platforms (browsers and mobile devices) – such platform coverage reduces business risk and assures a high user experience. Such platform coverage carries continuous maintenance requirements as the market changes.
  • Continuous testing needs automated end-to-end testing that integrates with existing development processes while excluding errors and enabling continuity throughout the SDLC. That principle can be broken down as follows:
    • Implement the “right” tests and shift them into the build process, to be executed upon each code commit. Only reliable, stable, and high-value tests qualify to enter this CT test bucket (see the tagging sketch after this list).
    • Assure the CT test bucket runs within only one CI –> in CT, there is no room for multiple CI channels.
    • Leverage reporting and analytics dashboards to reach “smart” testing decisions and actionable feedback that support a continuous testing workflow. As the product matures, tests need maintenance, and some may be retired and replaced with newer ones.
  • A stable lab and test environment is key to ongoing CT processes. The lab should be at the heart of your CT, and should support the above platform coverage requirements, as well as the CT test suite and the test frameworks used to develop those tests.
  • Utilize, if possible, artificial intelligence (AI) and machine learning (ML)/deep learning (DL) solutions to better optimize your CT test suite and shorten the overall release activities.

  • Continuous testing is seamlessly integrated into the software delivery pipeline and DevOps toolchain – as mentioned above, regardless of the test frameworks, IDEs, and environments (front-end, back-end, etc.) used within the DevOps pipeline, CT should pick up all relevant testing (unit, functional, regression, performance, monitoring, accessibility, and more) and execute it in parallel and in an unattended fashion, to provide a “single voice” for a go/no-go release – something that happens every 2-3 weeks.
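
One lightweight way to implement the “CT test bucket” idea from the list above is to tag the certified tests and let the single CI job run only that subset. The sketch below assumes a Mocha-based JS suite and uses its grep filter; the tag name and the test itself are illustrative:

// checkout.spec.js – tag the tests that earned their place in the CT bucket.
const assert = require('assert');

describe('Checkout flow @ct', function () {          // '@ct' marks certified, pipeline-ready tests
  it('calculates the cart total @ct', function () {
    const cart = [{ price: 10 }, { price: 5 }];
    const total = cart.reduce((sum, item) => sum + item.price, 0);
    assert.strictEqual(total, 15);
  });
});

// In the CI job, run only the certified bucket (unattended, fast feedback):
//   npx mocha --grep "@ct" --reporter spec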

Lastly, for a CT practice to work time after time, the above principles need to be continuously optimized, maintained, and adjusted as things change, either within the product roadmap or in the market.

Happy CT!

Mobile, Cross Browser Testing, DevOps and Continuous Testing Trends and Projections for 2018

As we are about to wrap up 2017, it’s the right time to get ready for what’s expected next year in the mobile, cross-browser testing, and DevOps landscape.

To categorize this post, I will divide the trends into the following buckets (there may be a few more points, but I believe the ones below are the most significant):

  • DevOps and Test Automation on Steroids Will Become Key for Digital Winners
  • Artificial Intelligence (AI) and Machine Learning (ML) Tools Alignment as Part of Smarter Testing Throughout the Pipeline
  • IoT and Digital Transformation Moving to Prime Time

 

DevOps and Automation on Steroids

In 2017, we saw tremendous adoption of more agile methods, ATDD, BDD, and organizations leaving legacy tools behind in favor of faster, more reliable, and agile-ready testing tools – ones that can fit the entire continuous testing effort, whether carried out by Dev, BA, Test, or Ops.

In 2018, we will see the above grow to a higher scale, with more manual and legacy tool skills transforming into more modern ones. The growth in continuous testing (CT), continuous integration (CI), and DevOps will also translate into a much shorter release cadence as a bridge towards real continuous delivery (CD).

 

Related to the above, to be ready for the DevOps and CT trend, engineers need to become more deeply familiar with tools like Espresso, XCUITest, EarlGrey, and Appium on the mobile front, and with open-source web-based frameworks like Google’s headless project Puppeteer, Protractor, and other WebDriver-based frameworks.

In addition, teams should optimize their test automation suites to include more API and non-functional testing, as the UX aspect becomes more and more important.

Shifting as many tests as possible left and right is not a new trend, requirement, or buzzword – nothing has changed in my mind about the importance of this practice: the more you can automate and cover earlier, the easier it will be for the entire team to overcome issues, regressions, and unexpected events that occur in the project life cycle.

AI, ML, and Smarter Test Automation

While many vendors are seeking tools that can optimize their test automation suites and shorten overall execution time on the “right” platforms, the two terms AI and ML (or deep learning) are still unclear to many tool vendors and are being used in ways that do not always mean AI or ML 🙂

The end goal of such solutions is very clear, and the problem they aim to solve is real: long testing cycles on plenty of mobile devices, desktop browsers, IoT devices, and more generate a lot of data to analyze and, as a result, slow down the DevOps engine. Efficient mechanisms and tools that can crawl through the entire test code and understand which tests are the most valuable and which platforms are the most critical to test on – due to either customer usage or a history of issues – can clearly address this pain.

Another angle, or goal, of such tools is to continuously provide more reliable and faster test code generation. Coding takes time, requires skills, and varies across platforms. Having a “working” ML/AI tool that can scan through the app under test and generate a robust page object model and functional test code that runs on all platforms, as well as “responds” to changes in the UI, can really speed up TTM for many organizations and focus teams on the important SDLC activities, as opposed to forcing Dev and Test to spend precious time on test code maintenance.

IOT and The Digital Transformation

In 2017, Google, Apple, Amazon, and other technology giants announced a few innovations around digital engagement. To name a few: better digital payments, better digital TV, AR and VR development APIs, and new secure authentication through Face ID. IoT this year hasn’t shown a huge leap forward; however, what I did notice was that for specific verticals like healthcare and retail, IoT started serving a key role in their digital user engagement and digital strategy.

In 2018, I believe the market will see an even more advanced wave in the overall digital landscape, with Android and Apple TV, IoT devices, smart watches, and other digital interfaces becoming more standard in the industry, requiring enterprises to re-think and re-build their entire test labs to fit these new devices.

Such a trend will also force test engineers to adapt to the new platforms and re-architect their test frameworks to support more of these screens, either in one script or several.

Some insights on testing IoT, specifically in the healthcare vertical, were recently presented by my colleague Amir Rozenberg – I recommend reviewing the slides below.

https://www.slideshare.net/AmirRozenberg/starwest-2017-iot-testing/ 

 

Bottom Line

Do not immediately change whatever you do today, but validate whether what you have right now is future-ready and can sustain what’s coming in the near future, as mentioned above.

If DevOps is already in practice in your organization, fine – make sure you can scale it, shorten release times, increase test and platform automation coverage, and optimize your overall pipeline through smarter techniques.

The AI and ML buzz is really happening; however, the market needs to properly define what it means to introduce these into the SDLC, and what success would look like for those who consider leveraging them. From a landscape perspective, these tools are not yet mature and ready for prime time, which leaves more time to properly get ready for them.

Happy New 2018 to My Followers.