Expediting Web Testing with Puppeteer and Instant Test Automation Technology

The digital landscape has advanced significantly over the past one to two years. The new normal in web applications includes a shift toward progressive web apps (PWAs), advanced responsive websites, and Flutter apps.

Ensuring both functional and API testing of such advanced web applications is becoming harder and harder. While there are solid test automation frameworks like Selenium and Cypress, they cover only a subset of what is needed to address the full range of quality concerns across multiple environments.

New Normal: Instant Test Automation

To address the testing challenge over time, across multiple environments, and with large and dynamic data, tools need to become more intelligent and use AI/ML to track trends, predict issues before they become serious, and guide developers and testers in real time.

An instant test automation technology that can leverage a standard test automation framework such as Puppeteer to auto-generate both functional and API tests on the fly, and keep them current as things change, is a game changer for app developers. It provides alerts and insights in real time that the team can act upon, helping them move much faster.

The way such a solution works is by modeling the website under test, auto-generating appropriate tests based on traffic and AI, and running them to provide the relevant insights.

Setting up such a solution is easy and needs to be done only once. Such a tool also fits systems that are built atop cloud-native architectures, containers, and Amazon ECS.

Benefits of Shifting into Instant and Intelligent Test Automation

With the need to shift left, release faster with confidence, and handle large and complex environments, teams can benefit from a self-generating testing solution that covers more ground than a typical code-based framework would. This does not mean there is no benefit or need for frameworks like Selenium and others; however, with tools like UP9, teams can complement code-based functional tests with an additional layer of quality validation that touches reliability, APIs, services, security, and much more.

Case Study: Quick Tour of Puppeteer & UP9

I took the Google Puppeteer framework and connected it with UP9 to try generating a model and test automation assets on my local machine.

After following the basic NPM installation instructions, it was easy to launch a local browser and record traffic using the tool.

From my terminal I would then run this command (‘my.new.workspace’ could be any name that the developer selects):

up9 tap:start my.new.workspace

Once I ran that command, UP9 used Puppeteer to launch a Chrome browser that I could use to navigate to my website while it recorded all traffic, actions, API calls, and other services. Upon closing the browser, the tool processed all of the flows and generated the model.
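To give a sense of what such traffic recording involves under the hood, here is a minimal Puppeteer sketch of my own (purely illustrative, not UP9's actual implementation) that launches Chrome and logs every request and response while you browse:

// record-traffic.js – illustrative only; the instant-automation tooling does this (and much more) for you
const puppeteer = require('puppeteer');

(async () => {
  // Launch a visible Chrome instance so a user can browse the site manually
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();

  // Every outgoing request (method + URL) is raw material for an API model
  page.on('request', request => {
    console.log('>> ' + request.method() + ' ' + request.url());
  });

  // Every response status could later be turned into contracts, tests, and mocks
  page.on('response', response => {
    console.log('<< ' + response.status() + ' ' + response.url());
  });

  await page.goto('https://www.perfecto.io'); // any site under test
  // ...browse manually; the recording ends when the browser is closed
})();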

Here is the auto generated workspace that was created based on a quick tour I ran with UP9 on the Perforce and Perfecto websites:

The workspace consists of a few views:

  1. The generic workspace with the high-level services model (diagram)
  2. The contracts that were used in my recording
  3. A detailed traffic analysis with all the API calls
  4. Tests that were created, and mocks to be used across different environments

These views were constructed from the service traffic captured by the UP9 observers that were assigned to my designated workspace.

As part of my journey, I landed on the Perfecto free-trial website, and the tool was able to generate the API calls (request/response) and allow me to validate that it was working as expected (see below).

If I wanted to also run my auto-generated tests, the UP9 portal (fully SaaS and web-based) includes a tests area from which I could launch them.

Bottom Line

As the DevOps market matures toward more intelligent testing using AI and ML, and as the systems under test (SUTs) are also progressing, teams need to realize that their processes could be improved dramatically with such technologies. Developer productivity together with consistent app quality has been top of mind for CIOs and IT managers for a long while, and with the recent advancements in the market, teams can enjoy a new era of smarter ways of working.

Resolving The Quality Visibility of Continuous Testing Across The DevOps Pipeline Environments

Guest Blog Post by: Tzvika Shahaf, Director of Product Management at Perfecto & A Digital Reporting and Analysis Expert

Intro

One of the DevOps challenges in today's journey toward high product release velocity and quality is keeping track of CI job/test stability and health across different environments.

Basically, this lack of visibility into the DevOps delivery pipeline is one of the top reasons bugs slip to production.

Real Life Example

Recently, I met a director of DevOps at a large US-based enterprise who shared with me one of his challenges in this respect.

At the beginning of our meeting he indicated that his organization's testing activity lacks a view of the feature branches under each team's responsibility. This gap creates blind spots, where even the release manager struggles to assemble a reliable picture of the quality status.

The release manager and the QA lead are responsible for verifying, after each and every build cycle execution, that the failures that occurred are not critical, and for approving the merge to the master branch accordingly (while also filing a defect in Jira for bugs/issues that weren't fixed). The most relevant view for these personas is a suite-level list report, although the QA lead is still busy drilling down to the individual test report level, since he is also interested in understanding the failure root cause analysis (RCA).

As part of the triage process, the team wants to see the trending history of a job: under each build, what was the overall test result status? The team is mainly interested in understanding whether an issue is an existing defect or a new one. In addition, they look for an option to comment during the triage process (think of it as an offline action taken after the execution).

Focusing on the Problem

So far so good, right? But here's the problem: the work conducted by each team is visible only in that team's own CI view. There's no data aggregation to display a holistic overview of the CI pipeline health and trends.

Each team works on a different CI branch, and the other teams in the SDLC have no visibility into what happened before or what is happening now.

Even when there's a bug, the teams are required to file the defect in a different Jira project, so the bug-fixing process adds more inefficiency, and hence time, to the release process.

When the process is broken as described above, each new piece of functionality at the system test level or stage is merged without all failures being inspected (lack of visibility from within the CI).

Jenkins will ignore the system test results and merge even if there’s a failure.

The Right Approach to Solving the Problem

The desired visibility from a DevOps perspective is to cover the Jenkins job/build directly from the testing dashboard, across all branches, in order to understand what changed in the specific build that failed the tests: which changes were made to the source code, etc.
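As a rough illustration of the data such a view would need, the sketch below (hypothetical job and server names; assumes Node 18+ for the built-in fetch) pulls the result and change set of a build directly from the Jenkins JSON API:

// build-changes.js – sketch only; point it at your own Jenkins job and build
const JENKINS = 'https://jenkins.example.com'; // assumption: your Jenkins base URL
const JOB = 'my-feature-branch-job';           // assumption: the job under inspection

(async () => {
  const res = await fetch(`${JENKINS}/job/${JOB}/lastCompletedBuild/api/json`);
  const build = await res.json();

  console.log(`Build #${build.number} result: ${build.result}`);

  // Freestyle jobs expose "changeSet"; pipeline jobs expose "changeSets"
  const changeSets = build.changeSets || (build.changeSet ? [build.changeSet] : []);
  for (const set of changeSets) {
    for (const item of set.items || []) {
      console.log(`${item.commitId} ${item.author ? item.author.fullName : ''}: ${item.msg}`);
    }
  }
})();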

What if these teams had a CI overview that captured all testing data, including:

  1. Project name & version
  2. Job name/build number
  3. Feature branch name or environment description (Dev, Staging, Production, etc.)
  4. Scrum team name – optional
  5. Product type – optional

Obviously, items 1-3 are a must in such a solution: when displayed in the dashboard UI, they give teams maximum visibility into the relevant module the user is referring to when a bug is found, and they are also part of the standard DevOps process. To double-click on the visibility and efficiency point, such options can significantly narrow the view of the entire dashboard for each team lead or member, and help them focus only on their relevant feature branches.

When the QA lead reviews the CI dashboard, he or she can mark a group of tests and record the actual failure reason, which can sometimes be a bug, a device/environment issue, or something else.

Feel free to reach out to me if you run into such issues, or have any insights or comments on this point of view – Tzvika's Twitter Handle

Thank You

Tzvika Shahaf

Eliminating Mobile Test Automation Flakiness and More

Mobile testing is, by definition, an unstable, flaky, and unpredictable activity.

Even when you think you have covered all corners and created a “stable” environment, your test cycle often gets stuck because of one or a few items.

In this post, I’ll try to identify some of the key root causes for test automation flakiness, and suggest some preventive actions to eliminate them.

What Can Block Your Test Automation Flow?

From ongoing experience, the key items that most often block test automation of mobile apps are the following:

  • Popups – security, available OS upgrades, login issues, etc.
  • Ready state of DUTs – the test meets the device in the wrong state
  • Environment – device battery level, network connectivity, etc.
  • Tools and test framework fit – are you using the right tool for the job?
  • Use of the “right” object identifiers, POM, and automation best practices
  • Automation at scale – what to automate, and on which platforms?

All of the above contribute in one way or another to the end-to-end test automation execution.

We can divide the above 6 bullets into 2 sections:

  1. Environment
  2. Best Practices

Solving The Environment Factor in Mobile Test Automation

To address the test environment's contribution to test flakiness, engineers need full control over the environment they operate in.

If the test environment and the devices under test (DUT) are not fully managed, controlled, and secured, the entire operation is at risk. In addition, the term “Test Environment Readiness” should reflect the following:

  1. Devices are always cleaned up prior to test execution, or are in a “state”/baseline known to the developers and testers.
  2. Repetitive, known popups such as security permissions, install/uninstall popups, OS upgrades, or other app-specific popups should be accounted for in the prerequisites of the test, or prevented proactively prior to execution.
  3. Network instability is often a key cause of unstable testing – engineers need to make sure the devices are connected to Wi-Fi or a cellular network before test execution starts. This can be done either as a prerequisite validation of the network or through more generic environment monitoring (a minimal pre-check sketch follows below).
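A minimal readiness gate in this spirit (a sketch assuming a locally connected Android device and adb available on the PATH) could look like this:

// device-precheck.js – illustrative environment-readiness gate, run before the suite starts
const { execSync } = require('child_process');

const adb = cmd => execSync('adb shell ' + cmd).toString().trim();

// 1. Battery level – fail fast if the device is too drained to finish the run
const batteryDump = adb('dumpsys battery');
const level = parseInt((batteryDump.match(/level:\s*(\d+)/) || [])[1], 10);
if (isNaN(level) || level < 30) {
  throw new Error('Device battery too low for a stable run: ' + level + '%');
}

// 2. Wi-Fi state – "1" means Wi-Fi is enabled on the device
const wifiOn = adb('settings get global wifi_on');
if (wifiOn !== '1') {
  throw new Error('Wi-Fi is disabled on the device; aborting the run');
}

console.log('Device ready: battery ' + level + '%, Wi-Fi on');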

 

Following Best Practices

In previous blogs, I addressed the importance of selecting the right testing frameworks, IDEs as well as leveraging the cloud as part of test automation at scale. After eliminating the risks of the test environment in the above section, it is important to make sure that both developers and test automation engineers follow proper guidelines and practices for their test automation workflow.

Since testing has shifted left toward the development team, it is important that both dev and test align on a few things:

  1. What to automate?
  2. On what platforms to test?
  3. How to automate (best practices)?
  4. Which tools should be used to automate?
  5. What goes into CI, and what is left outside?
  6. What is the role of Manual and Exploratory testing in the overall cycle?
  7. What is the role of Non-Functional testing in the cycle?

The above points (a partial list) cover some fundamental questions that each individual should keep asking to ensure that their team is heading in the right direction.

Each of the above bullets can be attributed to at least one, if not many, best practices.

  1. To address the key question of what to automate, here's a great tool (see screenshot below) provided by Angie Jones. In her suggested tool, each test scenario is scored against a set of metrics; the highest-scoring test cases are great candidates for automation, while the lowest-scoring ones can obviously be skipped.
  2. To address the second question on platform selection, teams should monitor their web and mobile traffic on an ongoing basis, perform market research, and learn from existing market reports/guides that address test coverage.
  3. Answering the question “How to automate” is a big one :). There is more than one thing to keep in mind, but in general, automation should be repetitive, stable, and time- and cost-efficient – if something prevents one or more of these objectives, it's a sign that you're not following best practices. Some best practices revolve around using proper object identifiers; others concern building the automation framework with proper tags and catches so that when something breaks it is easy to address, and more.
  4. The question about tools and test frameworks is again a big one. Answering it well depends on the project requirements and complexity, the application type (native, web, responsive, PWA), and the test types (functional, non-functional, unit). In the end, it is important to have a mix of tools that can “play” nicely together and provide unique value without stepping on each other.
  5. Tests that enter CI should be picked very carefully. Here, it is not about the quantity of tests but about the quality, stability, and value these tests bring, with a focus on fast feedback. If a test requires heavy environment setup, is flaky by nature, or takes too much time to run, it might not be wise to include it in the CI.
  6. Addressing the various testing types in questions 6 and 7 depends on the project objectives; however, there is clear value in each of the following test types for mobile app quality assurance:
    1. Accessibility and performance testing provide key UX indicators about your app and should be automated as much as possible.
    2. Security testing is commonly overlooked by many teams and should be covered through code analysis, OWASP validations, and more.
    3. Exploratory, manual, and crowd testing provide another layer of test coverage and insight into your overall app quality; hence they should be in your test plan and distributed throughout the DevOps cycle.

Happy Testing

Continuous Testing Principles for Cross Browser Testing and Mobile Apps

The majority of organizations are already deep into Agile practices, with a goal of becoming DevOps and continuous delivery (CD) compliant.

While some may say that a maximal percentage of test automation would bring these organizations toward DevOps, it takes more than just test automation.

To mature DevOps practices, a continuous testing approach needs to be in place, and it is more than automating functional and non-functional testing. Test automation is obviously a key enabler to be agile, release software faster and address market events, however, continuous testing (CT) requires some additional considerations.

Tricentis defines CT as follows:

CT is the process of executing automated tests as part of the software delivery pipeline in order to obtain feedback on the business risks associated with a software release candidate as rapidly as possible. It evolves and extends test automation to address the increased complexity and pace of modern application development and delivery

The above suggests that a CT process includes a high degree of test automation, a risk-based approach, and a fast feedback loop back to the developer upon each product iteration.

How to Implement CT?

  • A risk-based approach means sufficient coverage of the right platforms (browsers and mobile devices) – such platform coverage reduces business risk and assures a high-quality user experience. Note that platform coverage carries continuous maintenance requirements as the market changes.
  • Continuous testing needs automated end-to-end testing that integrates with existing development processes while excluding errors and enabling continuity throughout the SDLC. That principle can be broken down as follows:
    • Implement the “right” tests and shift them into the build process, to be executed upon each code commit. Only reliable, stable, and high-value tests would qualify to enter this CT test bucket.
    • Assure the CT test bucket runs within a single CI –> in CT, there is no room for multiple CI channels.
    • Leverage reporting and analytics dashboards to reach “smart” testing decisions and actionable feedback, that support a continuous testing workflow. As the product matures, tests need maintenance, and some may be retired and replaced with newer ones.
  • A stable lab and test environment is key to an ongoing CT process. The lab should be at the heart of your CT and should support the platform coverage requirements above, as well as the CT test suite and the test frameworks used to develop those tests.
  • Where possible, utilize artificial intelligence (AI) and machine learning (ML)/deep learning (DL) solutions to better optimize your CT test suite and shorten the overall release activities.

  • Continuous testing is seamlessly integrated into the software delivery pipeline and DevOps toolchain – as mentioned above, regardless of the test frameworks, IDEs, and environments (front end, back end, etc.) used within the DevOps pipeline, CT should pick up all relevant testing (unit, functional, regression, performance, monitoring, accessibility, and more), execute it in parallel and in an unattended fashion, and provide a “single voice” for a go/no-go release – which happens every 2-3 weeks.

Lastly, for a CT practice to work time after time, the above principles need to be continuously optimized, maintained, and adjusted as things change, whether within the product roadmap or in the market.

Happy CT!

Mobile, Cross Browser Testing, DevOps and Continuous Testing Trends and Projections for 2018

As we are about to wrap up 2017, it's the right time to get ready for what's expected next year in the mobile, cross-browser testing, and DevOps landscape.

To categorize this post, I will divide the trends into the following buckets (there may be a few more points, but I believe the ones below are the most significant):

  • DevOps and Test Automation on Steroids Will Become Key for Digital Winners
  • Artificial Intelligence (AI) and Machine Learning (ML)/ Tools alignment as part of Smarter Testing throughout the pipeline
  • IoT and Digital Transformation Moving to Prime Time

 

DevOps and Automation on Steroids

In 2017, we saw tremendous adoption of more agile methods, ATDD, and BDD, with organizations leaving legacy tools behind in favor of faster, more reliable, agile-ready testing tools – ones that can support the entire continuous testing effort, whether carried out by Dev, BA, Test, or Ops.

In 2018, we will see the above grow to a larger scale, with more manual and legacy-tool skills transforming into modern ones. The growth in continuous testing (CT), continuous integration (CI), and DevOps will also translate into a much shorter release cadence as a bridge toward real continuous delivery (CD).

 

Related to the above, to be ready for the DevOps and CT trend, engineers need to become more deeply familiar with tools like Espresso, XCUITest, EarlGrey, and Appium on the mobile front, and with open-source web-based frameworks like the headless Google project called Puppeteer, Protractor, and other WebDriver-based frameworks.

In addition, the test automation suite should be optimized to include more API and non-functional testing, as the UX aspect becomes more and more important.

Shifting as many tests as possible left and right is not a new trend, requirement, or buzzword – nothing has changed in my mind about the importance of this practice: the more you can automate and cover earlier, the easier it will be for the entire team to overcome issues, regressions, and unexpected events that occur in the project life cycle.

AI, ML, and Smarter Test Automation

While many vendors are looking for tools that can optimize their test automation suite and shorten overall execution time on the “right” platforms, the two terms AI and ML (or deep learning) are still unclear to many tool vendors and are used from varying perspectives that do not always really mean AI or ML 🙂

The end goal of such solutions is very clear, and the problem they aim to solve is real –> long testing cycles on plenty of mobile devices, desktop browsers, IoT devices, and more generate a lot of data to analyze, and as a result they slow down the DevOps engine. An efficient mechanism and tools that can crawl through the entire test code, understand which tests are the most valuable, and determine which platforms are the most critical to test on – due to customer usage, history of issues, etc. – can clearly address such pain.

Another angle, or goal, of such tools is to continuously provide more reliable and faster test code generation. Coding takes time, requires skills, and varies across platforms. Having a “working” ML/AI tool that can scan the app under test and generate a robust page object model and functional test code that runs on all platforms, and that “responds” to changes in the UI, can really speed up time to market for many organizations and focus teams on the important SDLC activities, as opposed to forcing Dev and Test to spend precious time on test code maintenance.

IoT and The Digital Transformation

In 2017, Google, Apple, Amazon, and other technology giants announced a few innovations around digital engagement. To name a few: better digital payments, better digital TV, AR and VR development APIs, and new secure authentication through Face ID. IoT hasn't shown a huge leap forward this year; however, what I did notice was that for specific verticals like healthcare and retail, IoT started serving a key role in their digital user engagement and digital strategy.

In 2018, I believe the market will see an even more advanced wave in the overall digital landscape, where Android and Apple TV, IoT devices, smart watches, and other digital interfaces become more standard in the industry, requiring enterprises to rethink and rebuild their entire test lab to fit these new devices.

Such a trend will also force test engineers to adapt to the new platforms and re-architect their test frameworks to support more of these screens, whether in one script or several.

Some insights on testing IoT, specifically in the healthcare vertical, were recently presented by my colleague Amir Rozenberg – I recommend reviewing the slides below.

https://www.slideshare.net/AmirRozenberg/starwest-2017-iot-testing/ 

 

Bottom Line

Do not immediately change whatever you do today, but validate whether what you have right now is future-ready and can sustain what's coming in the near future, as mentioned above.

If DevOps is already in practice in your organization, fine – make sure you can scale DevOps, shorten release time, increase test and platform automation coverage, and optimize your overall pipeline through smarter techniques.

The AI and ML buzz is really happening; however, the market needs to properly define what it means to introduce these into the SDLC, and what success would look like for those who consider leveraging them. From a landscape perspective, these tools are not yet mature and ready for prime time, so there is still time to get ready for them properly.

Happy New 2018 to My Followers.

Complementing Cross-Browser Testing with Headless Unit Testing Solutions

Nothing new in the land of cross-browser testing. Selenium, as the underlying API layer, serves leading frameworks including WebDriverIO, Protractor (Angular-based testing), NightwatchJS, RobotJS, and many others.

For web application developers who require fast feedback after a code commit or bug fix, there are various testing options. Some will quickly test manually on a set of local or cloud-based VMs, some will develop unit tests (qUnit, etc.), but there are also very mature cross-browser testing solutions that add more layers of coverage and insight in an automated and easy way.

In a recent eBook that I developed, I cover 10 emerging cross-browser testing tools, together with a set of considerations around how to choose the right one, or the right mix of them.

As can be seen in the 10 tools shown above, there is a mix of unit and E2E functional testing tools, mostly JavaScript-based.

Developers who would like to include in their quick post-commit sanity a validation of how long the site takes to load can easily add a PhantomJS-based test to their CI post-build acceptance testing and get that visibility after each successful build – then match the result against a benchmark and make decisions.
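For illustration, a load-time check in that spirit (my own variation on PhantomJS's well-known load-speed example, with an arbitrary 5-second benchmark) could look like this:

// loadtime.js – run with: phantomjs loadtime.js http://www.nfl.com
var page = require('webpage').create();
var system = require('system');
var url = system.args[1] || 'http://www.nfl.com';

// Collect JavaScript errors thrown while the page loads
page.onError = function (msg) {
  console.log('PAGE ERROR: ' + msg);
};

var start = Date.now();
page.open(url, function (status) {
  if (status !== 'success') {
    console.log('FAIL: could not load ' + url);
    phantom.exit(1);
  }
  var elapsed = Date.now() - start;
  console.log('Loaded ' + url + ' in ' + elapsed + ' ms');
  // Compare against a benchmark, e.g. fail the build above 5 seconds
  phantom.exit(elapsed > 5000 ? 1 : 0);
});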

In a quick test that I ran on the NFL.com website, I was able not only to detect a slow load of 10 seconds, but also to identify a long list of errors while the page loaded.

Another powerful capability tools like PhantomJS offer is the ability to both capture a specific rendering of a web page at a pre-defined viewport and generate a page HAR file for network traffic analysis (I am aware that it is not the newest tool and that Google already provides a newer alternative, but this is still a valuable, free open-source tool that can add coverage capabilities to any web development team).

So if, as an example, the load time with errors above turns on a red light for that site, then with two simple tests – which, by the way, PhantomJS provides in its starter examples – the developer can address the two use cases above: page rendering screenshots and HAR file generation.
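The rendering part is essentially the following (a simplified version of PhantomJS's starter example, with an arbitrary desktop viewport):

// render.js – run with: phantomjs render.js
var page = require('webpage').create();

// Pre-defined viewport so the capture reflects a specific desktop resolution
page.viewportSize = { width: 1280, height: 800 };

page.open('http://www.nfl.com', function (status) {
  if (status === 'success') {
    page.render('nfl-homepage.png'); // writes the screenshot next to the script
  }
  phantom.exit();
});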

The result of the above snippet is the screenshot below:

The HAR file creation is based on one of PhantomJS's example code samples, and the resulting file can be opened with the HTTP Archive Viewer add-on for Chrome (it can be done simply with other HAR viewers as well).
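In essence, that sample hooks PhantomJS's resource callbacks and serializes them into HAR format; a stripped-down sketch of the same idea (not the full sample, which also fills in timings, sizes, and headers) looks like this:

// har-sketch.js – simplified idea behind PhantomJS's HAR example
var fs = require('fs');
var page = require('webpage').create();
var resources = {};

page.onResourceRequested = function (req) {
  resources[req.id] = { request: req };
};

page.onResourceReceived = function (res) {
  if (res.stage === 'end' && resources[res.id]) {
    resources[res.id].response = res;
  }
};

page.open('http://www.nfl.com', function () {
  // Build a minimal HAR-like structure from the captured request/response pairs
  var entries = Object.keys(resources)
    .filter(function (id) { return resources[id].response; })
    .map(function (id) {
      var r = resources[id];
      return {
        request: { method: r.request.method, url: r.request.url },
        response: { status: r.response.status, contentType: r.response.contentType }
      };
    });
  fs.write('nfl.har.json', JSON.stringify({ log: { entries: entries } }, null, 2), 'w');
  phantom.exit();
});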

Bottom Line

You can download my latest eBook to learn more, but in general: leverage both powerful unit testing tools and traditional E2E tests, since they complement each other and each adds its own unique value – and it's free!

Happy Testing!

Trends in Cross Browser Testing and Web Development

Typically, I'll write a lot about mobile app testing, tools, trends, coverage, and the like.

In this blog, I actually wanted to share some up-to-date trends as I see them in the web landscape.

The web market has shifted a lot over the past years, alongside the mobile space. We see clear use of specific development languages, development frameworks, and of course specific test frameworks aimed at testing Angular, jQuery, Bootstrap, .NET, and other websites.

From a development language perspective, web front-end developers mostly use the following languages as part of their job:

Source: http://vintaytime.com/premium/top-programming-languages/

As a clear trend in web development, JavaScript is the leading language used by web developers. That's actually not a huge surprise, since if you look at the top frameworks used by these web developers, you will see quite a few that are based on JavaScript.

There is a recent trend of developers shifting to non-AngularJS web development frameworks like Aurelia, React, and Vue.JS, which are seeing growing usage and adoption due to considerations such as those listed below (a larger list of pros and cons is in Source 1 below). With this trend in mind, and as you'll read in my references below, the newer solutions are still not as complete as AngularJS.

  • Shorter learning curve
  • Simple to use, clean
  • Flexibility
  • Lightweight compared to others (less than half the size of AngularJS e.g.)
  • Better performing
  • Easy to integrate with other front-end stack tools
  • Responsive server-side rendering (Vue.JS supports it, reduces time for users to see rendered content)
  • SEO Friendly
  • Good documentation and Community Support
  • Good debugging capabilities

Source 1: https://www.slant.co/topics/4306/~angular-js-alternatives

Source 2: https://w3techs.com/technologies/details/js-angularjs/all/all

Now, that we have seen the leading web development languages, and frameworks used these days, let’s drill down into what test automation engineers are adopting.

Selenium is without a doubt the leader and the base for most frameworks; however, even in this space we see new and innovative test frameworks such as Casper.JS, TestCafe, Buster.JS, and Nightwatch.JS, together with the traditional WebdriverIO and of course Protractor.

If we examine the visual below (Source: NPM Trends), there is clear market dominance by Selenium and Protractor, which underneath its implementation uses Selenium WebDriver and supports the Jasmine and Mocha tools.

The advantage of tools like Protractor is that they much more easily support websites developed in various frameworks like AngularJS, Vue.JS, etc. This advantage allows test automation engineers to use them agnostically across multiple websites, regardless of the frameworks they are built with.
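As a small illustration (assuming Protractor 5+ with the default Jasmine setup; the site and title are arbitrary examples), flipping one flag lets the same framework drive a non-Angular site:

// non-angular.spec.js – Protractor spec against a plain (non-Angular) website
describe('plain website smoke test', function () {
  it('loads the homepage', async function () {
    // Tell Protractor not to wait for Angular on this site
    await browser.waitForAngularEnabled(false);
    await browser.get('https://www.perforce.com');
    expect(await browser.getTitle()).toContain('Perforce');
  });
});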

It is not as easy and rosy as I described above, but it does give a good head start when building the test automation foundation.

There are a few other players in that space that are aimed at specific unit testing and headless browser testing (Phantom.JS, Casper.JS, JSDom, etc.).

As I blogged in the past, from a test automation strategy perspective, teams might find it beneficial and more complete to leverage a set of test frameworks rather than using only one. If the aim is to have non-UI headless browser testing together with unit testing and also UI-based testing, then a combination of tools like Protractor, Casper.JS, and QUnit might be a valid approach.

I hope you find this post useful and that it helps you “swim” in the hectic tools landscape. As always, it is important to match the tool to the product requirements, development methodology (BDD, Agile, Waterfall, etc.), supported languages, and more.

Optimizing Android Test Automation Development

Now that we are a few weeks away from Google I/O, and we understand that the already complex Android landscape is becoming even more complex, let's explore a way Android teams can optimize and plan their test automation across the different platforms and devices.

In the past, I’ve written about the need to connect the 3 layers:

  • Application under test
  • Test code itself
  • Device/OS under test

This relates back to an old patent that I jointly submitted years ago, in the days of J2ME; I also wrote a chapter about it in my newly published book (The Digital Quality Handbook).

Problem Definition

Android OS families support different capabilities, and the gap grows from one Android SDK to the next. As an example, Android devices older than 6.0 cannot support Android Doze for battery usage optimization or App Shortcuts (see the example below from the Google Photos app). These differences introduce a challenge to dev and test teams that innovate and take advantage of these features, since the test code that exercises them needs to be targeted only at devices that can actually support them.

How can teams sustain a test automation suite that runs specifically on the right devices per supported features?

Proposed Approach

While I don't have a bulletproof magic pill that addresses every challenge arising from the above problem, I can certainly recommend the approach described below.

It is important to note that being aware of the problem is a step toward resolving it 🙂

Assess Your App and DUT:

  • Map the different features that your app supports or requires the users to grant permissions for
  • Examine your device test lab and identify which devices do and do not support these specific features

To manage the above, teams can leverage the following:

  • Use an existing ADB command that extracts the supported features from the connected device(s):
    • adb shell
      • pm list features

After running the above command, you will get an output that looks like the below …

Compare The Outputs

Once you know your DUTs' capabilities, as well as your app's features to be tested, you can run a simple output comparison and see what can and can't be tested. From that point, the optimization is mostly manual – you set up your test execution and CI in the lab accordingly. While it isn't simple enough, it still offers a sustainable approach, plus awareness for both dev and test teams that can be useful throughout development, debugging, and testing activities. In the visual below you can see a capabilities diff between a Samsung Note 5 on Android 7.0 (left column) and an older Samsung device running Android 5.x (right column). An immediate diff, out of a larger list that I have, shows the fingerprint functionality that is supported on the Note 5 but not on the other Samsung device. Such insight should be used when planning feature testing across these two devices (this is just one example).
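A quick way to automate that comparison (a sketch assuming two devices attached over adb, with placeholder serial numbers you would replace using the output of adb devices) is to diff the pm list features output per device:

// feature-diff.js – compare supported platform features between two attached devices
const { execSync } = require('child_process');

const DEVICE_A = 'note5-serial';          // placeholder serial for the newer device
const DEVICE_B = 'older-samsung-serial';  // placeholder serial for the older device

const listFeatures = serial =>
  new Set(
    execSync(`adb -s ${serial} shell pm list features`)
      .toString()
      .split('\n')
      .map(line => line.trim())
      .filter(line => line.startsWith('feature:'))
  );

const a = listFeatures(DEVICE_A);
const b = listFeatures(DEVICE_B);

// Features present on device A but missing on device B (e.g. fingerprint support)
const onlyOnA = [...a].filter(f => !b.has(f));
console.log('Supported only on device A:\n' + onlyOnA.join('\n'));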

Bottom Line

As Google continues to innovate and add more features, existing devices and test frameworks will find it hard to close the gaps, and that's a challenge teams need to be aware of, plan for, and optimize against, so that their release vehicles and velocity remain solid.

Happy Optimization!

Google Mobile Friendly With Perfecto and Quantum

Guest Blog Post by Amir Rozenberg, Senior Director of Product Management, Perfecto


Google recently announced “Mobile First Indexing”. From Google:

To make our results more useful, we’ve begun experiments to make our index mobile-first. Although our search index will continue to be a single index of websites and apps, our algorithms will eventually primarily use the mobile version of a site’s content to rank pages from that site, to understand structured data, and to show snippets from those pages in our results (Source).


More recently, they made the Google Mobile-Friendly tool and guidelines available. A very nice interactive version is available here (with images at the bottom of this post), and there's also an API which, thanks to Google, users can exercise before they write any code. Google also offers code snippets in several languages.

Notes:

  • Google takes a URL and renders it. If you run multiple executions in parallel, there's no point in sending the same URL from every execution, because the result would be the same.
  • Google returns basically “MOBILE_FRIENDLY” or not. I suggest setting the assert on that (a minimal call sketch follows after these notes).
  • The current API differs from the UI in that it only provides the mobile-friendly result (the UI also gives mobile and web page speed). Hopefully, Google adds that to the response 😉
  • This will probably not work for internal pages, as Google probably doesn't have a site-to-site secure connection with your network.
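For reference, a direct call to the API (a sketch assuming a Google API key in an environment variable and Node 18+ for the built-in fetch; field names reflect the public Mobile-Friendly Test API) boils down to:

// mobile-friendly.js – minimal sketch of calling Google's Mobile-Friendly Test API
const API_KEY = process.env.GOOGLE_API_KEY; // assumption: your API key lives in an env var

(async () => {
  const res = await fetch(
    'https://searchconsole.googleapis.com/v1/urlTestingTools/mobileFriendlyTest:run?key=' + API_KEY,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ url: 'http://www.nfl.com' })
    }
  );
  const result = await res.json();

  // As noted above, the response is essentially MOBILE_FRIENDLY or not – assert on that
  console.log(result.mobileFriendliness);
  if (result.mobileFriendliness !== 'MOBILE_FRIENDLY') {
    process.exitCode = 1;
  }
})();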

 

For developers and testers who do not have time, testing mobile friendliness repeatedly will probably simply not happen. That's why I integrated the Google Mobile-Friendly API into Quantum:

  • Added 2 Gherkin commands
// If you navigate directly to this page
Then I check mobileFriendly URL "http://www.nfl.com"
// If you got to this page through clicks
Then I check mobileFriendly current URL
  • Added the Gherkin command support (GoogleMobileFriendlyStepsDefs.java)
  • And the script example is pretty simple:
@Web
Feature: NFL validate

  @SimpleValidation
  Scenario: Validate NFL
    Given I open browser to webpage "http://www.nfl.com"
    Then I check mobileFriendly current URL
    Then I check mobileFriendly URL "http://www.nfl.com"
    Then I wait "5" seconds to see the text "video"

 

That's it. Next steps and ideas for future improvement:

  • You can automate the validation such that every click triggers a check with Google behind the scenes.

Just for fun, some more screenshots for detailed analysis for NFL.com:

 

[Screenshots: detailed Mobile-Friendly analysis results for NFL.com]

 

 

Criteria for Choosing The Right Open-Source Test Automation Tools

Last night I presented a session, together with my colleague Amir Rozenberg, at a local Boston meetup hosted by BlazeMeter.

The subject was the shift from legacy to open-source frameworks, the motivations behind it, and also the challenges of adopting open source without a clear strategy, especially in the digital space, which includes three layers:

  1. Open-source connectivity to a lab
  2. Open source and its test coverage capabilities (e.g., can an open-source framework support system-level testing, visual analysis, real environment settings, and more?)
  3. Open-source reporting and analysis capabilities

During the session, Amir also presented an open-source BDD/Cucumber-based test framework called Quantum (http://projectquantom.io).

Full presentation slides can be found here:

Happy Reading

Eran & Amir