Resolving The Quality Visibility of Continuous Testing Across The DevOps Pipeline Environments

Guest Blog Post by: Tzvika Shahaf, Director of Product Management at Perfecto and a Digital Reporting and Analysis Expert

Intro

One of the DevOps challenges in today's journey toward high release velocity and quality is keeping track of CI job and test stability and health across different environments.

This lack of visibility into the DevOps delivery pipeline is one of the top reasons bugs slip into production.

Real Life Example

Recently I met a director of DevOps at a large US-based enterprise who shared one of his challenges in this respect.

At the beginning of our meeting, he indicated that his organization's testing activity lacks a view of the feature branches that are under each team's responsibility. This gap creates blind spots, where even the release manager struggles to assemble a reliable picture of the quality status.

The release manager and the QA lead are responsible for verifying, after every build cycle execution, that the failures that occurred are not critical, and for approving the merge to the master branch accordingly (while also opening a Jira defect for bugs and issues that weren't fixed). The most relevant view for these personas is a suite-level list report, although the QA lead still drills down to the individual test report level because he also needs to understand the failure root cause analysis (RCA).

As part of the triage process, the team wants to see the trend history of a job: what the overall test result status was under each build. They mainly want to know whether an issue is an existing defect or a new one. In addition, they need the ability to comment during triage (think of it as an offline action taken after the execution).

Focusing on the Problem

So far so good, right? But here's the problem: the work conducted by each team is siloed within that team's CI view. There is no data aggregation to display a holistic overview of the CI pipeline's health and trends.

Each team works on a different CI branch, and throughout the SDLC the other teams have no visibility into what happened before or what is happening now.

Even when there's a bug, teams are required to file the defect in a different Jira project – so the bug-fixing process adds more inefficiency, and hence more time, to the release process.

When the process is broken as described above, new functionality at the system-test level or stage gets merged while not all failures have been inspected (a lack of visibility from within the CI).

Jenkins will ignore the system test results and merge even if there’s a failure.

The Right Approach to Solving The problem

The desired visibility from a DevOps perspective is to cover the Jenkins job/build directly from the testing dashboard, across all branches, in order to understand what changed in the specific build that failed the tests: which changes were made in the source code, and so on.

What if these teams had a CI overview that captured all testing data, including:

  1. Project name & version
  2. Job name /Build number
  3. Feature Branch name or Environment description (Dev, Staging, Production etc.)
  4. Scrum team name – optional
  5. Product type – optional

Obviously, items 1-3 are a must in such a solution: when displayed in the dashboard UI, they give teams maximum visibility into the module a user is referring to when a bug is found, and they are part of the standard DevOps process as well. Double-clicking on the visibility and efficiency point, such filters can significantly narrow the view of the entire dashboard for each team lead or member and help them focus only on their relevant feature branches.
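
To make this concrete, here is a minimal, hypothetical JavaScript sketch of how a test run could attach this CI context to its reports. reportClient.startExecution is an illustrative name rather than a specific vendor API, and the values shown are placeholders; BUILD_NUMBER and GIT_BRANCH are environment variables Jenkins exposes when it triggers the run.

// Hypothetical sketch: attach the CI context (items 1-5 above) to every execution so the
// reporting dashboard can group results by project, job, branch and team.
// reportClient.startExecution is an illustrative name, not a specific vendor API.
const executionContext = {
  project: { name: 'PaymentsApp', version: '2.4.0' },                  // 1. project name & version
  job: { name: 'payments-nightly', build: process.env.BUILD_NUMBER },  // 2. job name / build number (set by Jenkins)
  branch: process.env.GIT_BRANCH || 'feature/checkout-flow',           // 3. feature branch or environment
  team: 'checkout-scrum-team',                                         // 4. scrum team (optional)
  product: 'android-app'                                               // 5. product type (optional)
};

before(async () => {
  await reportClient.startExecution(executionContext); // reported once per suite run
});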

When the QA lead reviews the CI dashboard, he or she can mark a group of tests and name the actual failure reason, which can be a bug, a device/environment issue, or something else.

Feel free to reach out to me if you run into such issues, or if you have any insights or comments on this point of view – Tzvika's Twitter Handle

Thank You

Tzvika Shahaf

Eliminating Mobile Test Automation Flakiness and More

Mobile testing by definition is an unstable, flaky and unpredictable activity.

Even when you think you've covered all corners and created a "stable" environment, your test cycles often get stuck due to one or a few items.

In this post, I’ll try to identify some of the key root causes for test automation flakiness, and suggest some preventive actions to eliminate them.

What Can Block Your Test Automation Flow?

From ongoing experience, the key items that often block test automation of mobile apps are the following:

  • Popups – security, available OS upgrades, login issues, etc.
  • Ready state of DUTs – the test meets the device in the wrong state
  • Environment – device battery level, network connectivity, etc.
  • Tools and test framework fit – are you using the right tool for the job?
  • Use of the "right" object identifiers, page object model (POM), and automation best practices
  • Automation at scale – what to automate, and on what platforms?

All of the above contribute, in one way or another, to the end-to-end test automation execution.

We can divide the above six bullets into two categories:

  1. Environment
  2. Best Practices

Solving The Environment Factor in Mobile Test Automation

In order to address the test environment contribution to test flakiness, engineers need to have full control over the environment they operate in.

If the test environment and the devices under test (DUTs) are not fully managed, controlled, and secured, the entire operation is at risk. In addition, the term "test environment readiness" should reflect the following:

  1. Devices are always cleaned up prior to test execution, or are in a "state"/baseline known to the developers and testers.
  2. If there are repetitive known popups such as security permissions, install/uninstall popups, OS upgrades, or other app-specific popups, they should be accounted for in the prerequisites of the test or prevented proactively prior to the execution.
  3. Network instability is often a key cause of unstable testing – engineers need to make sure that devices are connected to a WiFi or cellular network before test execution starts. This can be done either as a prerequisite network validation or through more generic environment monitoring (a pre-test hook along these lines is sketched right after this list).
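
As an illustration only, the following JavaScript sketch shows how such readiness checks could run before every test in a Mocha-style suite; driver, dismissIfPresent, and ensureNetwork are hypothetical placeholders for whatever your mobile automation framework actually provides.

// Illustrative sketch - driver, dismissIfPresent and ensureNetwork are hypothetical
// placeholders for your own mobile automation framework's APIs.
const KNOWN_POPUPS = ['Allow', 'Remind me later', 'Not now'];

beforeEach(async () => {
  // 1. Bring the device to a known baseline before every test.
  await driver.reset();

  // 2. Proactively dismiss known, repetitive popups (permissions, OS upgrades, etc.).
  for (const label of KNOWN_POPUPS) {
    await dismissIfPresent(driver, label); // tap the button if it is visible, otherwise ignore
  }

  // 3. Fail fast if the device has no connectivity instead of failing mid-test.
  if (!(await ensureNetwork(driver))) {
    throw new Error('Device is offline - aborting before the test starts');
  }
});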

 

Following Best Practices

In previous blogs, I addressed the importance of selecting the right testing frameworks and IDEs, as well as leveraging the cloud as part of test automation at scale. Having addressed the test environment risks in the section above, it is important to make sure that both developers and test automation engineers follow proper guidelines and practices in their test automation workflow.

Since testing has shifted left toward the development team, it is important that both dev and test align on a few things:

  1. What to automate?
  2. On what platforms to test?
  3. How to automate (best practices)?
  4. Which tools should be used to automate?
  5. What goes into CI, and what is left outside?
  6. What is the role of Manual and Exploratory testing in the overall cycle?
  7. What is the role of Non-Functional testing in the cycle?

The above points (a partial list) cover some fundamental questions that each individual should keep asking to ensure the team is heading in the right direction.

Each of the above bullets maps to at least one, if not many, best practices.

  1. To address the key question of what to automate, here's a great tool (see the screenshot below) provided by Angie Jones. In her approach, each test scenario is evaluated against a set of metrics that add up to a score. The highest-scoring test cases are great candidates for automation, while the lowest-scoring ones can be skipped (a minimal scoring sketch follows this list).
  2. To address the second question, on platform selection, teams should monitor their ongoing web and mobile traffic, perform market research, and learn from existing market reports and guides that address test coverage.
  3. Answering the question of how to automate is a big one :). There is more than one thing to keep in mind, but in general, automation should be repetitive, stable, and time- and cost-efficient – if something prevents one or more of these objectives, it's a sign that you're not following best practices. Some best practices involve using proper object identifiers; others involve building the automation framework with proper tags and error handling so that when something breaks it is easy to address; and more.
  4. The question around tools and test frameworks is again a big one. Answering it right depends on the project requirements and complexity, the application type (native, web, responsive, PWA), and the test types (functional, non-functional, unit). In the end, it is important to have a mix of tools that can "play" nicely together and provide unique value without stepping on each other.
  5. Tests that enter CI should be picked very carefully. Here, it is not about the quantity of the tests but about the quality, stability, and value these tests bring, with a focus on fast feedback. If a test requires heavy environment setup, is flaky by nature, or takes too much time to run, it might not be wise to include it in CI.
  6. Addressing the various testing types in questions 6 and 7 depends on the project objectives; however, each of the following test types has clear value in mobile app quality assurance:
    1. Accessibility and performance testing provide key UX indicators about your app and should be automated as much as possible.
    2. Security testing is commonly overlooked by many teams and should be covered through code analysis, OWASP validations, and more.
    3. Exploratory, manual, and crowd testing provide another layer of test coverage and insight into your overall app quality and hence should be in your test plan, divided throughout the DevOps cycle.
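
The following JavaScript fragment is an illustrative sketch of the scoring idea from item 1 above; the factor names and the 1-5 scale are placeholders, not Angie Jones' actual worksheet values.

// Illustrative sketch of the "score each scenario" idea - factors and scale are placeholders.
const candidate = {
  name: 'Checkout with a saved credit card',
  scores: {
    businessValue: 5,    // how critical the flow is to the business
    usageFrequency: 4,   // how often real users hit it
    defectHistory: 3,    // how often it has broken before
    easeOfAutomation: 2  // how cheaply it can be automated reliably
  }
};

const total = Object.values(candidate.scores).reduce((sum, value) => sum + value, 0);
const max = Object.keys(candidate.scores).length * 5;

console.log(`${candidate.name}: ${total}/${max}`);
// High scorers are strong automation candidates; low scorers can stay manual or exploratory.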

Happy Testing

The Rise of Automation: Why Coding is Becoming a Job for Everyone

Guest Blog by Wendy Dessler

Photo by Alex Knight on Unsplash

According to USA Today, as many as 73 million jobs may not exist by 2030; however, we could see benefits like an increase in productivity and economic growth “counteract” job losses. But, will it? Will it not? That is a question several economists mull over.

What we can say is that, given our increasing reliance on and incorporation of technology, we predict coding will be a skill job seekers list on their applications, along with the usual: typing, Excel proficiency, multitasking, you name it.

But does everyone have to be proficient in coding? Are all of our jobs reliant on how well we know how to write code? Read on to find out!

Automation Affects the Job Market   

Because of automation, we are seeing more self-checkouts in grocery stores and convenience stores. Even some fast food chains, like Wendy's, are planning to install not one but more than 1,000 self-ordering kiosks in their stores.

The Why

The truth is, automation cuts market costs and saves companies a lot of money. Think about it: unlike an employee, self-ordering kiosks and self-checkouts aren't going to get tired; they won't come in late to work; and the chances of getting an order wrong are a lot lower. In other words, it's safe to say automation minimizes human error.

More Automation Threatens Certain Industries

According to CNN Money, cashier, toll booth operator, driver, and fast food jobs are all at risk of becoming fully automated; however, other roles, such as nurses, doctors, youth sports coaches, hairstylists and cosmetologists, songwriters, and social workers, are far from being at risk.

Pretty much any job that incorporates a high degree of "humanness" (like empathizing with the client or using emotional nuances to do a better job) is in the clear.

 

But how are tech industries faring? How is automation affecting the way developers and quality assurance teams do their jobs?

A Shift in the Tech Sector

Thanks to automation, we even see a shift in the tech sector. DevOps teams, for instance, are responsible for deploying automation (not to mention making sure the site is up and the server doesn’t crash, etc.).

Meanwhile, those in the quality assurance sector are relying more and more on coding to test products before they are distributed.

 

In some instances, they even do some degree of auto-testing, and some may use error monitoring software to rule out bugs, which you can learn more about at https://stackify.com/error-monitoring/.  

But Where Does Coding Factor In?

The age of automation is upon us, and with that, employees and job seekers can strengthen their skill sets by learning how to code. However—and this is a big, however—unless you work in the tech sector, you probably don’t have to be a coding expert.

 

Even Low-Risk Jobs Can Benefit from Knowing Some Coding Basics

As we mentioned, songwriters aren't at risk from automation (as of yet), but that doesn't mean they can't benefit from learning some code and using automation in their song recording processes.

Long story short, learning how to code can only make you a more in-demand employee. But, like we said, unless your job is tech-related, it isn't the end-all, be-all.

Photo by John Schnobrich on Unsplash

Final Thoughts

Automation doesn't have to be a cause for concern. Remember, we've gone through several economic transitions (such as the Industrial Revolution or even the shift from letter writing to email) that have changed how we do business, for the better.

What are your thoughts on automation? Will AI take little, some, or all of our jobs? Do you think it’s a positive or a negative? We want to know your thoughts; be sure to leave a comment in the comments section below.

Getting Started With Headless Browser Testing

[Guest Blog by Uzi Eilon, CTO, Perfecto]

The "shift left" trend is actually happening: developers, as part of the DevOps pipeline, need to test more and add more automated testing in order to release faster.

In addition, those tests are almost the last barrier before production, because traditional testing is going away.

In such cases, standard unit tests are not good enough, and E2E tests are complicated and require longer setup and preparation time.

This is why both Google and Mozilla have released new JS-driven headless browsers to help their developers execute automated tests.

The same happened in the mobile space, where Apple and Google released XCUITest and Espresso.

Headless browsers provide the following capabilities for developers:

  • Same language, same IDE, same working environment:
    Most web developers work with JS, and these browsers are JS platforms; to add a new test you open a new class and write standard JS code.
  • Fast feedback and execution:
    These tests need to be executed fast (sometimes on every commit); these browsers reduce the UI and rendering "noise," connect to elements directly, and run very fast.
  • Easy to set up:
    Developer time is expensive, and developers will not adopt complicated processes for testing; the setup of these tools is a simple npm installation.
  • Access to all the DevTools capabilities:
    Developers need more details; these tools give access to all the DevTools data, including accessibility, network, logs, security, and more.
    Smart tests can be very powerful and cover not only the functionality but also the efficiency (see the sketch right after this list).
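
For example, here is a minimal Puppeteer sketch that surfaces some of that DevTools data (console messages and failed network requests) while a page is exercised; the URL is only a placeholder.

// Minimal Puppeteer sketch: listen to console output and failed network requests
// while the page under test runs. The URL below is only a placeholder.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  page.on('console', msg => console.log(`PAGE LOG: ${msg.text()}`));
  page.on('requestfailed', request => console.log(`REQUEST FAILED: ${request.url()}`));

  await page.goto('https://example.com');
  // ... test steps and assertions go here ...

  await browser.close();
})();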

In order to understand more, I played with Puppeteer, and I'm happy to share my thoughts with you.

Installation

Very simple

npm i --save puppeteer

Documentation

There are not a lot of examples or discussions about specific issues, but I did find the API documentation, which contains everything I was looking for.

Object identification

Intuitive – the same way I connect to objects in any JS code.

Example:

  • by id: page.type('#firstName', 'Uzi');
  • by class: page.type('.firstName', 'Uzi');

Sync and waiting for elements

In this case, I have to admit I struggled with the standard wait-for-navigation command; it was not stable:

await page.waitForNavigation({waitUntil: 'load'})

In the end, I used the following:

await page.waitForSelector('#firstName', {visible: true}).then(() => {
  // do the actions per page
  return page.screenshot({path: 'then.png', fullPage: false});
});

 

UI

As part of my test, I tried to verify the screen by taking a screenshot; I liked the way I could change the browser UI capabilities and configure my page:

const fullScreen = {
  deviceScaleFactor: 1,
  hasTouch: false,
  height: 2400,
  isLandscape: false,
  isMobile: false,
  width: 1800
};
await page.setViewport(fullScreen);
// Note: fullPage is a screenshot option rather than a viewport option:
await page.screenshot({path: 'fullscreen.png', fullPage: true});

 

Other DevTools options:

It is very easy to use; for example, I wanted to walk through all the frames on a page:

function dumpFrameTree(frame, indent) {
  console.log(indent + frame.url());
  for (const child of frame.childFrames()) {
    dumpFrameTree(child, indent + '  ');
  }
}
dumpFrameTree(page.mainFrame(), '');

 

Summary:

Using a headless browser like Puppeteer was very easy and intuitive; it felt natural to add it to my testing code.

In addition, setting up the headless browser environment and executing was very simple and fast.

On the less convenient side, I found that to get the results directly into the CI, one has to add more scripting code or use other execution methods.

Lastly, this approach is still ramping up, so it has some small bugs in a few features and still lacks documentation and samples; however, as an early testing tool for white-box/unit testing, it is very promising and well-positioned to complement tools such as Selenium. As a matter of fact, other browser vendors, such as Mozilla and Microsoft, are taking the same approach and investing in headless browsers.

 

P.S.: If you want to learn more about the growing technologies and trends in the market, I encourage you to follow my podcast with Uzi Eilon, called Testium (Episode 6 is fully dedicated to this subject).

The Rise of Progressive Web Apps and The Impact on Cross-Browser Testing

Just when we all thought we had figured out the digital market from an application-type perspective, having seen the rise of mobile and the transformation of the web into responsive web, we now need to get used to a whole new type of application that will change the entire user experience and offer new web functionality – meet PWAs.

Google.com is a very clear example of such an app, and Apple is about to introduce PWA capabilities in its upcoming WebKit engine.

What are Progressive Web Apps?

Referring to Google's official website dedicated to PWAs, Google defines a PWA as "a new way to deliver amazing user experiences on the web."

David Rubinstein from SD Times adds even more insight into these new app types:

PWAs can use device features like cameras, data storage, GPS and motion sensors, face detection, Push notifications, and more. This will pave the way for AR and VR experiences, right on the web. Imagine being able to redecorate your home virtually using nothing but your phone and a PWA. Pan your camera around a room, then use tools on a website to change wall colors, try out furniture, hang new artwork, and more. It may feel like a futuristic fantasy, but it’s close to reality.

The key idea behind PWAs is to provide a rich end-user alternative to native apps. These apps can be launched from the device home screen, adding layers of performance, reliability, and functionality to a web application without the need to install anything from the app store. In addition, these apps are still JavaScript based, but with additional specific APIs they can work even when there's no internet connection – and that's a huge advantage.

PWAs leverage two main architectural features:

  • Service Workers – give developers the ability to manage the caching of assets manually and control the experience when there is no network connectivity (a minimal service worker sketch follows the manifest example below).
  • Web App Manifest – the file within the PWA that describes the app and provides app-specific metadata such as icons, splash screens, and more. Below is an example Google offers for such a descriptor file (JSON):
{
  "short_name": "AirHorner",
  "name": "Kinlan's AirHorner of Infamy",
  "icons": [
    {
      "src": "launcher-icon-1x.png",
      "type": "image/png",
      "sizes": "48x48"
    },
    {
      "src": "launcher-icon-2x.png",
      "type": "image/png",
      "sizes": "96x96"
    },
    {
      "src": "launcher-icon-4x.png",
      "type": "image/png",
      "sizes": "192x192"
    }
  ],
  "start_url": "index.html?launcher=true"
}
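
To illustrate the service worker side mentioned above, here is a minimal, generic sketch of the cache-first idea that keeps a PWA usable offline. The file name sw.js and the asset list are assumptions, and a production worker would also handle cache versioning and cleanup.

// Minimal service worker sketch (assumed file name: sw.js).
const CACHE = 'app-shell-v1';
const ASSETS = ['/', '/index.html', '/app.js', '/styles.css'];

self.addEventListener('install', event => {
  // Pre-cache the application shell at install time.
  event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(ASSETS)));
});

self.addEventListener('fetch', event => {
  // Serve from the cache first and fall back to the network when online.
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});

// In the page itself, the worker is registered once:
// if ('serviceWorker' in navigator) { navigator.serviceWorker.register('/sw.js'); }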


To check your PWA against the checklist and validate the entire app, Google offers some tools as part of its documentation, like this Progressive Web Apps checklist, and its Chrome built-in DevTools (see the visual below).

As deeply covered in this great DZone article, good PWAs also implement the PRPL pattern recommended by Google to enhance performance.

What Are the Implications of PWAs for Cross-Browser Sites and Mobile Apps?

To understand the implications, I recommend dividing the question by the impacted personas:

  1. Developers
  2. Testers
  3. Business
  4. End-Users

Each of the above personas will see different benefits and implications when adopting this kind of app.

Developers

For existing web developers, this new app type should present a whole new world of innovative opportunities. Since PWAs are still JavaScript-based apps, developers do not need to gain new skills, but rather learn the new APIs offered through service workers and see how they can be leveraged by their websites. Since a PWA runs on a mobile device and can be launched without a network connection and without any installation, it obviously needs to be validated by developers through unit and integration tests.

Going forward, the market envisions these apps impacting native app architecture such that there will be only one type of app that seamlessly runs on both browsers and mobile devices with a single implementation – and that will require a heavier lift and rework.

 

Testers

For testers, as with every new implementation, new tests (manual and automated) need to be developed, executed, and fitted into the overall pipeline.

PWAs in particular introduce some unique use cases, such as:

  • No-network operation
  • High performance
  • Sensor-based functionality (location, camera for AR/VR, and more)
  • Cross-device functionality (as with responsive design, the experience should be the same regardless of screen size, hardware, etc.)
  • Adhering to the design and checklist required by Google and, soon, Apple
  • Accessibility, which is always a need
  • Security of these apps (with and without a network connection)

Business

For the business, the new app type should help increase end-user engagement. A web application that is richer in functionality, performs fast, and can be "always on" through an easy launch from the customer's device home screen should by definition increase usage and move the needle for the business. My assumption is that large enterprises are already looking into this type of app as the next generation of RWD apps.

End Users

At the end of the day, all products aim to achieve greater customer engagement and beat the competition. Obviously, if end users understand the value of these apps and can "feel" it in their day-to-day activities, it will be a clear win-win for both the organization and the customer.

To assure the end-user experience Google envisioned when it first launched this technology three years ago (2015), dev and test teams should continue their continuous testing activities and make sure they cover sufficient platforms, features, and use cases between each release and each new release of a platform or device.

To conclude this blog, I highly recommend watching the short video and reading the blog from Mozilla on how PWAs live within Firefox and how different an experience users get from such apps (see below: the Wego site within the Firefox browser in the background and the Wego PWA in the foreground).

Happy PWA Testing!

Could The Galaxy Note Be The Gadget of the Year?

A Guest Blog Post by Alli Davis.

The Samsung Galaxy Note 8 is one of the best phones the smartphone manufacturer has in its lineup. The Note 8 is revered for its large, gorgeous-looking display, fast performance, and improved S Pen, among other features. Despite its release in the latter half of 2017, the Samsung Galaxy Note 8 has already received gadget of the year awards and other accolades. Here's what the awarding bodies had to say about the Samsung Galaxy Note 8.

Exhibit Tech Awards

The Note 8 received the Flagship Smartphone of the Year award at the Exhibit Tech Awards in India. It received the award for its screen size and battery life, as well as a great camera. The award is the most recent one for the Note 8, having come in late December 2017.

Considering where Samsung was with the Note 7, the Note 8 is nothing short of an amazing comeback. After the previous iteration's battery issues, some reports suggested that the company might consider ending the Galaxy Note line with the 7. But coming out with the next-generation 8 proved to be the right move.

 India Mobile Congress

Just over one month after the Samsung Galaxy Note 8 came out in August, the India Mobile Congress awarded the device its Gadget of the Year honor. The inaugural event's awards honor innovations and initiatives designed to better the ICT ecosystem. The ICT ecosystem refers to everything that goes into making a technological environment for a business or government. This includes things like policies, processes, applications, and of course technologies.

Samsung introduced the Galaxy Note 8 into the Indian marketplace as an effort to give consumers the opportunity to do more with their smartphones. Upon receiving the award, Asim Warsi, the Senior VP of Mobile Business at Samsung India, confirmed the effort to improve lives in India. Introducing the phone into the premium sector of the market has only strengthened the company as a leader in that category. It leads the way not only with a great screen and camera system but also with Samsung Pay and Samsung Knox, the security feature of the phone.

MySmartPrice Mobile Of The Year Awards

The MySmartPrice awards also honored Samsung's Galaxy Note 8 in December 2017. The device received MySmartPrice's Flagship Smartphone of the Year 2017 award as the most feature-packed phone in this category. Its 6.3-inch AMOLED screen and camera features captured people's attention for the award. It was slightly edged out by the iPhone X and Google Pixel 2 in areas such as the screen and camera, but the phone manages to hold its own amid the stiff competition in the marketplace.

The Galaxy Note's Snapdragon processor makes it a very good option for day-to-day use, according to reviews, and its camera performs relatively well in low-light conditions. The reduction in price also makes it stand out amongst the competition. In the higher-end phone market, Samsung prices itself quite well for the features it offers.

At a time when Samsung could have ended the Galaxy Note line forever after the Note 7's battery issue, the company persevered with the Note 8, and the risk paid off. In a highly competitive marketplace, Samsung still produces a Note that is productive, powerful, and innovative, with features and a price that a wide variety of customers can enjoy. With all of the accolades the Samsung Galaxy Note 8 has received, the future looks bright for the iterations of the phone to come.

Alissa B. Davis is a freelance writer who has a passion for writing about technology and stories about her community. Read her latest stories at http://alissabdavis.com/

Continuous Testing Principles for Cross Browser Testing and Mobile Apps

The majority of organizations are already deep into Agile practices, with a goal of becoming DevOps and continuous delivery (CD) compliant.

While some may say that maximizing the percentage of test automation would bring these organizations to DevOps, it takes more than just test automation.

To mature DevOps practices, a continuous testing approach needs to be in place, and it involves more than automating functional and non-functional testing. Test automation is obviously a key enabler for being agile, releasing software faster, and addressing market events; however, continuous testing (CT) requires some additional considerations.

Tricentis defines CT as follows:

CT is the process of executing automated tests as part of the software delivery pipeline in order to obtain feedback on the business risks associated with a software release candidate as rapidly as possible. It evolves and extends test automation to address the increased complexity and pace of modern application development and delivery

The above suggests that a CT process includes a high degree of test automation, with a risk-based approach and a fast feedback loop back to developers on each product iteration.

How to Implement CT?

  • A risk-based approach means sufficient coverage of the right platforms (browsers and mobile devices) – such platform coverage reduces business risk and assures a high-quality user experience. This platform coverage requires continuous maintenance as the market changes.
  • Continuous testing needs automated end-to-end testing that integrates with existing development processes while excluding errors and enabling continuity throughout the SDLC. That principle can be broken down as follows:
    • Implement the "right" tests and shift them into the build process, to be executed on each code commit. Only reliable, stable, and high-value tests qualify to enter this CT test bucket (see the tagging sketch after this list).
    • Ensure the CT test bucket runs within only one CI – in CT, there is no room for multiple CI channels.
    • Leverage reporting and analytics dashboards to reach "smart" testing decisions and actionable feedback that support a continuous testing workflow. As the product matures, tests need maintenance, and some may be retired and replaced with newer ones.
  • A stable lab and test environment is key to ongoing CT processes. The lab should be at the heart of your CT and should support the above platform coverage requirements, as well as the CT test suite and the test frameworks used to develop those tests.
  • Where possible, utilize artificial intelligence (AI) and machine learning (ML)/deep learning (DL) solutions to better optimize your CT test suite and shorten overall release activities.

  • Continuous testing is seamlessly integrated into the software delivery pipeline and DevOps toolchain – as mentioned above, regardless of the test frameworks, IDEs, and environments (front end, back end, etc.) used within the DevOps pipeline, CT should pick up all relevant tests (unit, functional, regression, performance, monitoring, accessibility, and more) and execute them in parallel, unattended, to provide a "single voice" for a go/no-go release that happens every two to three weeks.
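
As an illustration of the "right tests on every commit" point above, the sketch below tags a Mocha spec for the CI bucket and runs only that bucket with a grep filter; the @ci tag convention is an assumption, not a standard.

// Illustrative sketch: only stable, high-value specs carry the (assumed) @ci tag,
// and the CI job runs just that bucket for fast feedback.
describe('Checkout @ci', function () {
  it('completes a purchase with a saved card @ci', async function () {
    // fast, deterministic assertions only - anything flaky stays out of this bucket
  });
});

// package.json script that the CI job invokes (Mocha's --grep filters by name/tag):
//   "test:ci": "mocha --grep @ci"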

Lastly, for a CT practice to work time after time, the above principles need to be continuously optimized, maintained, and adjusted as things change, whether in the product roadmap or in the market.

Happy CT!

Mobile, Cross Browser Testing, DevOps and Continuous Testing Trends and Projections for 2018

As we are about to wrap up 2017, it's the right time to get ready for what's expected next year in the mobile, cross-browser testing, and DevOps landscape.

To categorize this post, I will divide the trends into the following buckets (there may be a few more points, but I believe the ones below are the most significant):

  • DevOps and Test Automation on Steroids Will Become Key for Digital Winners
  • Artificial Intelligence (AI) and Machine Learning (ML)/Tools Alignment as Part of Smarter Testing Throughout the Pipeline
  • IoT and Digital Transformation Moving to Prime Time

 

DevOps and Automation on Steroids

In 2017, we saw tremendous adoption of more agile methods, ATDD, and BDD, with organizations leaving legacy tools behind in favor of faster, more reliable, agile-ready testing tools – ones that can fit the entire continuous testing effort, whether it's carried out by Dev, BA, Test, or Ops.

In 2018, we will see the above grow to a higher scale, with more manual and legacy-tool skills transforming into more modern ones. The growth in continuous testing (CT), continuous integration (CI), and DevOps will also translate into a much shorter release cadence as a bridge toward real continuous delivery (CD).

 

Related to the above, to be ready for the DevOps and CT trend, engineers need to become more deeply familiar with tools like Espresso, XCUITest, EarlGrey, and Appium on the mobile front, and with open-source web frameworks such as Google's headless-browser project Puppeteer, Protractor, and other WebDriver-based frameworks.

In addition, the test automation suite should be optimized to include more API and non-functional testing, as the UX aspect becomes more and more important.

Shifting as many tests as possible left and right is not a new trend, requirement, or buzzword – nothing has changed in my mind about the importance of this practice: the more you can automate and cover earlier, the easier it will be for the entire team to overcome the issues, regressions, and unexpected events that occur in the project life cycle.

AI, ML, and Smarter Test Automation

While many vendors are looking for tools that can optimize their test automation suites and shorten overall execution time on the "right" platforms, the two terms AI and ML (or deep learning) are still unclear to many tool vendors and are used in varying ways that do not always mean AI or ML 🙂

The end goal of such solutions is very clear, and the problem they aim to solve is real: long testing cycles on plenty of mobile devices, desktop browsers, IoT devices, and more generate a lot of data to analyze and, as a result, slow down the DevOps engine. Efficient mechanisms and tools that can crawl through the entire test code and understand which tests are the most valuable, and which platforms are the most critical to test on due to customer usage, history of issues, and so on, can clearly address such pain.

Another goal of such tools is to continuously provide more reliable and faster test code generation. Coding takes time, requires skill, and varies across platforms. Having a "working" ML/AI tool that can scan through the app under test and generate a robust page object model and functional test code that runs on all platforms, as well as "responds" to changes in the UI, can really speed up time to market for many organizations and focus teams on the important SDLC activities, as opposed to forcing dev and test to spend precious time on test code maintenance.

IoT and the Digital Transformation

In 2017, Google, Apple, Amazon, and other technology giants announced a few innovations around digital engagement. To name a few: better digital payments, better digital TV, AR and VR development APIs, and new secure authentication through Face ID. IoT hasn't shown a huge leap forward this year; however, what I did notice was that for specific verticals like healthcare and retail, IoT started playing a key role in digital user engagement and digital strategy.

In 2018, I believe the market will see an even more advanced wave in the overall digital landscape, with Android TV and Apple TV, IoT devices, smartwatches, and other digital interfaces becoming more standard in the industry, requiring enterprises to rethink and rebuild their entire test labs to fit these new devices.

Such a trend will also force test engineers to adapt to the new platforms and re-architect their test frameworks to support more of these screens, whether in one script or several.

Some insights on testing IoT, specifically in the healthcare vertical, were recently presented by my colleague Amir Rozenberg – I recommend reviewing the slides below:

https://www.slideshare.net/AmirRozenberg/starwest-2017-iot-testing/ 

 

Bottom Line

Do not immediately change whatever you do today, but validate whether what you have right now is future-ready and can sustain what's coming in the near future, as mentioned above.

If DevOps is already in practice in your organization, fine – make sure you can scale DevOps, shorten release time, increase test and platform automation coverage, and optimize your overall pipeline through smarter techniques.

The AI and ML buzz is real; however, the market needs to properly define what it means to introduce these into the SDLC and what success would look like for teams that consider leveraging them. From a landscape perspective, these tools are not yet mature and ready for prime time, so that leaves more time to get properly ready for them.

Happy New 2018 to My Followers.

The Role of Artificial Intelligence in E-Commerce Industry

A Guest Blog Post by Ravindra Savaram

When we think about artificial intelligence (AI), the first thing that comes to mind is a self-driving vehicle or a Terminator-like robot. But robots and AI are not exactly one and the same. Though often utilized together with bots, artificial intelligence specifically refers to the simulation of human intelligence processes by machines. AI powers many technologies that we use on a daily basis.

Whether AI is something that you have been monitoring for a while or it’s something that you have just come across, it is undeniable that AI is beginning to influence many industries. One place where it is really changing things is e-commerce. From creating personal buying assistants to personalizing the shopping experience, artificial intelligence is something that retailers cannot ignore.

Many areas of e-commerce are ripe for innovation driven by artificial intelligence. Every enhancement to logistics efficiency, recommendations, pricing, or marketing provides retailers an edge over the competition. Retail creates and consumes large volumes of data from various channels. In fact, there is so much data that it’s not possible for a human being to analyze it. These are the ideal conditions for machine learning.

Machine learning is the overarching name for various data analysis methods in which computers derive insights from data without being explicitly told where to look. When exposed to a large amount of data, machine learning algorithms can extract patterns and use them to generate predictions or insights about future conditions.

When you upload a cat picture to Google Photos, it knows that the object in the picture is a cat. The code that identifies the cat is not written by a human; it is developed by exposing the algorithm to a large number of cat photos (and also photos of things that are not cats).

Recommendations

The same principle explained above can be put to use in many e-commerce areas. For instance, retailers have become really good at recommending related products, but people who shop online know that recommendation engines get it wrong very frequently. Recommendation engines are quite limited because they have access to only a small set of data, and the ways they can reason about that data are restricted. Machine learning helps merchants find much better ways of modeling user behavior so they can make close-to-exact recommendations about what a customer is interested in buying. With machine learning, the AI can make predictions based on past data, including what customers will buy next, their typical price threshold, their preferred device and channel, and so on.

Pricing

Today, the online retail industry constantly presents new challenges to COOs and CMOs when it comes to pricing. There is fierce competition among e-commerce brands of all sizes and guises. Even for an online merchant with a 1,000-product catalog, manually tweaking prices can become a task that is almost impossible to accomplish. The environment changes constantly – rival prices, logistics, currency conversions, and delivery rates are just a small sample of the numbers and circumstances prone to continuous change.

Tweaking prices in real time can be accomplished with artificial intelligence, drawing on multiple data sets including stock levels, resource capacity, internal operations, customer demand and behavior, and market conditions.

High-level of Assistance

Personal shopping assistants were once a luxury of the rich or famous. Artificial intelligence has shaken up this scenario and, in the process, revolutionized e-commerce. This conversational, intelligent technology has extended to customer service as well. Chatbots and personal digital shopping assistants can suggest the best available products to new visitors in a human-like manner, recommend new deals to returning customers, answer customers' queries and provide suggestions, and alert customers when products they may want to purchase come into stock or change in price.

Conclusion

By merging intelligent neural networks with massive data sets, applications of artificial intelligence will help e-commerce companies build unparalleled competitiveness in the market. The impact on the e-commerce industry of personalized merchandising supported by artificial intelligence will continue to rise in the coming years. These applications not only optimize and automate current processes but also help retailers avoid the common pitfalls of manual approaches, giving customers an enriched experience and maximizing profits.

About the Author:

Savaram Ravindra was born and raised in Hyderabad, popularly known as the 'City of Pearls'. He is presently working as a Content Contributor at Mindmajix.com. His previous professional experience includes Programmer Analyst at Cognizant Technology Solutions. He holds a Masters degree in Nanotechnology from VIT University. He can be contacted at savaramravindra4@gmail.com. Connect with him also on LinkedIn and Twitter.

UAV Communication Systems for Effective Control of Flying Vehicles

Guest Blog Contributed by SkyHopper

Unmanned Aerial Vehicles’ (UAV) Quick Evolution

Fast development of technology in the last few years has increased the abilities of unmanned aerial vehicles (UAVs). Though once restricted to government bodies and a few large establishments, today the large majority of people can afford a UAV or an RC drone.

Think of your own mobile phone and all its capabilities. The miniaturization of micro-electromechanical systems, gyroscopes, batteries, and accelerometers, as applied in mobile technology, has made it possible to build various types of UAVs. This progress has paved the way for innumerable operations that initially seemed impossible.

The Need for UAV Communication Systems to Control Flying Vehicles

Who is in charge of the traffic flow of the possibly millions of manned, unmanned, and radio-controlled aircraft that are likely to flood the sky over our heads? How safe are we as these vehicles fly about?

Almost every country in the world has some kind of traffic management system in place to control the flow of the vehicles on the roads. You must have noticed the road signs, designated pedestrian crossing areas, and traffic lights. They all make movement easier and safer for everyone.

At the moment, the systems used to control the takeoff and landing of all planes, both commercial and military, are operated by staff on the ground. These systems are inadequate for UAVs since they depend heavily on physical signs.

With the increase in production of flying vehicles, there is a need to coordinate their flow. Putting suitable UAV communication systems in place will ensure safety in air traffic flow. Bearing in mind that these vehicles fly at different altitudes, and that some are fully autonomous while others are only partially so, the system should provide a three-dimensional method of navigating the sky. This will prevent collisions of aerial vehicles, which would endanger the lives of people on the ground.

 

Are There Any UAV Communication Systems Currently In Place?

Privately owned drones are only allowed to fly within designated regions. Such drones have these systems installed before they can fly. To guarantee safety, the flying vehicles need to provide correct information to the ground control stations and to fellow vehicles. Their response to instructions must also be in real time. Several companies are working on systems with such abilities.

Reliable UAV Communication Systems as The Future of Flying Vehicles

There is a need to integrate various technologies to come up with a system that provides the required safety for autonomous vehicles. Such a system will include the latest mobile and Internet technologies, as well as other technologies that may not necessarily require the Internet. It will perform remote measurement of various parameters, transmit control information, and record complete 360° video of each vehicle. That UAV system will act as an invisible traffic light for flying vehicles.

Unlike other industries, which have benefited from learning from their mistakes, autonomous vehicles don't have that luxury. The risk is too big to play with. It is important to prioritize the safety of the vehicles and the people. Safety mechanisms must be integrated into every layer of the UAV communication system from the get-go.

Author Biography

This article was contributed by SkyHopper, a leading bi-directional drone data link manufacturer and innovator. Visit us at Skyhopper.biz.