A guest post by Amir Rozenberg, Sr. Director of Product Management at Perfecto Mobile, and Yoram, Perfecto's CTO
========================================================================
As 2016 winds down and we look into 2017, I’d like to share a few thoughts on trends in delivering high-quality digital applications. This post is organized in two parts: it starts with a collection of observations on key market trends and drivers, followed by their implications for continuous quality. While this article focuses on examples and quotes from the banking vertical, the discussion is certainly applicable more broadly.
2017 – Year of accelerated digital transformation with user experience in focus:

- Digital transformation: 2016 provided more compelling success stories of large players focusing on their digital transformation (How DBS became the best digital bank in the world). Digitization, driven by competitive pressure (both from incumbents and from the introduction of new players), is front and center according to research by Banking Technology.
- Increased formal digital engagement: Consumers want independence and access, ‘self-serve’ or ‘direct banking’ in the banking space, at a time and location of their preference. As A.T. Kearney reports, many transactions performed today by the bank will instead be performed by the customer. That is a big opportunity, and many banks capitalize on it via their online apps.
- Informal digital presence: Implementation of a multi-channel approach inclusive of social networks is proliferating as a complementary touch point with the customer. Activities include proactively scanning social networks for disgruntled customers and addressing their challenges individually, marketing and advertising new services, and streamlining services, for example by allowing users to log into their online bank account using their social network identity. One bank reports that a short-term marketing effort in those channels increased mobile app enrollments by 13% and doubled its social media following. (Source)
- Improved operating efficiency: Another strong driver for the digital transformation is introducing efficient processes and leveraging new channels to better manage expenditure. According to McKinsey, digital transformation can enable retail banks to increase revenues by upward of 30% and decrease expenditure by 20%-25%.
In addition to slashing branches in favor of efficient online service (Ally Bank: “Instead of spending money on expensive branches, we pass the savings on to you”), DBS also reworked customer care flows and improved their efficiency.
- User experience & efficiency: A functional and delightful experience is top of mind for customers and vendors alike: “Our customers don’t benchmark us against banks,” said Hari Gopalkrishnan, Bank of America’s CIO of client-facing platform technology, in an interview with InformationWeek. “They benchmark us against Uber and Amazon.” On the application side, there is a strong emphasis on end-user efficiency as customers try to accomplish the task at hand. In 2016, DBS saved 250 million customer hours that had previously been wasted each year by improving bank-side processes and enabling more online self-serve transactions.
Further, investments are being made in streamlining user flows. One example is replacing text entry with onboard sensors: location via GPS, check and barcode scanning via the camera, or speech dictation via Siri, Google speech, etc. “Solutions that combine the ability to find, analyze and assemble data into formats that can be read in natural language will improve both the speed and the quality of business content delivery. Personal assistants such as Apple’s Siri and Microsoft’s Cortana — as well as IBM Watson, with its cognitive technology — provide richer and more interactive content.” - From Gartner’s report “Top Strategic Predictions for 2016 and Beyond: The Future Is a Digital Thing“
Challenges & Implications
Having looked at some of the trends, the implications of accelerated digital transformation and the focus on user experience are now met with competitive pressure and the need to deliver products to market faster. Many organizations are adopting agile methodologies, so from a continuous quality perspective, let’s discuss some challenges and implications:
- (Simplified) Automation at scale: With an ever-growing matrix of test cases and shrinking test cycles, I believe (/hope) attention will be given in 2017 to designing and implementing automation at scale in organizations. There are many challenges, such as the skill set of testers/developers, cross-team collaboration, tooling, timing, and budgets. But everyone needs to agree that keeping more than 20%-25% of testing manual, or spending too much time on test script maintenance, simply blocks coverage, quality and eventually business success.
- Always-on lab: A robust and stable lab is an absolute requirement to remove fragility, the biggest reason for test failures. Almost always this means a lab in the cloud: a device on a desk or a local lab will break the regression test at the critical moment.
- Scripts: These need to be based on core functions that are robust, mature and reusable: handle unplanned cases (popups), apply a retry mechanism, and establish a baseline for the environment (database in the right state, servers operational, WiFi on, no popups, etc.); a minimal sketch follows this list.
- Switch to “always green” mode: If you need to review your results every morning and spend 1-2 hours on it, you’re doing something wrong and you can’t really scale your automation. Prefer green over coverage. A false negative is the worst disease of automation. Unless something really bad happens, your scripts should end with a green status, no excuses.
- Test automation framework: This is the building block that will drive sustainability and scale. There are many frameworks out there, some are offered as open source, some by system integrators. Here are some thoughts on selecting your test framework:
- Skill set and tools match: Testers’ skills vary. We typically see many manual testers supported by a core team of advanced coders. Those who code operate in Java, JavaScript, C#, Python, Ruby, etc. The foundation for automation at scale is a set of sustainable and reusable automation assets (so time spent on maintenance is limited): a solid object repository, scripts, test orchestration, planning, etc. A good framework will allow multi-language support (in a way that suits your organization) and multiple skill levels: Java-like scripting for coders, and BDD-like scripting (e.g. Cucumber) for those new to coding.
- Open source and modular: There are significant benefits to adopting technology with a wide community behind it. We recommend selecting a solution made of architectural components that are each best in class in their function. Shameless plug: Perfecto and Infostretch came out with an open source framework named Quantum. The objective is to provide a complete package where experienced coders as well as non-coders can write test scripts based on smart XPath and a smart object repository, via Java and Cucumber. Test orchestration and reporting are also available via TestNG; a sketch of such a runner also follows this list. The framework is made of best-of-breed open source components, and we welcome the community and our customers to try it out and give feedback.
- Efficient, role-driven reporting: With automation at scale, a strong reporting suite is mandatory. The tester needs to quickly recognize trends in last night’s regression run (hopefully made of thousands of executions), and drill down from the trends to a single test execution to analyze the root cause and compare it against yesterday’s execution or another platform. By the same token, quality visibility (‘build health’) mandates transparency. (Another shameless plug:) Perfecto’s new set of dashboards enables the application team as well as executives to understand build weaknesses and make an informed go/no-go decision.
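To make the “scripts” point concrete, here is a minimal sketch of the kind of robust, reusable core function described above: it dismisses unplanned popups and retries on transient failures before giving up. The Selenium-style driver, locators and popup id are illustrative assumptions, not any specific framework’s API.

```java
// Sketch of a reusable "core function": dismiss unplanned popups, retry on
// transient failures, fail loudly only after the retries are exhausted.
// The driver, locators and the popup id are illustrative assumptions.
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebDriverException;

public class CoreActions {

    private final WebDriver driver;

    public CoreActions(WebDriver driver) {
        this.driver = driver;
    }

    /** Tap an element, dismissing known popups and retrying on transient failures. */
    public void safeTap(By locator, int maxAttempts) {
        WebDriverException lastError = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                dismissKnownPopups();                 // baseline: no popups before acting
                driver.findElement(locator).click();
                return;                               // success, stop retrying
            } catch (WebDriverException e) {
                lastError = e;                        // transient failure, try again
            }
        }
        throw new IllegalStateException("Failed after " + maxAttempts + " attempts", lastError);
    }

    /** Dismiss popups that can appear at any time (rating prompts, permission dialogs, etc.). */
    private void dismissKnownPopups() {
        try {
            driver.findElement(By.id("popup_dismiss")).click(); // hypothetical popup button id
        } catch (NoSuchElementException ignored) {
            // no popup present, nothing to do
        }
    }
}
```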
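And to illustrate the multi-skill-level framework idea, here is a minimal sketch of a BDD layer on top of a TestNG runner: non-coders express scenarios in Gherkin, coders implement the step definitions in Java, and TestNG handles orchestration and reporting. The package names follow common cucumber-jvm conventions and the steps are hypothetical; this is not Quantum’s actual API.

```java
// File: RegressionRunner.java -- ties Cucumber scenarios to TestNG for
// orchestration and reporting. Illustrative only, not Quantum's own classes.
import io.cucumber.testng.AbstractTestNGCucumberTests;
import io.cucumber.testng.CucumberOptions;

@CucumberOptions(features = "src/test/resources/features", glue = "steps")
public class RegressionRunner extends AbstractTestNGCucumberTests {
}

// File: steps/LoginSteps.java -- coders implement the steps once; non-coders
// reuse them from Gherkin, e.g.:
//   Given I am on the login screen
//   When I sign in as "demo_user"
//   Then I see my account summary
package steps;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.testng.Assert.assertTrue;

public class LoginSteps {

    @Given("I am on the login screen")
    public void onLoginScreen() { /* navigate via a shared page object */ }

    @When("I sign in as {string}")
    public void signInAs(String user) { /* reuse core functions such as safeTap above */ }

    @Then("I see my account summary")
    public void seeAccountSummary() {
        assertTrue(true); // replace with a real assertion against the summary screen
    }
}
```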
Next, on the challenges list, let’s discuss the client side:
- Client side increased capabilities… and vulnerabilities: The push to drive more functionality and streamline the user experience drives a larger coverage matrix. We’re seeing thick client applications strengthening and enriching the experience. As such, demand for test coverage grows and process changes are needed.
- Coverage: The proliferation of onboard sensors and connected devices (see below) will drive the need to expand the test environment and capabilities to include them. In 2016 we saw increased use of image injection scenarios as well as Touch ID. I believe in 2017 speech input will gain momentum, along with ongoing innovation around augmented reality (perhaps less in banking than in other verticals). All of these scenarios need to be covered.
- Of particular interest is the IoT space: This is an area that has been growing rapidly over the last few years, whether in consumer products, medical or industrial applications. “The relationships between machines and people are becoming increasingly competitive, as smart machines acquire the capabilities to perform more and more daily activities.” Gartner’s IoT forecast estimates that, by 2020, more than 35 billion things will be connected to the Internet. Particularly in banking, IoT represents an interesting opportunity. For example, authenticating the customer in the branch with biometric sensing accessories would streamline the experience and increase security. Other examples include contactless payments and access to account functions from a wearable accessory. (Source)
- Accessibility: Since 2015, over 240 businesses have been sued over accessibility compliance, according to the WSJ. TechCrunch’s advice is to plan, design and implement accessibility in the app, and to work closely with counsel on the regulations. We too are seeing growing demand for accessibility-related coverage; a brief sketch follows below. This is certainly an area we’re going to pay close attention to in the near future.
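As one illustration of accessibility coverage, the sketch below assumes an Android app tested with Espresso and turns on automated accessibility checks for every interaction in the run; web apps have analogous checkers. This is one possible approach, not a Perfecto-specific feature.

```java
// A sketch of enabling automated accessibility checks in an Android UI test
// run. Assumes an app tested with Espresso and the espresso-accessibility
// add-on (androidx.test); adapt to whatever UI framework your app uses.
import androidx.test.espresso.accessibility.AccessibilityChecks;
import org.junit.BeforeClass;

public class AccessibilityTestSetup {

    @BeforeClass
    public static void enableAccessibilityChecks() {
        // From here on, every Espresso view interaction also runs the
        // Accessibility Test Framework checks (content descriptions, touch
        // target size, contrast, etc.) and fails the test on violations.
        AccessibilityChecks.enable().setRunChecksFromRootView(true);
    }
}
```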
Lastly, process and maturity changes:
- Process changes and (quality) maturity growth: As we work with our customers and the market matures, we are fortunate to observe market trends taking shape (some more slowly than others, but still):
- CoE collaboration with the app team: As agile is implemented at many of our customers, we’re witnessing first hand the autonomy and independence driven by the application team. While the application team creates, builds and tests code, it may still need a centralized perspective on the quality practices and tooling needed for success. Some of these teams are also curious about the application’s usage and behavior in production (more below). Our recommendation to the various teams is to seek and drive collaboration: for example, establish a slim, robust and stable acceptance test to build a common language between the tests that run in the cycle and those that run after it.
- DevOps: Teams are seeking efficiencies and transparency in managing quality across the SDLC. One area is shifting testing earlier in the cycle, covered nicely by my colleague Eran.
The second is using the same (testing) tools and approach in production (‘testing in production’). This approach reduces delays in time to launch (no need to wait until production monitoring scripts are created) and enables visibility into the usage, behavior and weaknesses of the app in production. I believe traditional production-dedicated APM tools will need to find ways to merge into the cycle to survive.
- New entrants in the developer/quality workflow: I believe new opportunities and startups will emerge in areas that simplify or automate testing and predict the impact of code changes in advance. Imagine proactive code-scanning tools, integrated with production insight, that tell developers the risk associated with the area of code they are about to touch, or automated test code/plan generators. This area has plenty of room for growth.
Advanced areas
- Shifting even more testing left: In more mature teams we find that automation drives additional test cases into the nightly regression run, because doing so provides high value (as opposed to finding bugs late) and, frankly, because it is possible. Introducing real user conditions into the cycle provides critical insight, as do other areas such as small-scale multi-user tests (for code efficiency and multi-threading behavior), some level of security testing, accessibility tests, etc.
- Transitioning testing to user journeys: Lastly, an advanced topic I’d like to mention is changing the perspective from a matrix of test cases X browsers/devices/OS/versions X real user conditions into, if you will, diagonal user journeys across platforms. Take a typical journey for bank loan research: consumers are likely to begin their engagement on a large screen, where they research rates, terms, etc. They may summarize findings and take decisions using Excel (local/online). They may apply for the loan on their desktop browser or on their tablet, and then continue the interaction on their mobile device. One could then classify these ‘diagonal’ test journeys into different personas: the consumer, the loan officer, the customer care professional, etc., all of whom go through different journeys on different screens. Being able to provide a quality score per build for a specific persona’s journey would be very meaningful for business decisions (see the sketch below). The point being: with limited time available for quality activities, one could consider creating user journeys across specific screens rather than trying to complete the full rows-and-columns matrix of test cases and screens.
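One lightweight way to express persona journeys in an existing suite is to tag tests by persona, journey and platform; the sketch below uses TestNG groups, and the journey names, personas and test methods are purely illustrative.

```java
// A sketch of tagging tests by persona, journey and platform with TestNG
// groups, so a build can be scored per persona journey instead of per cell
// in the full test-case x device matrix. Names and methods are illustrative.
import org.testng.annotations.Test;

public class LoanJourneyTests {

    @Test(groups = {"persona-consumer", "journey-loan-research", "platform-desktop"})
    public void researchRatesOnDesktop() { /* browse rates and terms on a large screen */ }

    @Test(groups = {"persona-consumer", "journey-loan-research", "platform-tablet"})
    public void applyForLoanOnTablet() { /* submit the application started on desktop */ }

    @Test(groups = {"persona-consumer", "journey-loan-research", "platform-mobile"})
    public void trackApplicationOnPhone() { /* continue the interaction on a mobile device */ }

    @Test(groups = {"persona-loan-officer", "journey-loan-approval", "platform-desktop"})
    public void reviewApplicationAsLoanOfficer() { /* different persona, different screens */ }
}
```

A testng.xml suite that includes only the journey-loan-research group would then run that persona’s journey end to end across platforms and could feed a per-journey quality score for the build.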
To summarize, I see an exciting 2017 coming with lots of changes and innovation in delivering digital applications that work. Certainly looking forward to taking part!