Top 5 Test Automation Challenges & How to Solve Them

Learn what teams face in scaling up their test automation

Eran: Thank you for joining, Manish. I think we have prepared some cool material and presentations for the audience and, judging by the number of people signing up for this webinar, it's going to be really interesting. Obviously, people are still facing challenges shifting testing left and moving to DevOps, so hopefully we'll provide some practical advice and tools for them to overcome at least some of the challenges, if not all.

Before we get started, just a few regular housekeeping items. We have a live and ongoing chat in the GoToWebinar panel, so please use it. We are ready to address any questions you may have during the webinar and also after it. We will have one live poll in the middle of this webinar and we'll appreciate your participation. As for the slides, the recording and some giveaways, I definitely encourage you to stay until the end. We have some cool giveaways for the attendees today. Everything will be shared with those of you who are online and registered for this webinar.

With that, let's start the webinar. Here is the agenda. We're going to start with some of the market trends, especially in DevOps and continuous testing. We'll then dive into some key challenges, at least the ones that we have seen with our customer base and our close partners from Apexon. We'll then dive into a very cool and useful test framework called Quantum that was jointly built with great help from Apexon, and we'll demonstrate for those of you online how you can easily get started with automation with this framework. Stay with us. Q&A will also be at the end, but throughout the webinar, feel free to use the chat panel as I've mentioned.

What's driving all these challenges in test automation? I think that we are seeing a tremendous shift in the industry towards DevOps. Quality is no longer a separate silo in the SDLC in the organization. Everyone is trying to shift as many activities as possible, and that of course includes testing, to the left and actually eliminate stabilization phases and long regression cycles, so they can push more innovation faster to the market and stay competitive.

While they are trying to achieve this goal, they're facing many challenges. Some of the challenges we'll touch on today involve skill sets, tools, techniques in developing the tests themselves and, of course, the time constraint that is always a key challenge for these teams.

What we believe, and we'll also touch on it during the webinar, is that in order to be successful in DevOps, you need to have a continuous testing mechanism, a continuous testing engine in place that is able to execute, per commit, per each code change, most of your test cases, whether unit, functional, regression, performance or many other types of tests. That is how you actually keep up with all the market changes and the innovation.

With that shift to DevOps, we also see that team structure is changing. Responsibilities are shifting across the teams, and developers are finding themselves also responsible for quality, together with the business analysts and the project and product managers. Everyone is responsible for quality in today's agile and DevOps reality, and this creates an opportunity to drive faster innovation and deliver cool new features faster, but you also need to make sure that you have the proper quality; otherwise, you will simply fail.

Before we shift to the challenges, I wanted to share something with you, because we are in the digital space of mobile and web. Perfecto releases, on a quarterly basis, a magazine, a reference guide called Factors, that includes different indexes and different lists of devices and browsers that we recommend to test on, based on market trends and the market share of different OEMs.

We are three weeks before we release the Q1 edition, so I thought to share with you — and you will get these slides again after the webinar — what's going to happen in Q1, and we are just a few weeks away from the Mobile World Congress in Barcelona. A lot of new devices are coming into this space, the new Samsungs and LGs, and new operating systems are constantly being released. Just this week, we heard about iOS 11.2.5, and we already know about iOS 11.3, which is coming next. The market doesn't stop, and this is one of the key challenges that we'll address during this webinar.

You can definitely leverage these planning tools for coverage, for both mobile and web, and plan ahead. Look at what's happening over at least the next six months in the mobile and web space, see how your test lab is built, and decide whether you need to make some changes or upgrades. That's a background tip for those of you online.

I think that part of the challenges that we are seeing related to DevOps and continuous testing is this reality. Organizations start to build test automation, they have Excel sheets with a lot of manual test cases, and then they want to rush into automating as many of them as possible. They create some scripts, then they add on top of that more scripts and even more scripts, and then they start to run them almost on a daily basis without really structuring it, without checking whether the scripts are stable enough or whether they fit into the overall SDLC, and then they fail.

And when they fail, it stops everything: it stops the continuous integration workflow, it stops feature development, and it causes a lot of pain and bottlenecks.

We see in the market that this approach causes huge flakiness in the tests, and we see that the test labs, when you move in that direction, are not really equipped, not ready to handle this amount of tests. This is why we actually recommend — this is the green path here — to start slow, to start small, create a few robust scripts, make sure that they run daily, and iteratively add more scripts into your test suite.

Again, not everything needs to run as part of the CI; only once you see that your CI is green with as many robust tests as possible should you go and add more tests into your suite. While everything I'm saying here seems very trivial, when you look at large enterprises and large projects where you have different dependencies across different teams, it's easier said than done, and we see this bottleneck. Imagine a huge project where you have different teams that don't always engage with each other; this is where we see the bottlenecks become more serious.

Manish: I would like to add that in the enterprise we have somehow taken pride in writing more test cases and calling it a success. The success criterion often becomes how many test cases we have for a particular problem. We often exhibit this behavior where even a small change, a small variation in a functional point or even a different data point, should be reflected in its own test case. When you're automating, the best practice is definitely to consolidate the test cases and really ask the question, "Do we need this many test cases?"

Doing automation well will itself present an opportunity to eliminate hard-coded data and optimize the scripts, so that duplication in functional point coverage, et cetera, can be eliminated. I think often we forget to take that opportunity and automate manual test cases as-is, which is actually a big problem.

Eran: That's a good point and, Manish, to your comment, we see changes in KPIs for both developers and testers. A few years ago, teams were measured on adding as many test cases as possible to the test suite. I think now they are looking more at efficiency: adding the right test cases on the right platform that keep the build green unless there is really an issue or a defect in your system. I think that with this DevOps and continuous testing transformation, we also see different measures and different KPIs being modified or tuned.

With that, Manish, let's see what the audience is struggling with today, and we'll kick off a short poll with you guys online. I would really appreciate hearing your voices. What are you challenged with today in your journey towards DevOps and continuous testing?

While the audience is voting, Manish, what do you think is the greatest challenge in automation today? We have identified five big challenges, but which would you mark as the biggest one?

Manish: I think there is a combination of different challenges. If I have to pick the biggest one, it would be — and we'll be talking about it among the challenges — blindly believing that automation must work at the UI level. When we try to automate everything purely at the UI level, and not at the interface that presents the best opportunity to automate it, we create a big backlog of high-maintenance test cases and work, which naturally becomes an inhibitor to shifting left.

Recognizing the right layer to test something at, and automating it at that layer, is very wise and prudent when you are looking at large-scale automation.

Eran: Thanks, Manish. I think it makes a lot of sense. I would add on top of that — and it's not really a challenge but actually an outcome of not solving the challenges — what I'm hearing in the industry today is that there is a trust issue: teams, whether developers or test engineers, don't trust the automation test results. When you lose trust in whatever you invested in and developed as far as test automation goes, I'd say that's a very bad place to be in.

I think that once you start small as we identified, test at the right layers, merge API tests with functional and UI tests at the right level, and build trust across the different teams, that's when you're going to see the biggest leverage. Trust is definitely a big issue here.

With that, let's close the poll, and thank you guys online for voting. Time allocation competes with test automation stability, and we're definitely going to touch on these two things. From engaging with many, many customers, we believe time is definitely one of the biggest barriers to automation, or to automation stability, because you need to make risky decisions.

You take a risk when you decide, "I have only two weeks for a sprint and I only have this amount of time to develop test automation, debug the tests, and fix issues." Then you become very picky about what you invest in, and this is where you are either right or wrong; this is one of the challenges. Definitely, getting the tests stable and running consistently, which we'll touch on very shortly, is a big challenge as well.

Thank you, guys online, for voting. We’ll share the results with all of you after the webinar as well.

Let's dive into the five challenges; for two of them, we've just heard your voice. Definitely, the time constraint is a big issue, as is the lack of test automation stability: the flakiness of the tests, the unreliability of the tests that can be due either to wrong practices when developing the test code or to using an unstable test environment, not putting the test lab in a ready-state position before running the tests, and various other reasons that we are seeing. This is the second challenge out of the five.

Test execution management: what am I executing, on which platforms, repeatedly? This is the third challenge we are seeing, and we are seeing tools like AI and machine learning address it. We are also looking at the Factors magazine that I shared with you at the beginning of this webinar. We see that test execution and test planning are a third challenge in the overall workflow.

The fourth one — and we'll dive into each of these challenges very shortly — is the evolution and the maintenance of the test cases. A test captures quality at a moment in time. You write your test code, but then you add more features, you add more capabilities. You either want to retire an irrelevant test case or you want to maintain it, to update it; otherwise, this test loses its value to the overall process.

Teams, especially under this time constraint, find it very hard to keep up with maintaining all test cases, and this is why we recommend that you start small and don't build a massive test suite that includes thousands of test cases, because maintaining them over time is definitely going to be painful. When you develop your tests, think about the ongoing maintenance that you will need to do throughout the product lifecycle.

The fifth one is obviously the tool stack. We talked about shifting left, we talked about developers owning quality as well. You need to have a much more synchronized toolchain within your entire team, within the entire sprint if you like, so that everything is consolidated. If the developer is using a test tool like Espresso or XCUITest and the test automation engineer is using a different tool, at the end you still want one quality view, one quality dashboard that reports the real quality level to you. This is still, again, a challenge in the overall process.

Manish, do you have anything to add on top of this slide?

Manish: Yes, one comment, which is not necessarily a direct automation challenge but often poses itself as a big hindrance, is actually your organization structure. Teams that are structured to automate in a test center of excellence are often less agile and less able to shift left compared to teams where automation engineers are embedded in one agile team and are actually doing automation within the agile team. That organization structure consideration is significant when it comes to actually shifting left.

Eran: That's a very good point. Let's dive in. I think we touched a bit on the test pyramid, testing at the right levels of the pyramid, UI versus API, and dividing the right test cases across the pipeline. Then there is the time boundary, the time constraint, and we see that this is the biggest challenge: the audience online voted that this is their biggest challenge.

What we are seeing as solutions or practices teams adopt to overcome that challenge is one of four things, or actually a mix of these four. Some are shifting to practices such as ATDD, acceptance test-driven development, or behavior-driven development, BDD, and this gives them several benefits.

One, everyone is speaking the same language: the business analysts, the developers, and the testers define their testing criteria together, they actually write the tests before they write the code, and then they validate these requirements against the software they are developing. This definitely shrinks time and helps overcome skill-set gaps in the organization. We'll touch more on that when we introduce Quantum later in this webinar.

In addition, and this goes back to my first recommendation about starting small: today we still see too many, I would say, shadow CIs. We see too many continuous integration processes run separately by the QA team, by Ops, and by the Devs. When you want to have one single tool, one single quality dashboard, you need one single CI that encapsulates everything. The unit tests, the functional tests, the performance tests, whatever test cases you believe should go into the CI need to run in one single CI. You'll definitely save a lot of time doing so.

Not all the teams that I'm talking with are at that stage. We still see independent CI jobs running in the QA department, the COE, and the Dev teams. Again, consider adjusting your test pyramid; I think Manish just touched on it as well. There are test tools and test types that often take less time to execute and give you value, or fast feedback. Some of them are API tests. We also see a trend in the mobile and web space towards test execution frameworks that are very fast, very easy to set up, and that get you fast feedback.

In the mobile space, we see XCUITest and Espresso mostly adopted by the developers and run from their IDEs to drive faster feedback back to the developers. In the web space, we see Google driving Puppeteer as a headless as well as UI browser automation tool that allows developers to get very fast insight and feedback, including things like code coverage, network and performance analysis. When you get these as a developer earlier in the workflow, you find the issues earlier and definitely save cost and time.

These are a few ways, if you like, to deal with a short release schedule: use the right tools, use the right process or methodology like BDD, eliminate redundant CI workflows, and adjust your testing types, your test pyramid, to focus on the high-value and faster-feedback test types.

With regards to test automation stability: when you develop your test code, it's easy. You develop one script, you run it locally, and it works fine. When this test is executed as part of the test automation suite, as part of CI, as part of a regression or smoke run, whatever test execution you're doing, it's a different story.

We see that the test becomes flaky: the dependency, the assumption or pre-condition that the test relied on, doesn't hold for the other test cases in the suite, and this is when things become flaky and break. Another thing that we see contributing to the unreliability of test cases is the way teams work with objects. Everyone knows about the Page Object Model best practice. Actually, I read a nice article the other day arguing against using the Page Object Model. I wouldn't say here, use the Page Object Model or don't use it.

I'm just saying you need a way of managing the entire object repository for your project, whether it's a dedicated object repository, a page factory like you have in Selenium, or other means, but you need a governance mechanism to maintain your objects, because objects change throughout the product lifecycle; developers change the object types. I just played a bit with one of the responsive websites, and a script that I developed a month ago doesn't work today. Why? Because the mobile object is different today than it was last month.
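To make this concrete, here is a minimal sketch of the Page Object Model pattern using Selenium's PageFactory; the page, field names, and locators are illustrative and not taken from any demo in this webinar.

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

// Illustrative page object: every locator for this page lives in one class,
// so when developers change an object, there is a single place to update.
public class LoginPage {

    @FindBy(id = "username")
    private WebElement usernameField;

    @FindBy(id = "password")
    private WebElement passwordField;

    @FindBy(css = "button[type='submit']")
    private WebElement loginButton;

    public LoginPage(WebDriver driver) {
        // PageFactory wires the @FindBy fields to lazy element lookups
        PageFactory.initElements(driver, this);
    }

    public void loginAs(String user, String password) {
        usernameField.sendKeys(user);
        passwordField.sendKeys(password);
        loginButton.click();
    }
}
```

Whether or not you adopt this exact pattern, the point stands: when an object changes, the fix happens in one class instead of in every script that touches that screen.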

Everything is constantly changing, and you need a way to manage this pain in the object repository. In addition, you need some kind of measurements and KPIs with regards to how the test suite looks. We know that developer and test teams actually put such measures and KPIs in place and test themselves on how efficient they are when they develop the tests: how many flaky tests they have, how many failures they have, which test cases, and written by which engineer, identified the most critical bugs in the cycle.

Try to get more control of your test suite, know what you're actually executing and, again, going back to the earlier slide, start small. This is one way of gaining control over your test infrastructure. As for the test lab and the test environment, you need a very robust environment, especially when you're talking about mobile and web. There are a lot of things that may interfere with your test execution, like pop-ups and incoming events.

Whether the device is still on the previous screen from the previous execution or on the home screen — as part of the controls, you need to also control your test environment, and Perfecto, together with Apexon, is deploying in the market a cloud-based infrastructure precisely to overcome this challenge of maintaining a test environment that constantly changes and is constantly impacted by market events. The test environment, and whether the test can tell if the platform is in a ready state or not, definitely impacts your entire test suite's reliability.

The next thing — and I definitely want to hear some comments from you, Manish, because I've just read one of your recent blog posts about machine learning and AI — I think that as test executions and test suites evolve and grow, that's when you become more challenged by managing them and making decisions. I have a very good test suite, but I now need to shrink my execution by a few hours or a few days. How do I take a risk-based approach and decide which tests will provide me the same value as the bigger test suite, and which platforms are the most relevant for me and my target audience to focus on?

How can I shrink the overall time it takes me to develop the test code in a reliable way? What we see at the bottom right of this slide is the Gartner Hype Cycle visual from 2017, and it shows the evolution of deep learning and machine learning methods and techniques. We see vendors looking into these tools today in order to optimize their overall test execution.

Manish, what do you see today in the market on machine learning and AI, and how it relates to intelligent test execution? Manish, are you still on mute, maybe?

Manish: Yes, I'm sorry, I was on mute. There are two main areas where we have started to leverage machine learning and AI. One is in the area of optimization of test cases: the problem that I talked about, where you have a high number of test cases and you don't know which part of that test portfolio is useful.

There are techniques using AI that help you optimize the number of test cases; there are advances in both commercial and open-source tools that help you optimize these tests. The second is that when you run a lot of tests, you need analytics to parse through the results and automatically bucket problems into specific areas, so that you don't spend a ton of time.

Deep learning is very useful in that area because, over time, it can start to predict why your tests are failing and which tests require more attention and are more brittle than the others. These are the two areas where we are actively using machine learning and AI.

Eran: I can say that at Perfecto we also see this growing trend in these techniques, and we are looking mainly at the reports themselves, because I think that when you have analytics on top of the test reports, in a way that shows you how efficient you are in testing against the different platforms, you can definitely get the ROI formula for test automation. This is something I believe we will see grow in 2018 as an aid to the test execution management pain.

We talked a bit about evolving and maintaining test suites, and I will address that from two different directions. On one hand, we see today a challenge in the open-source technologies. We see tools like Selenium and Appium, which are great: they are very mature and they provide great value to testers and developers.

However, when you look at the gaps against the platform that you're actually trying to test — if you talk about Appium, testing against mobile — you start to see that Appium falls a bit short when you're trying to automate things like Face ID, fingerprint, things that live in the settings of the device, playing with the Wi-Fi, and different orientations of the device. There are some ways of automating these with Appium, but it's not easy. When you don't have much time to investigate, you fall back to manual testing, and this is when you are, again, behind schedule.

I think that today we see, as part of the challenges, a gap in the tool stack, especially with regards to open source, and organizations are trying to close it, sometimes by developing features on their own. Here at Perfecto, we take open source — we call it enterprise-grade open source — and we add on top of Appium, Selenium, Espresso, XCUITest, and almost any other leading open-source framework the capabilities that are missing. We support our customers in adding more automation coverage into their suites, but that's definitely a challenge.

In addition, what we are seeing, and it relates back to the previous point on open source, is teams moving towards a mix of tools, commercial and open source, in order to evolve the entire strategy. What you see on the left of the screen is a visual taken from our recent e-book, and it was actually provided to us by USAA. The test architect of USAA, Bryan Osterkamp, came up with this approach, which I like a lot: he described the test infrastructure as a freeway.

He believes that if you give developers and test engineers the ability to choose, like on a freeway, whether they want to take the toll road that will maybe bring them faster to their destination, to their goals, they will gladly pay; that's the commercial tool. They will pay their toll and they will go faster.

Alternatively, if you want to, let's say, take a different route and use an open-source tool, that's also fine. It may get you there slower or faster depending on your needs, but that's the beauty of it. The teams have the independence to choose, based on their requirements and their end goal, whether to use open source or commercial tools. What we actually see is that people use both. They take a mix of commercial and open source as a way to evolve, maintain, and grow their testing activity. That's definitely a recommendation.

Let's talk a bit about BDD as a lead-in to the Quantum framework. One of the biggest challenges that we touched on was time, the tight release schedule, and one of the solutions or recommendations to overcome it was moving to an ATDD or BDD approach. InfoQ just released, this month, a nice piece of research showing that BDD has actually crossed the chasm in the market, and they're reporting huge adoption of these tools and techniques, precisely because of the time-to-market challenge people have, the skill-set gaps, and the need for alignment between the business analysts, the testers, and the developers.

BDD is definitely not a new concept — we have known about Cucumber for a long while — but it's becoming more and more associated with continuous testing and DevOps as a way to overcome some of the challenges that we have discussed so far.

With that, let's jump to Quantum; soon Manish will drill down and show you how this actually works. Quantum is an open-source test framework that was built by Apexon, and Perfecto took this framework to the next level and connected it to the cloud, enabling customers to leverage BDD as well as Java and other Selenium-supported development languages to write test automation and run it at scale in the cloud against real devices and browsers.

The way we leverage this framework is that underneath we have the Selenium and Appium test framework — it's a standard implementation of Selenium and Appium. On top of that, we have the Perfecto layer that gives you predefined steps; I will show you soon how that looks. In addition, you can add Java-based code or simply write BDD test scenarios and execute them, either through TestNG or Maven or Jenkins, at scale against the Perfecto cloud. At the end, you get our standard DigitalZoom dashboard reports that you can drill into to find the real issue.

In addition, we allow customers that use this framework to work with an object repository and an Object Spy, speaking of testability. You can very easily build your Page Object Model and move your automation to the next level. And for Cucumber, everyone who's familiar with Gherkin will find nothing new here.

That's a standard implementation of Cucumber's Given/When/Then for a test step or scenario, but what this framework adds is mobile- and web-specific predefined steps such as click, swipe, type, and that kind of thing, which coexist with logical steps that you, the customers or the audience, can develop in Java or a different Selenium-supported language. Like you see right here, that's a custom test step that you can develop to support the business logic of your application.
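As an illustration of how predefined and custom steps can mix in one scenario, here is a sketch of a Gherkin feature file; the step wording is illustrative rather than Quantum's exact predefined vocabulary, and the app and object names are made up.

```gherkin
Feature: Login

  Scenario: Successful login on a real device
    # Predefined mobile steps of the kind described above (wording illustrative)
    Given I start the application by name "MyBankingApp"
    When I type "demoUser" into "loginPage.username"
    And I type "s3cret" into "loginPage.password"
    And I click on "loginPage.loginButton"
    # A custom step, implemented in Java, carrying application business logic
    Then the account dashboard should show a positive balance
```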

As mentioned, Quantum also comes with an object repository that connects to the Perfecto cloud, which means that you get a mobile and web Object Spy to identify your XPath, CSS, object identifier, whatever you want to use, in a much more stable way, and this is how it will look. I will leave time for Manish to demonstrate how it actually runs live.

Again, that's standard BDD Cucumber. If you are familiar with Cucumber, there is no knowledge gap here; you simply get up and running. The setup is quite easy: you just need to set the environment and the connection to our cloud and start developing the test cases. If you need custom steps, you will need to either develop them on your own or ask a colleague or your developer to complement your test development with the logical steps.

I will skip this one because we'll leave it to the demo, but as a project structure, what you will see soon in the demo is that you have a Cucumber BDD feature file written in the Gherkin language, you have step definitions written in Java that provide additional capabilities that you can call from your Gherkin script, and you have the utilities — the Perfecto extensions, how you work with the Perfecto environment and so on.
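For readers who have not seen a step definition before, here is a minimal sketch of what the Java side of the custom Gherkin step above could look like. It uses plain Cucumber annotations and a TestNG assertion; Quantum's own annotations may differ, and `DriverProvider` is a hypothetical helper, not a real framework class.

```java
import cucumber.api.java.en.Then;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import static org.testng.Assert.assertTrue;

public class AccountSteps {

    // Hypothetical helper that hands back the driver for the current thread
    private final WebDriver driver = DriverProvider.get();

    // Binds the custom Gherkin step to Java; the text must match the wording
    // used in the feature file.
    @Then("^the account dashboard should show a positive balance$")
    public void verifyPositiveBalance() {
        String balanceText = driver.findElement(By.id("balance")).getText();
        double balance = Double.parseDouble(balanceText.replace("$", ""));
        assertTrue(balance > 0, "Expected a positive account balance");
    }
}
```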

In addition, you have the application properties — the configuration of the cloud and all the repositories that you want to work with — and you have the TestNG XML, data providers, whatever you want to use to execute at scale and in parallel.
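As a rough idea of what such an application properties file could contain, here is an illustrative sketch; the exact key names vary by framework version and are assumptions here, not Quantum's documented keys.

```properties
# Illustrative cloud/project configuration (key names are assumptions)

# Perfecto cloud endpoint and credentials
remote.server=https://<yourCloud>.perfectomobile.com/nexperience/perfectomobile/wd/hub
perfecto.capabilities.securityToken=<your-security-token>

# Which platform-specific resource bundle (locators, device details) to load
env.resources=src/main/resources/android

# Target device, described through capabilities like any Appium project
perfecto.capabilities.platformName=Android
perfecto.capabilities.model=Galaxy S.*
```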

This is how a config file will look for Appium versus Selenium. Quantum allows you to run against a responsive website, a mobile native application, or a web application; you choose, and you configure it based on your target application. It's very simple: these are the capabilities that you need to set up, like in a normal Selenium and Appium project.

This is how the execution side will look. Again, I'm almost positive that Manish is going to show it to you; if not, I will come back to it towards the end of this webinar. Let's just summarize what Quantum is before we drill down into the demo. Quantum is a Cucumber BDD-based test framework that allows test engineers and developers to build robust test execution with a page object repository, a Page Object Model if you like, supporting the standard test environment. It works with any IDE: Eclipse, IntelliJ, Android Studio, whatever IDE you use.

You can work with the leading execution frameworks such as TestNG and Maven, you can plug it into Jenkins and run it as part of your CI projects, and it's cross-platform: mobile, web, responsive web. If you want to start playing with it, we'll give you more information, but you have the project website, projectquantum.io, a dedicated site to get you started. Again, it's open source and you can get up to speed very, very fast.

With that, Manish, let me hand it back over to you to drill down into a live demo.

Manish: Thanks, Eran. Let me share my screen here, Eran, let me know if you can see my screen.

Eran: Yes, we can.

Manish: Awesome. As Eran was talking about, this is the site from which you can learn more: there is a GitHub repository, and projectquantum.io is a site where you can actually find out more about it. You can download a starter kit. As he mentioned, everything is open source; of course, there is no license cost. Perfecto and Apexon worked together on this framework basically to enable our customers to jump-start their automation projects, because we see teams struggling with framework-related issues over and over and easily spending six-plus months just setting up the framework elements.

Let me quickly walk you through the framework; I'm using it in Eclipse. Like Eran mentioned, it doesn't have to be Eclipse: if you are used to other IDEs at your company, you can certainly use IntelliJ IDEA or any other IDE. Once you download and install the Quantum-related plugins and jars, it will set itself up inside your project like this. For example, here I have set up the Quantum starter kit project and, as he was mentioning, there is a structure that it creates. Let me quickly walk you through a couple of test cases, starting from what a BDD scenario might look like.

Again, this is a starter test case, so it's not very fancy, but it does the trick of explaining the concepts. This is a very simple, BDD-based test case that exercises calculator functionality. Fundamentally, when you run this test case, it opens the calculator, performs some operations on it, and validates that the operations are successful. Here it's adding three to five and comparing the result.

A few things about the capabilities here: even though this test case doesn't show it being data-driven, the framework allows you to completely take these values out of the BDD scenario and handle them in multiple ways.

You can put them as examples right here, or you can put them in a data file — JSON files, XML files, CSV files, or even a database. It will read data at runtime from the external source. You can also put a lot of metadata around this; here I have an example that shows some of the metadata on the BDD scenario. You can put all kinds of custom metadata on the test scenario, such as giving it a priority, a name, a component or module, what have you.
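To show the shape of what Manish describes, here is a sketch of a data-driven Gherkin scenario with inline examples and tag-style metadata; the tags, step wording, and values are illustrative, not from the actual demo.

```gherkin
@P1 @module:calculator
Scenario Outline: Add two numbers
  Given I open the calculator
  When I add <first> to <second>
  Then the result should be <sum>

  Examples:
    | first | second | sum |
    | 3     | 5      | 8   |
    | 10    | 32     | 42  |
```

The same scenario runs once per row of the Examples table; moving the table to an external JSON, XML, or CSV file keeps the feature file stable while the data grows.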

The benefit there is that you can use that metadata to create test suites at runtime. You can basically say, "I want to run all my T1 test cases of a certain module," and it will just run those. And, as in any BDD setup, these steps are actually defined in Java. As you can see, we are tying the Gherkin steps to the coded steps, and those are the steps that do the operations.

One of the other beautiful features in the framework is that it allows you to late-bind the target device. Typically, what you find is that when you create a script, you hard-code it to work for either iOS or Android, and pretty soon your test cases double or triple just for that reason.

Here, the framework really allows you to script once. This BDD scenario, I'll show you, is actually late-binding resources such as the locators at runtime, thereby allowing you to create one script and, at runtime, tell the system to run it on either iOS or Android. This is how it gets done.

If you look at our resources file, it defines the resources for Android and iOS: what the device-specific details are for that particular resource, and also the locators, because locators are essentially what differs from one platform to another, or from one browser to another, and so forth.

It also allows you to create common locators. If there are locators that are actually shared between two or more platforms, you can create common locators and then specialize them for a specific platform. How you bind it at runtime is like this: you create a configuration file and, from the configuration file, you do a few things. You tell it what resources to use and what test cases to run — this is where you can use the metadata to define, at runtime, the groups that you want to execute — and you tell it what target devices to run on.
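To make the per-platform resource idea concrete, here is an illustrative sketch of two locator files that share the same logical names; the file layout, key names, and locator syntax are assumptions, not Quantum's exact format.

```properties
# android/locators.properties (format and key names are illustrative)
calculator.btn.add=id=com.android.calculator2:id/op_add
calculator.txt.result=id=com.android.calculator2:id/result

# ios/locators.properties — same logical names, platform-specific locators
calculator.btn.add=xpath=//XCUIElementTypeButton[@name="+"]
calculator.txt.result=xpath=//XCUIElementTypeStaticText[@name="Result"]
```

Because the script only references the logical name (for example, `calculator.btn.add`), swapping the resource bundle at runtime is all it takes to run the same scenario on the other platform.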

You can either hardwire the devices — like in this particular example where we have specifically picked the device ID — or you can do it by defining capabilities. That says: pick any iPhone that fits this set of capabilities; and this could be a browser, this could be any string that describes the target capability.

You can do it in a multi-threaded way. You can declare that the tests that qualify here are going to run in parallel mode: it will start multiple test cases in parallel, and you can further define the threads to use, et cetera. Am I missing anything in particular, or should I just run it?
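As an illustration of the multi-threaded setup Manish describes, here is a minimal sketch of a TestNG suite file that runs the same scenarios against two platforms in parallel; the suite, parameter, and runner class names are assumptions, not Quantum's actual ones.

```xml
<!-- Illustrative testng.xml: two <test> blocks executed in parallel -->
<suite name="QuantumDemo" parallel="tests" thread-count="2" verbose="0">
  <test name="Android run">
    <!-- Point this run at the Android resources (locators, device details) -->
    <parameter name="env.resources" value="src/main/resources/android"/>
    <classes>
      <!-- Hypothetical runner class that executes the Gherkin scenarios -->
      <class name="com.example.runners.CucumberRunner"/>
    </classes>
  </test>
  <test name="iOS run">
    <parameter name="env.resources" value="src/main/resources/ios"/>
    <classes>
      <class name="com.example.runners.CucumberRunner"/>
    </classes>
  </test>
</suite>
```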

Eran: I think you can run it, Manish. There are a few questions coming in and I'm addressing them, but yes, you can definitely run the demo.

Manish: Let me run this. What this will do is actually start executing the test case. I have the Perfecto dashboard open here and, just like you saw in the config file, it is going to open multiple devices in parallel, execute the test case, and output the results.

It picks two devices, because we said any two devices, and it picked one Samsung Galaxy and one Apple iPhone Plus. It starts to operate the tests in parallel, picking the locators from the resource files at runtime, like we talked about.

While it is running, let me just jump into the result file that it produces. It produces a result file like this — it's still running, of course, so right now you see what is being run. The result captures all the environment details, so you can see what these tests were run on, automatically. The framework does all of this on its own; you don't have to do it. It also captures the test case and links to the Perfecto reports.

Like Eran pointed out, Perfecto captures all the details of the test case run, with the video and with the specific BDD steps that were actually executed, and it's all brought together. It helps you debug things very easily. We also capture all the steps as part of this report, so you can see the specific BDD steps and the assertions that we have put in each of the BDD steps. If a test had failed, it is configurable to capture screenshots and the related videos.

I'll pause there. I don't think I can do justice to this demo in 10 minutes, but if you want to see a detailed demo, do write to us and we will set up a one-on-one consultation with you on how to leverage this framework properly. Eran, over to you.

Eran: Thank you, Manish. I think that your demo actually addressed a few interesting questions. Let me just present my screen again. While we are sharing with you guys online some of our resources that can help you get started with automation — everything here is a free download for you guys, so we encourage you to take it — I just want to bring a few questions to you, Manish, so we can have a conversation.

Some technical questions came in, such as which languages Quantum supports. My response was that currently this framework supports Java and Cucumber BDD. If there is a need or request to support other languages like C#, which was asked about in the chat panel, we can definitely look into that; we just need to understand the requirements. The same goes for Python and other non-Java test languages. That was one question.

Another question that came in — maybe, Manish, you want to take it and I will support you — how do you recommend, regardless of Quantum, handling dynamic content? You have web pages that constantly change; especially for continuous test automation, that's a challenge. Do you have, from your experience, a recommended practice?

Manish: Yes, there are several things we do. One is definitely to build strong relationships with the developers who are actually changing the code, so that the changes are a bit more predictable. Quantum, as well as several other frameworks, has predefined patterns; there are best practices in automation, such as the Page Object Model, and BDD itself lends itself very well to managing the changes better. The key is to not replicate your code and to reuse it within the application, so that if changes happen, you are changing it in one place and not in too many places.

For example, in the test cases that I showed, we didn't replicate the test case. We had only one test case, scripted once, and now if something changes, I just change the locator and not everything all over.
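Two small techniques that help with dynamic content, in line with Manish's advice, are centralizing each locator in one place and waiting explicitly for content instead of sleeping. Here is a minimal Java sketch; the class, locator, and timeout are illustrative, and the constructor uses the Selenium 3 signature.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class DynamicContentHelper {

    // One shared locator definition: if the page changes, fix it here only.
    private static final By PROMO_BANNER = By.cssSelector("[data-test='promo-banner']");

    // An explicit wait instead of fixed sleeps, so content that renders late
    // does not make the test flaky.
    public static WebElement waitForBanner(WebDriver driver) {
        WebDriverWait wait = new WebDriverWait(driver, 15); // timeout in seconds
        return wait.until(ExpectedConditions.visibilityOfElementLocated(PROMO_BANNER));
    }
}
```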

Eran: Just to build on top of what you said, having a relationship with the people who are responsible for changing the content is definitely one step towards managing the test automation suite. In addition, if you can predict what's going to change based on a pattern, you can definitely build this into your test automation code.

Another question that came in was about how easy it is, or whether it is possible, to convert an existing framework to Quantum. Do you want to take this one, Manish?

Manish: It depends on whether your existing framework is using the technologies that Quantum is using. For example, if you're doing anything around Java, leveraging the code that you have already written is easy, because Quantum is also Java-based; Java still happens to be the predominant language for coding automation. Layering BDD and all the other best practices like page objects, et cetera, on top of it becomes easier, and reuse of your existing code assets becomes easier. If it is something else entirely, then probably the better route is to either start fresh on Quantum or identify the best practices in the framework you are already using.

Eran: We have two more minutes, so I will take the last question. There is a very long question about the future of mobile testing with the large number of devices that you need to test, and UI testing versus non-UI testing. It's a very philosophical question: where is mobile testing going?

I think that it's not going to an easy place, if you ask me, because what we are seeing is that there is no such thing as mobile testing anymore. Everything today is digital. Every application today can be accessed from a mobile device, from your desktop, from a home assistant like the HomePod or Amazon Alexa.

Every application has its own extension to a different screen or a different device. At the end of the day, you need the ability to build your automation in a way that can be tested against all these platforms. Look at what Google is doing: we mentioned Puppeteer as a way of addressing web testing for developers through headless testing, and I'm also seeing Google invest in progressive web applications.

If you go today to google.com — I don't know how many of you have noticed, and it's a very prominent and popular website — it has a built-in microphone for voice input and a camera for image input as well. Google believes, as I've just described, that everything is kind of mobile today, even the web, and it's starting to bring more capabilities and sensors into the web environment.

To the question the guys online are asking about the future of mobile testing, I would say it's the future of digital testing, and it involves mobile, web, IoT, home assistant devices, connected cars; everything is actually one world, one environment for you to address if you want to be on the digital winners' side.

Again, I see plenty of questions still coming in as we speak, but we'll address them offline because we've reached the top of the hour.

With that, I would like to thank everyone who joined. Thank you a lot, Manish, for being here with me, running the demo, and supporting the content.

Again, we have some very practical advice and resources for you to download if you're interested, and we look forward to seeing you in the next webinar. Thank you, Manish, again.

Manish: Thank you, Eran.