See how a company used AI-powered testing to meet unique challenges
Craig: Good afternoon and welcome to our webinar, "The Future of Test Automation is AI – Are You Ready?" My name is Craig Sparks and I’ll be your host today. Today’s presenters are Ashok Karania and Apurv Doshi. Ashok is an entrepreneurial leader with more than 16 years of senior management experience in strategy, sales, alliances, and client relations.
He works with companies from Fortune 500s to startups to actualize new ideas in digital transformation and accelerate innovation. He has delivered success stories in the high-tech, fintech, telecom and retail industries using a consultative, solutions-driven approach. He’s also spearheading the Global Alliances and strategic partnerships here at Apexon. Apurv Doshi is a creative thinker who loves to leverage technology in solving real business and life challenges.
Over the last 14 years, Apurv has crafted unique solutions, developed cutting-edge software products and led quality assurance initiatives. He has a strong understanding of the finance, healthcare, and telecom domains. He’s heading up the Apexon labs in India and the innovation cell of Apexon. This highly geeky solution architect is currently focusing on deep learning, data science, and machine learning.
He’s passionate about sports, stock markets, and innovation. Before we begin, let us review some housekeeping items. Firstly, this webinar is being recorded and will be distributed via email to you, allowing you to share it with your internal teams or watch it again later. Secondly, your line is currently muted. Lastly, please feel free to submit any questions during the webinar by utilizing the chat function at the bottom of your screen. We’ll round up all questions towards the end of the webinar. We will try our best to keep this within a 45-minute time allotment. At this time, I’d like to turn the presentation over to Ashok.
Ashok: Hey, Craig, thanks for the wonderful introduction and the opportunity to share our insights and experiences with the team. Hello everyone, and a very good afternoon to you on a beautiful day. Artificial intelligence is an interesting topic which is transforming businesses everywhere, and I’m here today to talk with you about how we apply it in the testing world.
I also have with me my co-host, Apurv Doshi, who has implemented this successfully in many organizations. The idea today is to see how we can apply it in real-life scenarios. The first part is going to be a live demonstration; then, how do we implement this in your organization? What are the stepping stones to leverage the benefits of AI in the testing world? AI is everywhere. AI is changing whole business areas.
There’s no part of business, or even of our personal lives, which is not touched by artificial intelligence. Is it hype? Is it used or abused, or is it something real? What I believe is that artificial intelligence can definitely augment human endeavors and create greater impact in all aspects of life, including business. It’s no different in the testing area. Now, if you look at artificial intelligence in testing: when I was speaking with various friends, colleagues, potential clients and customers, our conversations always hovered around how the overall testing world has been transforming, how the digital revolution is sweeping changes across different business processes.
The digital revolution has created new sets of challenges, and the VP of engineering, the VP of quality, the delivery director are looking at delivering at scale. There is high pressure on time to market. The new-generation interfaces, the new devices, the new digital touchpoints to interact with customers bring challenges of their own. DevOps and the agile way of working are a way of life nowadays, and with releases every two weeks, there are tremendous pressures on the digital quality areas. Now, to cope with those two-week releases, your manual efforts are not going to be sufficient.
The huge backlog of test cases, the huge quantum of data and also a limited set of existing tool knowledge create a perfect scenario to introduce artificial intelligence. In my experience working with mid-size to Fortune 500 organizations, AI and machine learning definitely bring immense benefits: they cut your backlogs, optimize the overall testing process and ensure a better product for your end customers.
There are, of course, areas where artificial intelligence in testing is very essential; it’s not an "if" question but a "when" question for many of my colleagues and friends in the digital world out there. We’ve seen artificial intelligence provide very insightful knowledge from historical data, and machines add a depth to spotting those trends and those defects; in this way, they augment the cognitive abilities of human beings. They can analyze complex data. They can predict failures. They can identify redundancies. In all these areas, where time to market and time to release are of immense importance, this ability of machines is very beneficial for us.
What has been our approach? Over our last 15 years of experience, we have understood what the areas would be where artificial intelligence and machine learning would transform testing. It is the area where you can combine different tools, knowledge and process efficiencies to bring out meaningful results and augment the quality processes.
Let’s look at the business side. Whenever I talk to the VPs of business, when I talk to the digital leaders, their demands are always the same: Ashok, how do I get the product to market faster? How do I respond to the ongoing changes in the beta version? I have to worry about the bots. I have to worry about the newly available devices. Can my automation framework be responsive to this? I don’t have enough time to scale my team. Is there an intelligent way to look at this type of testing? All of these are important questions. We have seen that artificial intelligence can respond to specific areas, to respond to those business needs.
What we have called Astute is nothing but a collaboration of three areas: accelerators, processes and services. It’s a combination of three things which will help you bring artificial intelligence benefits into your testing processes. Let’s look at some of the accelerators in detail, and after this slide, we will go into the live demonstration because the proof of the pudding is always in eating it, and we would like to show you how artificial intelligence applied to testing can bring benefits to your organization.
Now, if you’re looking at my screen, the process is very simple. The approach is very simple. We have our test data in different disparate systems. We have information in different tools, different tool sets, sometimes in Excel sheets. The whole thought process is to bring everyone onto a common platform, and that’s where we use migration from the source framework to the target framework; it’s an important exercise by itself.
The second area is optimizing the test cases. Let me tell you, with many of the customers we have started working with, within three weeks the test optimization brings so many benefits that they’re saying, even if you don’t complete the whole cycle, this alone is an excellent result for us. We’ve been working with banks, we’ve been working with retail organizations; over the years, their test suites have accumulated duplicate test cases and different steps which are related in a way.
The test case optimization bot, or the test optimization bot as we also call it, de-duplicates those test cases so that it helps you componentize and optimize them. We have seen that 25 to 35 percent, almost 30-35 percent, of the test cases are redundant, and in a way this helps you improve your release cycle by one-third; it helps you save your testing time and your processing by one-third. No wonder those VPs of quality are happy that, just by starting on this road to AI, a 30% efficiency gain comes out of the box for them.
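The deduplication idea behind such a bot can be sketched in a few lines. This is a minimal illustration, not the actual bot's logic: the test-case names, the Jaccard measure and the 0.7 threshold are all assumptions for the example.

```python
# Minimal sketch of the deduplication idea: flag pairs of test cases whose
# wording overlaps heavily. Test-case names, the Jaccard measure and the
# 0.7 threshold are illustrative assumptions, not the actual bot's logic.
def tokens(text):
    return set(text.lower().split())

def find_duplicates(test_cases, threshold=0.7):
    """Return id pairs of test cases whose token sets overlap heavily."""
    pairs = []
    for i in range(len(test_cases)):
        for j in range(i + 1, len(test_cases)):
            a, b = tokens(test_cases[i][1]), tokens(test_cases[j][1])
            jaccard = len(a & b) / len(a | b)  # overlap / combined vocabulary
            if jaccard >= threshold:
                pairs.append((test_cases[i][0], test_cases[j][0]))
    return pairs

suite = [
    ("TC-1", "verify user login with valid credentials"),
    ("TC-2", "verify user login with valid credentials on mobile"),
    ("TC-3", "verify bill payment with saved payee"),
]
print(find_duplicates(suite))
```

Flagged pairs become candidates for merging into shared, componentized steps, which is where the one-third reduction in the suite comes from.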
After this bot, which is an accelerator in itself, we look at the test environment setup and the way we are looking at configuration, setting the right execution environment. We look at creating the test automation platform and we put automation accelerators on top of that; this is the area which you’ll see in more detail as we go on to the live demo. We also validate the coding standards against the acceptance criteria, and we also have an acceptance bot, a bot which makes acceptance an automated process for you.
Finally, the most important area that we are looking at is prediction and prescription. At the end of this whole cycle, we will be in a situation where we can predict future defects. Based on the last eight cycles, it can determine that these are the defects which could emerge out of the test run, and based on those predictions, we can take corrective actions. This artificial intelligence will also give a prescription: what are the areas of testing which need more focus? Maybe I need to spend more time on my UX, maybe I need to spend more time on the Android handset, maybe I need to spend more time on a particular module.
It also tells us what type of testing is required and where testers should be spending more time, and hence it helps you analyze the results and make those predictions. Again, this is a powerful tool which helps you in your decision making. If anybody has a question, please feel free to put it in the chat box; otherwise, we definitely have a Q&A session at the end. At this point in time, I would like to bring this into more action for you and see how artificial intelligence for testing can be implemented in your organization. Let me invite my co-host, the innovator of Apexon, Apurv; he’s going to guide us through the next 15-20 minutes by giving us the context and then the live demo. Over to you, Apurv.
Apurv: Ashok, thank you so much for that wonderful background and context. Before we start, I’ll set some context for the predictive and prescriptive parts. Before we reach that, in the software industry, how do you define quality software? It’s something like: a product should be bug-free, it should be delivered on time, it should meet the requirements and also it should be maintainable. These are the parameters, namely infrastructure, effort, time and requirements, which drive this quality. Now, the problem is that all four pillars which drive software quality do not work in a homogeneous way.
Let’s assume that our requirements are constant and we want to minimize all three other pieces: effort, time and infrastructure. Unfortunately, this is not possible. If you want to keep effort and time on the lower ground, then you need really high infrastructure. If you want to have effort and infrastructure in the lower bucket, then you need a large amount of time. In the same way, if you want to have time and infrastructure in the lower bucket, you need to spend a really high amount of effort. This triangle is basically the trick: how do we convert that into regular practice so that we spend the least in each of the three and still get the desired results? This can really happen with the help of the combination of data science and machine learning. The demo which I am going to show you here is basically divided into three parts. The first part is the data pipeline, where we will fetch live data from JIRA; however, it could be fetched from any system like ALM or Lotus Notes or Basecamp or whatever particular software other industries are using. We will first fetch the data; if it is coming from different sources, we need to aggregate the data, understand the domain and prepare the data. The reason is that if the data which we feed to the machine is not of a reasonable quality, we will not get proper output; it’s a simple rule, garbage in, garbage out.
To make it clean, we need to detect the anomalies, detect the imbalance in the data, clean the data, split it into test data and train data, train the model and evaluate the model against the testing data. Once you get the desired amount of accuracy, then you move ahead to the production data. This is the typical data cycle; once this data is there, we will give it to the machine learning model to predict the risk of the modules which are present in our product.
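The train-and-evaluate loop described here can be sketched with a plain holdout split. This is an illustrative skeleton, not the actual pipeline; the record format and the 80/20 split fraction are assumptions.

```python
# Skeleton of the train/evaluate cycle: hold out part of the cleaned data,
# train on the rest, and measure accuracy on the holdout. The record format
# and the 80/20 split are assumptions for the example.
import random

def train_test_split(records, test_fraction=0.2, seed=7):
    """Shuffle labeled records and split them into train and test sets."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)  # fixed seed keeps runs repeatable
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

def accuracy(predict, test_set):
    """Fraction of held-out records a model labels correctly."""
    hits = sum(1 for features, label in test_set if predict(features) == label)
    return hits / len(test_set)

# Ten toy records standing in for cleaned defect data.
data = [(i, "low" if i < 8 else "high") for i in range(10)]
train_set, test_set = train_test_split(data)
print(len(train_set), len(test_set))
```

If the measured accuracy on the holdout is not acceptable, you go back to the cleaning and training steps rather than moving on to production data.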
Then you can also predict the total number of defects in the upcoming version. Combining these two leads us to the prescription which Ashok has already mentioned: what kind of focus should we put in to get the highest amount of accuracy with the least amount of effort? I think we have enough context now. Let’s move to the actual demo.
Ashok: Correct. I think Vinit has a question: how can we use AI/ML in generating the test cases themselves? Maybe we can also look at answering that as we go into the live demonstration.
Apurv: Sure, Vinit, I will come to the answer once we finish the demo part. In the first step, I will enter the solution that I’m going to present. Then, as we have seen in the diagram, we will start the data pipeline, or data cycle, so that we can generate a reasonable amount of quality data that we can feed to the machine learning model and get the result. In the first step of the data pipeline, we need to aggregate the data. The data which you’re going to see right now over here is the product data for one of our internal products, which we call QMetry, a test management tool. The data is fetched from there.
In the first step, we are going to fetch the different software versions we have of this product. Those product versions are listed over here, and the reason to do this is that, because we’re following the agile methodology, we are releasing after every week, or sometimes two times in a week. That cadence is not sufficient to generate enough data per release. So in the first step, we will condense all the minor and micro releases into a major release and go ahead. To do so, it’s a very simple interface: I’m selecting all the 7.1.* versions and associating them with 7.1.
To save time for all of us, I have already done this exercise for quite a few more versions, but for now I will do it just for 7.1 and 7.4; for the rest, I have already done it. I’m submitting it, and this is what the results look like: all the 6.2.*.* versions are marked with 6.2, all the 6.4.*.* versions are marked with 6.4, and so on. This is how we condense all the minor and micro versions into the major version. That leads us to sizeable data for which we can make predictions. Now, when I select these versions, it will actually go to the JIRA system and start fetching the data for the releases.
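The condensing step amounts to truncating each minor or micro release number to its major version, then counting records per bucket. A minimal sketch, with illustrative version strings rather than the product's real releases:

```python
# Minimal sketch of the condensing step: truncate each minor/micro release
# number to its major version so every bucket has enough data to learn from.
# The version strings here are illustrative, not the product's real releases.
from collections import Counter

def condense(version, depth=2):
    """'6.2.1.3' -> '6.2' (keep the first `depth` numeric segments)."""
    return ".".join(version.split(".")[:depth])

# Versions attached to individual bugs, as they might come back from JIRA.
bug_versions = ["6.2.0.1", "6.2.1.3", "6.4.0.0", "6.4.1.2", "7.1"]
per_major = Counter(condense(v) for v in bug_versions)
print(per_major)
```

After condensing, each major version carries enough defect records for the later training steps to be meaningful.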
However, this is possible with any other system, like HP ALM or Lotus Notes or Basecamp or any other ALM system that a particular enterprise is following. As we select these releases, this solution starts fetching all the data logged against these releases. Now, in the solution, we have created two parts. The first one targets organizations which are process-wise immature: they have just started logging defect data into the system, and they are not storing any other data, like requirements or stories or epics, in JIRA or any ALM system.
For them, we will just predict which modules in your application are low-risk modules and which are high-risk modules. However, in the second part, which targets process-wise mature organizations, we will not only tell them which modules are of low risk and high risk, but also that, if you have planned, say, x amount of story points, then you should anticipate some number of bugs; it could be something like 10 to 15 or 15 to 20, based on some of the parameters that are associated with that particular module.
These two aspects we are going to cover for the process-wise mature organization. Right now, the system is fetching the data from JIRA for all the releases which we had selected. Here, the data is available for us. The thing that you are looking at over here: it has fetched all the data, plus it has started cleaning up the data. What I mean by clean-up is, we may have issues in the system for which no version is tagged. The reason could be anything: either it is a skip in the process, or it could be a customer query or a customer-logged defect from a ticketing system like Zendesk, where the customer is not really aware of what should be put in this particular field.
All these things are studied by the system, and it is saying: for the bugs which have no version attached, this is what I am suggesting. The version column over here is showing that these should all be marked with 7.02, some with 7.01 and some with 6.1. If you do not agree, you can go in and override the suggestion. The reason is that here we are essentially playing with the dates and the different aspects of the bug status, from which we are deriving which version a bug should be attached to. If you want to go ahead with it, you go ahead directly. If you want to override it, you select the required version and then go ahead.
In the interest of time, we will go ahead. Here, the first level of sanitization is done. We had aggregated the data and started preparing and cleansing it, and first-hand we are identifying the anomaly part. This part is saying that version 6.02 and version 6.03 are skewing the data, and hence my result may go either way. The reason is that the number of bugs I have received for the other versions, like 6.04/5/7/8/9/10 and 7.0/1/2, is far off compared to 6.02 and 6.03. Rather than getting our data skewed, we will exclude that data from our overall broader perspective and go ahead without it.
Now, the second data cleansing part that happens over here is that a bug is of no use to us if it is not tagged with any component. By component, I mean the particular area of the application where the bug has been generated. If we take the typical example of a banking system, the different modules could be user management, bill payment, check scanning and so many more. These are the bugs where the components are not noted; again, the reason could be a skip in the process or immaturity of the organization regarding the process. Here, the algorithm has studied the data for which we have a description and summary available, and it also knows which components are tagged to that data. Now, I have a few sets of data for which I have a summary and a description, but I do not have a component. The first set of data is used for the training purpose, and the second set goes through the classification process. This is a typical example of machine learning with the help of a text classification algorithm. This is how, with the help of the summary and description, the algorithm has identified the component. If we are really comfortable with it, we can go ahead directly.
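A toy stand-in for this text-classification step: score an untagged bug's words against word counts learned from already-tagged bugs. The banking-style summaries and component names here are hypothetical, and a production system would use a proper classifier (for example naive Bayes over TF-IDF features) rather than this raw word-count scorer.

```python
# Toy stand-in for the text-classification step: score an untagged bug's
# words against word counts learned from already-tagged bugs. Summaries and
# component names are hypothetical assumptions for the example.
from collections import Counter, defaultdict

def train(labeled):
    """labeled: list of (summary_and_description, component)."""
    model = defaultdict(Counter)
    for text, component in labeled:
        model[component].update(text.lower().split())
    return model

def predict(model, text):
    """Pick the component whose learned vocabulary best matches the text."""
    words = text.lower().split()
    scores = {c: sum(counts[w] for w in words) for c, counts in model.items()}
    return max(scores, key=scores.get)

# Bugs that already carry a component tag form the training set.
history = [
    ("cannot scan cheque image upload fails", "check-scanning"),
    ("scanned cheque rejected as blurry", "check-scanning"),
    ("bill payment to saved payee times out", "bill-payment"),
    ("scheduled bill payment not executed", "bill-payment"),
]
model = train(history)
print(predict(model, "cheque scan stuck at upload"))
```

An untagged bug gets the component whose historical vocabulary it most resembles, exactly the summary-plus-description signal described above.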
If we want to, we can modify it, because the algorithm is not 100% foolproof. That also depends on how well we have followed the processes in the existing data; if it is sufficiently mature, we may have higher accuracy, and if it is not, then we may need to compromise eventually. In the interest of time, I am not overriding any of the components, and I am going ahead. Again, coming over here: it is now doing the data balancing, checking whether my data is balanced across each and every module, and it is showing that my Automation, QuickStart and Admin modules are really skewing the data one way or the other.
To get better results, I will keep those modules out of our prediction part, and now my resulting data will be used to train the machine learning model, which will tell me which of the modules are low-risk and which are high-risk. The results are in front of us: core product, execution screen, generic, integration issues. All these red-colored modules the algorithm is identifying as high-risk, and the rest are low. It’s again AI and machine learning in place: a typical classification approach where we are bisecting our modules into a low-risk bucket and a high-risk bucket.
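The low-risk/high-risk bucketing can be sketched as a threshold on a historical defect-density signal. The module names, rates and median cutoff below are illustrative assumptions, not the trained classifier shown in the demo:

```python
# Minimal sketch: bucket modules into low risk and high risk by comparing
# each module's historical defect density against the median. Module names
# and rates are illustrative assumptions, not the trained classifier.
def classify_risk(module_rates, threshold=None):
    """module_rates: {module: bugs per story point historically}.
    Modules at or above the cutoff are flagged high-risk."""
    rates = sorted(module_rates.values())
    cutoff = threshold if threshold is not None else rates[len(rates) // 2]
    return {module: ("high" if rate >= cutoff else "low")
            for module, rate in module_rates.items()}

stats = {"core": 2.5, "export": 1.8, "views": 0.4, "reports": 0.6}
risk = classify_risk(stats)
print(risk)
```

The real model learns the boundary from many features rather than a single rate, but the output shape is the same: each module lands in a red (high) or green (low) bucket.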
For 7.02, we do not have the actual results available in the system yet, but for the past releases we do. This is what the machine predicted, this is what the actuals are, and this is the comparison. The bigger icons shown over here are where our prediction came true, and the smaller ones show where the prediction we made did not match real life. So requirement, logging, JIRA, export and core product we identified as high-risk modules, but eventually we received a low number of bugs, while Views was the module we identified as low-risk but it eventually turned out to be high-risk, and so on and so forth.
We can go and check the results for all the past versions as well, and this is the data for version 7.02. Now, this gives us a really better idea: if I am going to release version 7.02, all the modules which are in red, we should give high importance to them, and all the green ones we should give low importance to. Now, coming back to this part: this is the one intended for the process-wise mature organizations. They are not only logging the defects into the system, but they are also logging stories into JIRA or any ALM system, and those are also tagged with the module.
They are also giving the story points, and at the same time, when they are committing the code to the system, they are also putting in a proper comment that this particular commit is a resolution for ticket 101, and what kind of changes they have made. While following this approach, we are not only fetching the JIRA data, but we are also fetching the repository metadata.
The thing to make sure of over here is that we are not fetching any of the source code. We are just fetching the metadata of the source code, and that metadata really helps us identify how many files in total are associated with a particular module, how many commits in total were done just to solve a bug, and how many commits were done to do development tasks or feature enhancements. It also gives us an idea of how many developers were involved in the development of a particular module.
It also tells us how many testers were involved in the testing of a particular module, and all these things help us identify two parameters. The first one is the complexity of the module: what is the amount of complexity that makes up this module? The second one is the stability of the module: in the time since the product started and the modules have evolved, what is the stability level, whether they are really stable, moderately stable, or unstable.
Some modules are tricky; they will always play games with us, and you will always find bugs whether you make a small amount of changes or a large amount. A few modules are heaven-sent: whatever number of changes you make, they will still remain stable. Then there is the complexity of the change you are going to put in, the new enhancement you are going to do in the particular module: whether it is highly complex, medium complex, or of low complexity. All these parameters are derived for the module by fetching the commits from the repository system.
Then, with the total number of bugs tagged and the total number of story points assigned to a module, it will try to generate the bug range for each and every module, and that range will really help any organization align their resources in a much better way. If I say module one is taking 20 story points in the upcoming release and the number of bugs I may receive is somewhere between 30 and 40, while for a second module I am going to implement the same 20 story points but the bugs I may receive are just about eight to 12, then definitely the first module is my first interest. That is what the machine is saying.
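The bug-range idea can be sketched as a historical bugs-per-story-point rate with a band around it. The figures and the 25% spread below are illustrative assumptions; the real model also folds in the complexity and stability features derived from the repository metadata.

```python
# Minimal sketch: project a bug-count band for a planned release from one
# module's historical bugs-per-story-point rate. The release history and
# the 25% spread are illustrative assumptions.
def predict_bug_range(history, planned_points, spread=0.25):
    """history: list of (story_points, bugs) per past release.
    Returns a (low, high) bug-count band for the planned story points."""
    total_points = sum(points for points, _ in history)
    total_bugs = sum(bugs for _, bugs in history)
    rate = total_bugs / total_points          # bugs per story point
    expected = rate * planned_points
    return (round(expected * (1 - spread)), round(expected * (1 + spread)))

past = [(20, 35), (18, 30), (22, 40)]  # (story points, bugs) per release
band = predict_bug_range(past, planned_points=20)
print(band)
```

A module whose band sits well above another's, for the same planned story points, is the one that deserves the testing focus first.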
The same thing it will now do with the prediction over here; let’s give it a couple of minutes because it is a really challenging and computationally heavy process. Before we have that result available, I think I can answer any questions. If you have any questions meanwhile, you can put them over there and I can answer. Vinit, your question is: how can we use AI and ML in generating the test cases? Sure. The first thing is, you need to be really cognizant about the processes. How it actually helps you out is that, when you’re committing a particular change into the code repository, you make sure that a proper ticket is attached to it.
Let’s consider your ticket number is 0101, and that 0101 ticket is tagged with the proper components and the affected version: I need to address this ticket in version 1.0, and the module associated with it is named M1. Once these details are available, you can easily check your past results: when this particular module M1 was changed, what number of defects did I receive, and were they blockers, critical, major or minor? Then, looking at what other test cases I have present in my system, you can easily identify that these are the test cases which are high priority.
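The prioritization logic described here can be sketched by weighting past defect severities per module. The severity weights, thresholds and module names below are illustrative assumptions, not the actual system's values.

```python
# Minimal sketch: weight past defect severities per module, then map the
# score to a test-case priority tier. Weights, thresholds and module names
# are illustrative assumptions, not the actual system's values.
SEVERITY_WEIGHT = {"blocker": 8, "critical": 4, "major": 2, "minor": 1}

def priority(score):
    """Map a module's weighted defect score to a test-priority tier."""
    if score >= 10:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# Defects received in past releases, per module (hypothetical).
past_defects = {
    "M1": ["blocker", "critical", "major"],  # heavily hit module
    "M2": ["major", "minor", "minor"],
    "M3": ["minor"],
}
priorities = {m: priority(sum(SEVERITY_WEIGHT[s] for s in sevs))
              for m, sevs in past_defects.items()}
print(priorities)
```

When ticket 0101 touches module M1, its tier tells you which bucket of test cases to pull into the run for that change.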
These are the test cases of medium priority and these are of low priority, and based on that, you can select your test cases for that particular change. However, if you are thinking that an AI/ML system can automatically generate test cases for you, that is still a big, distant dream for all of us. We are trying to see whether, just by looking at the change in the source code, we can generate the test cases, but as of now we have not yet reached that level; we are working towards it. Any more questions? I think, since it’s taking more time, let me switch to the product where I have kept the results ready, and I can take you over there and directly show the results.
Ashok: We’ll also be covering, Vinit and others, how we can start implementing this, and what the baby steps are towards implementing AI in your organization. I’m glad it helped. Thank you.
Apurv: This is how you would have seen the results. Here, I have taken the data up to 7.4. Again, now I’m on 7.3. The stick shows the prediction and the triangle shows the actuals. For the Test Suite module, we predicted that you may receive 40 to 60 bugs, but eventually that turned into 81 bugs. For the Test Case module, it is quite well in range. Same for Report. For Requirement, the actuals are just a bit short of what we predicted. Issues is quite well in range, same for Code Refactoring and Integration. This is how you can predict the bugs at each and every version.
Now, once these results are ready, we can show the prescription part. In the prescription over here, you can see the complex core modules with sizeable work planned, the risk level and the stability level. You are getting the prescription that functional testing with high, medium and low priority test cases is required, and for all these modules an early regression cycle is also required. We can go into the past and see the same prescription as well.
Here, the Requirement module was a really unstable one, and here it is also suggesting that you should allocate QA Eng 1. It has basically checked all the tickets and made sure that the resources marked over here have identified some really crucial or critical bugs in the past, and hence you should assign the testing of the Requirement module to those resources. Coming back to the latest one, which is 7.4.
Ashok: David, that’s a very good question and we’ll come to that. Thank you.
Apurv: Yes, David, I will come to it. Based on all of that, it also gives us the optimized test configuration. How is it actually generated? The prescription that you see over here will be translated onto your test cases: while you are creating your automated test cases, you need to tag them as high, medium and low priority test cases. Let’s consider you have the module Test Case; when you are automating this module, you will tag each case as test-case-high, test-case-medium or test-case-low. You bucket your test cases into three buckets, and when I open it, it gives me a better idea of the health of my module: that the Integration module is really a core module of the business.
Sizeable work is planned; this is derived from the story points of the upcoming releases. In the past, there has been high risk associated with it. It is stable; the reason is that the name of the column is "unstable" and it is giving a green mark. The changes which we have planned in the upcoming releases are really complex. This leads me to the conclusion that I should run all high, medium and low test cases in my daily regression cycle. When we go further, you will see a more stable module, the one in green; for it, you only have to run the high-priority cases and there is no need to run the medium and low test cases.
If I press this Start Test Run, it will actually trigger our test execution system, which will start running the tests based on the configuration that we are providing. I will definitely not go for it now, because it is not intended as of now for our system to trigger the test run. In a nutshell, starting from the data fetching up to generating the optimized test case configuration, we can leverage the machine and take advantage of it. We have one more question from David: at different moments, you ignored or deleted sets of data that were skewing the result. Isn’t it a risk to actually introduce a bias in the analysis and miss something? Definitely. The answer over here is that you usually do it in two phases: first without that skewing data in the picture, and second, you still consider the data which is skewing the system and you analyze the results. Now you have two sets of results; you compare both hand in hand, and then you check in what way the data is tilting things. Sometimes, when you’re ignoring a particular version’s data, you’re ignoring quite a big chunk, provided the skewing was happening on the higher side.
Sometimes the skewing is happening because you don’t have a sufficient chunk of data, and then it may pull things to the lower side. It is better that we get the results with all the homogeneous modules first and then introduce all the ignored modules and check the results. Definitely, by skipping data we are introducing a bias in the analysis and we may be missing something, but the hand-in-hand comparison can help you get to the real situation. The reason that I skipped the data here is that I knew the nature of the data in versions 6.02 and 6.03.
We have like the data of really of small of size, and hence, while my learning become general clean up, all the models in the version 6.02 and 03 will be started turning into the low risk model that is on your set, it is not having sufficient chunk of data and just to avoid that situation, we have deleted the data as of now, but, yes, side by side analysis will help us more. Anymore questions? Okay. Thank you and, I will now, again, hand it over the control back to the presenter.
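The two-phase comparison Apurv describes can be sketched as follows. The `classify_risk` function here is a toy stand-in for the real model (it deliberately shows how sparse data makes a module look artificially low-risk); the record shape and names are assumptions for illustration.

```python
# Illustrative sketch of the side-by-side bias check: classify modules once
# with all data and once with the sparse versions excluded, then diff the
# two result sets to see which modules flip risk level.
from collections import Counter

def classify_risk(records):
    """Toy stand-in for the model: modules with few records look low-risk."""
    counts = Counter(r["module"] for r in records)
    return {m: ("low" if n < 3 else "high") for m, n in counts.items()}

def side_by_side(all_records, excluded_versions):
    filtered = [r for r in all_records if r["version"] not in excluded_versions]
    with_all = classify_risk(all_records)
    without = classify_risk(filtered)
    # Report modules whose risk level changes when the data is excluded.
    return {m: (without.get(m), with_all[m])
            for m in with_all if without.get(m) != with_all[m]}
```

Any module that appears in the diff is one where the exclusion decision matters, which is exactly what the hand-in-hand comparison is meant to surface.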
Ashok: Thank you very much, that was a very nice demonstration, and thanks for answering the questions. Of course, each organization will have its own set of aspects to consider, but the overall approach has delivered a lot of benefit for the various organizations we have worked with. We have worked with about 10 to 12 different companies across North America and Europe, in fintech, payments, healthcare, banking, and of course telecom, and the results have been great. We have seen overall efficiency gains of 30% to 40%, and that is significant savings in today's world.
Looking now at the success stories, and since we have about 15 minutes left, I want to emphasize an important case study we did recently with a leading telecom giant based in Europe. Theirs was a very unique case, and I bring it up because if we could do this for such a unique case, it can then be applied to typical banking, retail, pharmaceutical, healthcare, or airline situations. This is a large telecom company that brings in lots of devices and a lot of application releases.
They test more than a hundred devices, bring out new applications every year, and have more than 500 maintenance releases. You can imagine the testing effort involved and the kind of testing teams required. There is, of course, a huge testing cost, a few million euros every year, and long testing cycles. We applied our approach to the device testing and app testing areas. Their data was also not very organized, so we helped them organize, sanitize, and clean the data, and then bring it onto the right framework.
After that, we rolled out the very tools and accelerators you saw: the test optimization, the acceptance board, the prediction engine, the prescription engine. We were able to get good results. We generated predictions for more than 30 testing areas on the device side; our risk prediction accuracy was almost 80%, and our test case failure prediction was about 80%. Because of this, we were able to reduce testing costs by more than 25%, and when the annual cost is nearing double-digit millions of euros, a 25% saving is a good saving. We also helped them with classification: we analyzed more than 12,000 rows of test cases, which is a huge set of scenarios, and looked at more than 100 screens across different testing areas. Based on this, they were able to focus on the right testing activities and the right modules, and as I said, the testing efficiency gained meant quite a few million euros were saved for them.
We are also working with a large national bank, one of the top three, and that engagement is still in progress. We have been able to remove more than 30% of their dead test cases, which trims a lot of the bulk. We have worked with digital healthcare organizations and payment companies as well. In the interest of time, I will cover those later, and of course our teams will share this presentation and other materials so you can come back to us.
Now, let's look at how to implement this. For the next six or seven minutes, I will talk about how you implement it in your organization, whether you are a healthcare organization, a retail organization, or something unique. The first and most important thing to understand is that AI in testing, or test automation, is first a mindset. It's not about tools, it's not about technology; it's about getting the right approach and thought process. It is about respect for data, managing the data in the right fashion, and then using technology to augment specifically those areas that are not well suited to manual testing.
From a business perspective, the ideal candidates to start such a project with are your business-critical applications. If you have e-commerce or m-commerce applications, or customer service applications and websites, those are the critical areas to look at, because that is where the maximum interaction is, the maximum pressure is, and where you require a high level of agility and a model of frequent releases. In fact, I am working with some customers who are doing two releases in each cycle. These are the areas where the best ROI is achieved from the business perspective.
From a project perspective, if you have projects with delayed releases that you are not able to resolve, an area where you require better, more efficient processes, those are ideal projects. Areas with a high amount of resource allocation and huge infrastructure needs are also ideal candidates. When we started this Astute approach, we also thought that projects with a high level of automation scripting would be the ideal candidates. But having worked with some recent customers, we have seen that even when the level of automation is low, we are able to implement this and bring the desired results.
Of course, projects where you have good-quality data are very good candidates for implementing this. We have seen the cycle take a few weeks; within a quarter, or a few months, we are able to deliver the results of this AI testing approach. Usually it starts with discovery, where you identify the goals of your project: which apps, which devices, which features you are going to look at, and whether your data sources are in place. Many times customers ask, "I don't use HP ALM; many of my test cases are in Excel or some tool that is not widely used in the industry."
As long as we can get the data in CSV or any other standard format, that is perfectly fine. Then we do the data pipeline, data mobilization, and model tuning. With one recent telecom company, and also a payment company in Germany, we were able to get the desired results within six weeks, through to model tuning. Then, of course, the big step is to scale this, because at the end you want it to be part of your continuous testing, part of your DevOps. It should be part of your overall existing infrastructure, and that is where scaling and integration with your automation framework come into the picture, because this is not a hobby project; it is supposed to change your way of life.
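As an illustration of the CSV intake step, here is a small Python sketch that normalizes a test-case export (for example, an Excel sheet saved as CSV) into a common record shape before it enters the data pipeline. The column names are assumptions for the example, not Apexon's actual schema.

```python
# Hedged sketch: load a CSV export of test cases into uniform records.
import csv

def load_test_cases(path):
    """Read a CSV with TestCaseID / Module / Priority columns (assumed
    names) and normalize whitespace and casing for downstream analysis."""
    with open(path, newline="") as f:
        return [
            {
                "id": row["TestCaseID"].strip(),
                "module": row["Module"].strip().lower(),
                "priority": row["Priority"].strip().lower(),
            }
            for row in csv.DictReader(f)
        ]
```

The point is only that the source tool does not matter: once the export is in a tabular format, the pipeline can consume it.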
We have seen that customers pre-Astute and post-Astute are different, because after Astute the whole quality department's thinking and approach to change evolves. Finally, there is maintenance, where we make it a way of life and scale it from one business unit to other business units, from one team to other teams, and keep getting the desired benefit. AI in testing is now becoming a reality, a way of life in today's DevOps and Agile world, because of the sheer volume of data. Quality requirements can no longer be met by manual testing approaches alone; you need that intervention, you need to leverage the power of artificial intelligence and grow it in your organization. Thank you for listening to us, and we will be open to further questions.
Craig: Thank you, Ashok. At this time, we would like to get your questions answered. As a reminder, you can still ask questions by using the chat function at the bottom of your screen. A couple more have come in. "Thanks for the demo. The solution demonstrated, is that your IP, and how do we go about using it?"
Ashok: Thanks, I'm glad you liked the demo. The solution we showed is a mixture of three things: processes, accelerators, and of course the dream team working on it. It is something we happily hand over to our various customers as part of the overall service engagement. You effectively get a perpetual license to use the different accelerators and the tool. We have also made some of them open source, especially the automation framework, and we continue to do that.
Craig: We have our own homegrown automation framework. Will it work with this software?
Apurv: Yes. It does not require a particular system to be in play. The entire solution we have developed can easily be plugged into any CI system that an enterprise is using. The reason is that DevOps is now mandatory for enterprises, not a luxury; these are mandatory processes. Hence, we make sure that everything we showed here, which was the GUI part of it, is also available through command-line integration, so with the help of CLI commands you can do it as well. The integrations are also seamless.
For example, we showed the Jenkins integration. We did not actually trigger it, because we really did not want to run it here. If your automation is handled by something other than Jenkins, any other automation system you are using, we can pass the configuration there and do the integration with that system as well. We are not tightly coupled to any one system.
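The loose coupling Apurv describes can be sketched like this: instead of clicking Start Test Run in the GUI, the same run configuration is serialized and handed to whatever automation system is in place. The command shown is a placeholder, not a real Apexon or Jenkins CLI, and the flag name is an assumption.

```python
# Hypothetical sketch: pass the GUI's run configuration to an external
# automation system via its command-line interface. Swapping the command
# tuple is all it takes to target a different CI system.
import json
import subprocess

def trigger_test_run(config, command=("jenkins-cli", "build")):
    payload = json.dumps(config)  # same configuration the GUI would send
    return subprocess.run(
        [*command, "--config", payload],
        capture_output=True, text=True, check=False,
    )
```

Because the configuration travels as plain JSON on the command line, nothing in the analysis side depends on which CI system ultimately runs the tests.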
Craig: Thank you, Apurv. That’s all the time we’ve got for the questions today. If we’ve not managed to answer your question, we’ll be in touch after the webinar. Be sure to check out and subscribe to DTV, a new digital transformation channel that brings in industry experts. Many thanks to our speakers, Ashok and Apurv. Thank you, everyone, for joining us today.
You will receive an email shortly with the slide presentation and link to the webcast replay. If you have any questions, please feel free to contact us at firstname.lastname@example.org or call the team at +442038657881 to speak with a member of the team.
Thank you all and enjoy the rest of your day.