A joint presentation with Test Master Academy
Host: Good morning and welcome to our webinar, Successful Adoption of Model-Based Testing in Agile QA. My name is Sarah-Lynn Brunner, and I will be your host for today. Today’s presenters are Anna Royzman and Dharminder Dewan. A quick background on both. Anna is the founder of Global Quality Leadership Institute, a non-profit organization, whose goal is to become the leading advocate for quality and technology through innovative educational programs.
Test Masters Academy is an initiative that was envisioned and is managed by Anna directly. Anna is a renowned international speaker and a recognized expert in software test leadership. She has held various test management roles for 14-plus years, served as Executive at Large on the Association for Software Testing board of directors, as President of ASP Quality Leader, and as a Software Test Professionals Community Advisory Board member. Anna is also the co-founder of NYC Testers and the founder of TMA’s Leadership Series meetup. Welcome, Anna.
Anna Royzman: Thank you.
Host: Dharminder is an Agile QA Manager at Apexon. Dharminder has more than 20 years of IT experience in architecture, design, QA, QE, and automation, coupled with product and project management experience at various companies. While working in different technical management positions at companies like TCS, Fujitsu, and OpenText, Dharminder has led and guided transitions from Waterfall to Agile methodologies, bringing faster time to deliver and deploy in various fast-moving environments. He has extensive experience in DevOps and in shifting quality left, taking Waterfall models to hybrid and fully Agile models. Welcome, Dharminder.
Anna Royzman: Welcome to the webinar, everyone. Today, we are going to talk about model-based testing. Our agenda for today: I’m going to do my part of the presentation, then I’m going to turn the mic over to Dharminder, and he’s going to talk about some use cases they implemented at Apexon and some practical advice that came out of their experience. I’m going to talk more about the big picture: what model-based testing is, what the pros and cons are, what to watch out for, what the implementation approach looks like, et cetera.
I’m also going to discuss some successes and, I would say, failures in implementing this approach. With that, we are going to start. I’m going to take control and we’re going to go for it. Okay. In a nutshell, model-based testing is the implementation of code that models the behavior of the system under test. The model replicates that behavior and is run against the system under test: actions are executed, and then validations happen by comparing the state of the model with the state of the system under test.
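That run-actions-then-compare loop can be sketched in a few lines of Python. This is a minimal illustration only; the counter "system," its saturation rule, and all names are invented for the example, not taken from any real product:

```python
import random

class CounterSUT:
    """Toy 'system under test': a counter that saturates at 3."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value = min(self.value + 1, 3)
    def reset(self):
        self.value = 0

class CounterModel:
    """The model: replicates the expected behavior of the SUT."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value = min(self.value + 1, 3)
    def reset(self):
        self.value = 0

def run_model_based_test(steps=100, seed=7):
    sut, model = CounterSUT(), CounterModel()
    rng = random.Random(seed)
    for _ in range(steps):
        action = rng.choice(["increment", "reset"])
        getattr(sut, action)()    # drive the real system
        getattr(model, action)()  # drive the model in lockstep
        # validation: model state and SUT state must agree after every action
        assert sut.value == model.value, f"divergence after {action}"
    return "no divergence"

print(run_model_based_test())
```

If the real system's behavior drifted from the model's (say, a saturation bug), the assertion would flag the divergence at the exact action where it occurs.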
That’s model-based testing in a nutshell. To make it successful, there are several attributes you need to consider when deciding on a model-based testing approach. One major thing about model-based testing: the model itself is code. As with any code, we testers know that we need to test it. Some validation tests against the model’s own code need to be implemented as well. Do not blindly trust anything that is machine-run.
You probably know this already, but I’m going to reiterate it because it’s really important for testers to understand: anything you use, you cannot simply trust. You need to validate it. When you consider a model-based testing approach, implement some unit tests for sure, because that’s really important. You want to be able to trust your model to perform the testing for you. Number two, the model has to replicate the behavior of the system, meaning it’s version specific. If your model and your system under test are not in sync, you’re going to get false positives or false negatives. Watch out for that. When you have major changes in your validations or your features from version to version, you definitely need to keep your model up to date, because if you don’t, the tool is not useful for you. In order to implement the model well and not have to worry about whether it’s wrong or right, you have to have an established system under test.
Meaning, especially in Agile, some of the features that have just been implemented are supposed to get customer feedback, which means they could change. I would advise against implementing a model against something like that if you know it’s going to change, because then you would need to keep adjusting your model to behavior that may or may not be there tomorrow. Ideally, the model works much better on established functionality, what we call regression, when you need to do a lot of regression testing.
As with any tool that testers use, it has to be ready for you when you need it, meaning that if you have a release, the model has to be ready for that release. Sometimes the automation team may not report directly to the test manager; sometimes the automation team is outsourced. Sometimes they have their own priorities, I would say. It’s really important for the tester or test manager, who is the customer of this model-based test automation, to make sure the tool is ready when they need it.
I’ll reiterate it over and over again: if your tool is not ready for you, then you don’t have a tool. Don’t say that you have it, because it doesn’t work for you. To sum up what I’ve said: you have to be able to trust it; it has to be version specific, working against the version you know is going to be implemented and is going to stay; and it has to be ready for you when you need it.
Because with model-based testing there is a lot of code involved, you really need to know that you can trust this tool exactly when you need it. With that, we’re going to go into some examples of where model-based testing may or may not work. I’m going to talk about some of the experiences I had trying it in different environments, different case studies. Just to give you a little bit of background on my company.
I worked for a financial company. We started in 2001, and we implemented a version of our system which was extremely complex. We only tested manually. We didn’t have enough testers; you never do. Before we built the model, we ran group tests for the whole company. The whole company was about 70 people, so we had a lot of eyes and hands on the system in order to find the bugs.
Well, by 2003 we had gone through several versions and the system had become more established. We had many customers by then, and the robustness and trustworthiness of the system became really important for us as a company. When you have a very new product, it almost doesn’t matter; okay, something isn’t working, but it’s a new product and people are just adopting it, so it’s okay. But two or three years in, the reputation of your company relies on the trustworthiness of your system.
If something worked yesterday, it should be working today. This is what customers look for. We realized in 2003 that our system had become so complex that explaining it even to testers was next to impossible. There were a gazillion different scenarios, and you needed to know the intricacies of the system in order to understand what exactly you were testing. Just to give you an overview of the front end:
It was an interactive system. We had two screens: two clients engaged in negotiation scenarios of trading, and each of those screens had about 30 controls. As a tester, you needed to understand all the intricacies of the behavior of every button, the changes of state from action to action, and you needed to understand that on two screens simultaneously on two different machines. It really was a very complex system.
It wasn’t a QA initiative; it came from development. They wanted to invest money and time to implement model-based testing in order for us to hit all those different scenarios, those gazillions of iterations of scenarios, and have them under our control. For developers, this model was as useful as the test results were for testers. We were one team, and we invested in it together.
For half a year, we didn’t develop a single new feature. There was nothing open source available, since it was 2003, so we developed this model-based testing tool internally. As I said, it took half a year with all the developers and all the testers involved, and we testers became the product owners of that project, basically. We wrote the spec because we knew how the system behaved. We combined both front-end and back-end knowledge in order to give our developers the specs on how to implement the model.
You need to understand how both the front end and the back end work in order to provide the information to development to create that model. This is what we did, and I call it a successful case. When we implemented it in 2003, it was extremely useful for us because, number one, we could hand the creation of test cases to anyone in the company, literally, because all the logic, all the validation, happened inside the model.
We could give anyone in the company a set of steps, they would create the script for us, and we would just run it. As I said, it was a joint initiative of the whole company; we were all invested in it. It was also a way for people in the company to understand how the system worked. It was really a great thing for the company at the time, but as I said, it was a company of 70 people. We employed the help of everyone, we created a lot of different scenarios, and we had great coverage of some very complex trading system behaviors.
Another thing you can do with this model, you understand, is that it’s basically a set of functions. You feed data into the set of scenarios and then you can generate different scenarios based on your data. What else can you do? Combinatorial testing, meaning that you can mix and match different datasets. For example, you can mix and match different parameters; if you know how to do combinatorial testing using some algorithms, you can create a limited set of tests that covers the majority of your scenarios.
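One common such algorithm is pairwise (all-pairs) selection: instead of running every combination of parameter values, you keep only enough combinations that every pair of values across any two parameters appears at least once. Here is a small greedy sketch in Python; the trading-style parameter names are invented for illustration:

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedily pick full combinations until every pair of values
    (across any two parameters) is covered at least once."""
    names = list(parameters)
    all_combos = [dict(zip(names, vals))
                  for vals in product(*(parameters[n] for n in names))]
    def pairs(combo):
        # all (parameter, value) pairs this combination exercises
        return {((a, combo[a]), (b, combo[b])) for a, b in combinations(names, 2)}
    uncovered = set().union(*(pairs(c) for c in all_combos))
    suite = []
    while uncovered:
        # pick the combination covering the most still-uncovered pairs
        best = max(all_combos, key=lambda c: len(pairs(c) & uncovered))
        suite.append(best)
        uncovered -= pairs(best)
    return suite

params = {
    "side":     ["buy", "sell"],
    "quantity": ["min", "max"],
    "client":   ["A", "B", "C"],
}
suite = pairwise_suite(params)
print(len(suite), "tests instead of", 2 * 2 * 3)
```

Even on this tiny example the suite is smaller than the full cartesian product, and the gap widens dramatically as parameters and values grow, which is exactly why combinatorial selection makes deep coverage affordable.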
Dharminder is going to talk more about that later. One of the things I did was use this model with the datasets I needed to test specific scenarios for specific releases. For example, when there was work on a specific feature, obviously regression testing was needed, because there is more risk introduced in the code that has been changed. We really needed to dig more deeply to make sure the system didn’t get messed up, didn’t break, because of the changes being made. On those occasions, I would generate specific deep-dive tests with really large coverage of that area just for that release. I would throw those tests away later, because you don’t need to keep them, but generating the tests and running them was extremely easy, and you can use model-based testing just for that. In this respect, I would say model-based testing was one of the best approaches we used when you really need to assess something very deeply for a specific release: regression testing, not something new.
Now, an unsuccessful case. As always, [laughs] anything that you work with depends on the context. One of the unsuccessful aspects of this model was the intricacy of very small, granular changes in the functionality and state of the system that all needed to be implemented in the model, which creates very complex code inside the model. Basically, you need to put code into all the validations of the system. In our case, as I mentioned, that was 30 controls.
There were buttons to enable and disable, there were displays, there were texts, there were chat messages. With the chat messages and the texts, the complexity became so enormous that we spent a month on it and then stopped. Basically, we decided we were not going to validate that in the model, because any bugs introduced by development there are easy to find just by looking them over; exploratory testing catches them as well.
Sometimes, when you put too much validation into the model for features that do not really matter that much, something like chat, it’s not worth it; chat is not a big deal. If there are bugs there, you’re probably going to find them through your exploratory testing anyway. That was something we stopped doing. In another unsuccessful case, we had one large system in one large company, and at some point the company reorganized into product units.
The one system became several different products for several different teams. Each of these teams had their own development plan and their own product owner, and at that point having one model for the whole system was not really feasible anymore. Eventually, development in each of those units decided to do direct validation tests. They used something like the model, but instead of the automated assertions they put in direct validations, and this is how the model ceased to exist.
Because we broke down into different product units, there was less coordination between units. Each team decided they wanted to go at their own pace and develop their own system. That’s how one huge model was no longer supported, and we split into different kinds of mini models, or direct validations, that each product team decided to implement. They tried different tools. You really cannot tell an Agile team to use something if they want to use an alternative.
That was one of the drawbacks of having something huge and unmaintainable when you have very autonomous Agile teams that go at their own pace. What I want to say with that is: as with any project, any development, you need to adapt the process to change. If something works for your team, use it. If something doesn’t work for your team, find something that does. If something becomes hard to maintain, hard to communicate, hard to develop, change it. As I said, in my specific case, when we had one model developed for the whole system and the Agile teams decided to go their own way, the model was no longer applicable, so they switched. The last thing I wanted to say is: consider short-term and long-term strategies. When you implement a model, you have to understand that it’s an investment; anything in automation, developing automation suites and tools, is really an investment.
You need to understand how you’re going to use this model in the short term and in the long term. Do consider your strategies; do consider where your model is going to be used.
As I mentioned, I would suggest it’s much better used for regression testing of existing features. Also, as I said, it’s really useful when you need to generate very deep coverage for a certain feature. Throw those tests away; it’s fine to throw away the tests, because at some point you’ll need to generate more.
When you want to test more deeply again, throw them away and use the next version. Consider that; consider where the model can be used. I’ll say, if your product is going to change severely, then maybe the model is not for you. If it’s stable, if it works, if you just want to build on it and the existing features should keep working in new releases, then it’s one of the methodologies that could work for you. With that, I’m going to give control to Dharminder and hand the presentation over to him. You can ask me questions later.
Dharminder Dewan: Thank you, Anna. That was a great start, I think. Let’s transition into how model-based testing works in Agile projects: the different phases within a Sprint, how it is done, what happens at the beginning, and what happens prior to the Sprint. In Agile, in general, as you know, we work in Sprints. When we talk about MBT, the most important thing happens in the previous Sprint, before the current Sprint starts: at the end of the Sprint we are working on, before the next one begins, some grooming happens.
During the grooming, both Dev and QE work to understand the stories, and based on that understanding of the stories and the requirements, and on discussions with the BAs, they come up with the models based on the acceptance criteria. Those are the models which represent the system under test, as Anna was talking about earlier. The next part is a quick review of the models.
Then test sequences are generated for all the models that were created. There are different types of test sequences generated, and that also helps development identify the use cases and the complexity from the models, which they need to validate as part of their unit test cases as well. Because the idea of the model, really, is to make sure that it’s representative of the system under test.
It cannot be otherwise, because the test scenarios that we create depend absolutely 100% on the validity and authenticity of the model that is generated for the SUT.
It’s very critical that it is validated. That’s what happens as part of the planning before the current Sprint starts. Then the current Sprint starts, and any test execution that needs to happen has to be prepared, because this is automation.
We need to make sure the test data requirements are gathered and given to the respective team who is supposed to provide the test data, because in some companies there is service virtualization, where you need to put in a request to somebody to give you the data and then use it for your test case execution. That’s what happens as the Sprint starts; then you validate all the data and start executing the test cases.
As Dev is writing their unit test cases, the automation is generating the test cases from a tool. One of the real advantages of MBT is that once you have the model, there are a few tools out there that can use it to automatically generate the test cases. During Sprint execution, when Dev completes the unit testing and says, “Okay, now QE is ready to run the test cases,” QE, the Quality Engineering team, runs their test cases and certifies the stories based on the scenarios generated from the model that was built prior to the current Sprint, from the understanding of the stories. Once we are done with the execution, we follow the same process again for the following Sprint: the story grooming and the pre-planning preparation for the next Sprint.
There’s not much difference in how MBT is implemented in the Agile process. If somebody understands the Agile process, it’s all about when you do the pre-planning, when you build the actual understanding, when you do the coding, and when you do the execution. It’s very similar: in the current Sprint you start executing against whatever story has been developed, and the Agile testing proceeds with the execution of the automated test cases.
It’s pretty standard from the Agile process perspective. Now let’s look at the different types of tools the industry offers to facilitate model-based testing. The example I’m showing on the right-hand side was created using an open source model-based testing tool by the name of GraphWalker. The idea of GraphWalker is that for whatever system you are testing, you put in the information, and based on the information you provide, it generates graphs.
The graph is in the form of actions and verifications. There are arrows connecting the nodes: the arrows represent actions, and the nodes represent verifications. If I just go over the graph we’re looking at here, the first node is the start node. When you start, you validate whether the client is running or not running. If the client is not running, then you start the client, and it gives you the login prompt.
Now, when you get the login prompt, either your credentials are valid or they are not. If they are valid, you go down to Browse, which is again a verification. The way to understand this graph is that at each of these steps an action is performed; for example, when the client is not running, there is a verification that the corresponding error pops up.
Then you get the login prompt, and there is a verification, meaning you validate that the login dialog comes up with whatever screen you are expecting: username, password, the OK and Cancel buttons, and all of that. Those verifications happen. That’s the idea of a model in the generic sense, because once these models are created, there are tools that walk through them to create test cases based on the different paths in that model.
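The login graph described above can be sketched as plain data, with a random walk standing in for the tool's path generator. This is an illustrative Python sketch, not GraphWalker's actual model format; the `v_`/`e_` naming merely follows GraphWalker's vertex/edge convention:

```python
import random

# Vertices (v_*) are verifications, edges (e_*) are actions.
EDGES = {
    # vertex: [(action, next_vertex), ...]
    "v_ClientNotRunning": [("e_StartClient", "v_LoginPrompted")],
    "v_LoginPrompted":    [("e_ValidLogin",   "v_Browse"),
                           ("e_InvalidLogin", "v_LoginPrompted")],
    "v_Browse":           [("e_Logout", "v_LoginPrompted"),
                           ("e_Exit",   "v_ClientNotRunning")],
}

def random_walk(start="v_ClientNotRunning", steps=8, seed=1):
    """Generate one test sequence: alternating verifications and actions."""
    rng = random.Random(seed)
    vertex, sequence = start, [start]
    for _ in range(steps):
        action, vertex = rng.choice(EDGES[vertex])
        sequence += [action, vertex]  # perform the action, then verify the state
    return sequence

print(random_walk())
```

Each run produces a sequence like `v_LoginPrompted -> e_InvalidLogin -> v_LoginPrompted -> ...`, which a test executor would replay against the system, performing each action and checking each verification.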
If you look at how the system under test is represented, one of the advantages of model-based testing is that anybody, a QE, a BA, a Dev, can pretty much understand how the system should be working, whether all the acceptance criteria are validated or not, and whether there is 100% test coverage or not. That’s one of the advantages model-based testing offers: you are able to approach 100% coverage by ensuring your model is a very good representation of the SUT.
Just by looking at a pictorial representation, anybody is able to figure out whether the coverage is enough from the testing perspective, based on the different paths being taken. As we know, most of the time the human brain is able to interpret images faster and understand them more easily than text. A lot of times we get lost in text. That’s where these pictorial representations are always, always better.
That was one of the advantages of model-based testing that we have seen. Now, in terms of what MBT tools offer: like I was saying before, one of the biggest advantages of an MBT tool is the auto-generation of test sequences from the model. Again, these are not the end-to-end test scenarios that we have in the traditional QA approach; the tool only crawls the paths which have been defined and described in the model.
This is a huge advantage, because one of the biggest challenges with shift left and going into Agile, where we are talking about two-week, three-week, or four-week releases, is making sure the quality does not get impacted and we are doing sufficient testing. The solution to that, really, is automation. There is no way, within a span of two or three weeks where development itself consumes almost 50% of the time, that the remaining 50% is sufficient for us to do the QA, the QE, all the validation, without any automation. So we absolutely have to do automation to reduce cycles and move toward shift left, because all of us are aware that the earlier we are able to fix problems, the better off we are; it’s more expensive to fix problems later on. Again, that’s pretty standard stuff. The other thing MBT does is provide algorithms for crawling the paths in the model.
These would be random path, edge coverage, vertex coverage, and shortest path. As we saw in the model I just showed in the graph, there are multiple areas: you can pretty much say, well, I’ll have a successful login and then I’ll go to Browse. Because that was a very small system, in the next couple of slides I’ll show a slightly more complex system with multiple paths.
That’s where you’ll be able to see the value of the algorithms that generate paths. They can pretty much pick any path from point A to point B. It can be a successful path, it can be a failure path. Each one of them leads to a verification at the end and in between on every node, because we really want to make sure that end to end it’s successful and everything passes.
That’s the only way; like Anna was mentioning earlier, the validity of the model and the path is very critical to ensure that the test cases generated are correct. Of course, we can generate almost 100% test coverage using MBT tools, because even the generation of the test scenarios is automated. I would also mention redundancy and duplication here: duplication is not an issue.
Because the tools follow paths, they make sure they never traverse from the same start node to the same end node along the same path twice, so the validations are mutually exclusive. Can we attain 100% test coverage? Again, that’s a statement which, I would say, depends on the model that is generated. If there are scenarios missed in the model, then of course that test coverage will be missed.
The path can be controlled with logical guards and actions: the more logical guards and actions you define as part of the model, the more paths are created from the test case perspective. For example, as we just showed, there is a valid login and there is an invalid login; those become two paths. If there is any other option, that becomes a third alternative path.
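Guards can be sketched as predicates over the model's state: an edge is only walkable while its guard holds, and taking an edge can update the state, which in turn opens or closes other paths. A minimal illustrative Python sketch, with invented names (this is not GraphWalker's guard syntax):

```python
# Each edge: (from_vertex, action, to_vertex, guard, effect-on-state).
def guard_always(state):
    return True

def guard_logged_in(state):
    return state.get("logged_in", False)

GUARDED_EDGES = [
    ("v_Login",  "e_ValidLogin", "v_Browse",
     guard_always,    lambda s: {**s, "logged_in": True}),
    ("v_Browse", "e_OpenCart",   "v_Cart",
     guard_logged_in, lambda s: s),
]

def enabled_edges(vertex, state):
    """Only transitions whose guard is satisfied are candidate paths."""
    return [e for e in GUARDED_EDGES if e[0] == vertex and e[3](state)]

state = {}
# before logging in, the guard blocks e_OpenCart
print([e[1] for e in enabled_edges("v_Browse", state)])
state = GUARDED_EDGES[0][4](state)  # take e_ValidLogin: its effect sets logged_in
# now e_OpenCart becomes walkable
print([e[1] for e in enabled_edges("v_Browse", state)])
```

Adding one more guard or one more action multiplies the distinct walkable paths, which is exactly how guards shape the generated test cases.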
The more paths you define, the better the model-based testing tools are at generating test scenarios. The last one, of course, is providing a weights mechanism for path selection in the case of random walks. What really happens here is that you define a weight for a particular path, saying, “Well, this is of a higher priority to me.” When the paths are generated, the tool effectively says, “These are the high-priority test cases.”
If I want to say, “only execute the P1s,” then the weight is taken into consideration for execution. Sometimes I want feedback in one hour, and there is no way I can execute my entire suite in one hour, so I dial up the priorities through the weights, making sure the key features under test carry more weight in that particular scenario.
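Weighted selection during a random walk can be illustrated in a few lines: higher-weight (higher-priority) edges are simply chosen more often, so the priority-one flows get exercised more within a fixed time budget. All names and weights below are invented for the example:

```python
import random

# Outgoing edges with weights: higher weight = higher test priority.
OUTGOING = {
    "v_SearchResults": [
        ("e_OpenProduct", "v_ProductDetails",  5),  # P1: core purchase flow
        ("e_ApplyFilter", "v_FilteredResults", 2),
        ("e_OpenBag",     "v_ShoppingBag",     1),
    ],
}

def pick_edge(vertex, rng):
    """Choose the next edge with probability proportional to its weight."""
    edges = OUTGOING[vertex]
    weights = [w for _, _, w in edges]
    return rng.choices(edges, weights=weights, k=1)[0]

rng = random.Random(42)
picks = [pick_edge("v_SearchResults", rng)[0] for _ in range(1000)]
# the weight-5 edge is chosen far more often than the weight-1 edge
print(picks.count("e_OpenProduct") > picks.count("e_OpenBag"))
```

Over many walk steps, the high-priority transitions dominate the generated sequences, which is the effect being described: a one-hour run still concentrates on the key features.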
Let me move into some examples of how these models are generated, and then into the code: how the code is representative of the models and how effective it is. In this scenario, what you see is a product search. This would be applicable to any of the sites out there, Macy’s, JCPenney, Amazon.com, anywhere you do these searches, land on the shopping page, and so on.
If you look at this model, this is the model generated for an online search where we are searching for a product. Currently it has two pieces. One is the initial part where you do a start: you launch the application and land on the home page. In this case, the start would be putting the URL in the browser, and my landing page would be the home page. That’s the first verification we would do.
Now, from the home page there are multiple actions we can perform. I can open the shopping bag, even though there is nothing in it right now; since the model goes in a loop, we will see that it becomes valid later. Or I can expand a menu item by hovering over the main menu item, or I can search for an item and get the search results.
Again, the principles are exactly the same: whatever you see on an arrow is an action, and whatever you see in a block or box is the verification, or the landing page in the case of a website. From the home page, you can go to the shopping bag page, you can go to the search results, or a menu item gets expanded when you hover over the main menu. If you search for an item, you get to the search results page.
From the search results page, again there are multiple options. You can open the shopping bag. You can open a product by clicking on one of the items in the search results, which takes you to the product details page. If you apply a filter on the search results, it gives you the results with the applied filters. For example, I say, “Okay, give me all the phone colors,” and then, “Okay, only display the iPhone 7 colors, or the iPhone 8 colors, or the iPhone X colors.”
That’s an example of the results with applied filters. Again, this is the model I’m generating. When I run this model through my test case generation tools, they generate my test cases for each of the paths mentioned here. One path can be home page to shopping bag page. Another can be home page to search results with applied filters, on to the product details page, and to the submenu with all the submenu item products.
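Turning the graph into test cases amounts to enumerating its paths. A sketch in Python of that enumeration over a simplified version of the search model; the vertex and edge names follow the example in the talk but are illustrative, not taken from the actual slide:

```python
# Simplified search model: vertices are pages (verifications),
# edges are user actions.
GRAPH = {
    "v_HomePage":        [("e_Search",      "v_SearchResults"),
                          ("e_OpenBag",     "v_ShoppingBag"),
                          ("e_HoverMenu",   "v_MenuExpanded")],
    "v_SearchResults":   [("e_OpenProduct", "v_ProductDetails"),
                          ("e_ApplyFilter", "v_FilteredResults")],
    "v_ProductDetails":  [("e_OpenBag",     "v_ShoppingBag")],
    "v_FilteredResults": [],
    "v_ShoppingBag":     [],
    "v_MenuExpanded":    [],
}

def all_paths(vertex, seen=()):
    """Depth-first enumeration of simple paths; each path is one test case."""
    out = [(a, v) for a, v in GRAPH[vertex] if v not in seen]
    if not out:
        return [[vertex]]
    paths = []
    for action, nxt in out:
        for tail in all_paths(nxt, seen + (vertex,)):
            paths.append([vertex, action] + tail)
    return paths

for path in all_paths("v_HomePage"):
    print(" -> ".join(path))
```

Each printed line is one generated test case: a sequence of actions to replay with a verification at every page along the way.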
These are the different paths which get converted into test cases, and of course there will be different data sets associated with these test cases; we’ll talk about that. Now, let me go to a different aspect of this. If I quickly go back, you see the product details page at the top. To get there, I went to the home page.
Let’s say I did the search, clicked one of the results, and got to the product details page; now I have multiple choices. Either I can, again, hover over the main menu item, which is “menu item expanded.” As you see, “menu item expanded” appears on multiple pages. We need to validate that functionality again: if we miss this part on the product details page, it will not be tested there.
That’s why the validity of the model compared to the current system under test is, as I was saying before, very, very important, because that’s what ensures that whatever you’re testing is correct. Otherwise, your test cases would be generated for a system where they are either missing functionality or testing invalid functionality. Missing functionality means testing gaps, and invalid functionality leads to invalid or incorrect failures, which leads to analysis time.
Pretty much, the sooner we are able to align the models with the system under test, the better off we are, because that ensures we do not waste unneeded time. We just talked about the models. The next step, really, is: what is next? Using these models, the scenarios are automatically generated, and all these scenarios, like I said before, since they follow unique paths, are unique test scenarios.
The validation and correctness of the test coverage and test scenarios also happens at that level, because we want to make sure that whatever we are covering using the model is correct, that we are able to execute it, and that it functions well. Again, as we have been repeatedly saying, 100% test coverage follows the graph: if the graph follows the requirements correctly, we can absolutely achieve it.
Of course, when we talk about MBT, the identification and definition of test case priority and complexity also come at this stage. Once you have generated the test cases, you give them priorities: P1, P2, P3; we all know those. A lot of times we do not have enough time for execution, and that’s when priority and weightage come into play. Together they make a very intelligent system, where we can execute our test cases depending on the time on hand. That makes the automation execution intelligent from the execution perspective: we pick and choose. If I say I only want to execute P1s, only those will be executed. I can even say there is a particular set of data I want to execute, which I will show in the next couple of slides.
Then you’ll see what we really mean by intelligent: you can pick and choose a set of data. Let’s say I have a bunch of websites I’m working with in my current product, a bunch of URLs. If I only want to validate two URLs out of those 10 or 20 or 30, it will only pick up the test cases for those two or three URLs and execute them.
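The pick-and-choose selection described above can be sketched as a simple filter over generated test cases. The records and field names below are invented for illustration, not taken from the actual framework:

```python
# Sketch of "intelligent" selection: filter generated test cases by
# priority and by data set (URLs). All records here are hypothetical.
test_cases = [
    {"id": "TC1", "priority": "P1", "url": "jcpenney.com"},
    {"id": "TC2", "priority": "P2", "url": "jcpenney.com"},
    {"id": "TC3", "priority": "P1", "url": "example-store.com"},
    {"id": "TC4", "priority": "P3", "url": "another-store.com"},
]

def select(cases, priorities=None, urls=None):
    """Keep only the cases matching the requested priorities and URLs."""
    return [
        c for c in cases
        if (priorities is None or c["priority"] in priorities)
        and (urls is None or c["url"] in urls)
    ]

# "Run only P1s, and only for jcpenney.com":
picked = select(test_cases, priorities={"P1"}, urls={"jcpenney.com"})
print([c["id"] for c in picked])  # ['TC1']
```

With a selector like this, the same generated suite can be narrowed to whatever fits the time on hand.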
Then, of course, this provides us a capability at this step: we need to make sure that the generated test models are validated against the requirements, and with MBT we do that very early. In Agile, it’s very important that your tests follow along with the development that is going on. In the Waterfall model, you start working on the test cases once the development is done. Here, that’s not the case, because we’re talking about two-week Sprints.
We have to validate within those two weeks, and as we know, the majority of the time in a Sprint is consumed by the Dev teams. Testing has to run in parallel with them, in touch with them as the product is being developed, to make sure things are on the right track. This is the combinatorial data generation I was talking about earlier. If you look at this data, what we are talking about is a model.
We are talking about parameters and data values in the left table, and about the URLs, the assignments, and a bunch of other information pertaining to each URL. For the dashboard model, for the data value JCPenney, these are the additional values I have. If I just say, “Okay, run the P1s where the value is jcpenney.com,”
it is going to go to the table on the right and execute all the test cases I mentioned here. That’s what we talked about: using data, I can extrapolate the same set of test cases and execute them in multiple scenarios. Now let’s follow up on how that gets converted into the actual scenario, step definitions, and test data. Up to now, what we were seeing was the model; this is how it looks end to end.
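Combinatorial data generation of this kind is usually a cross product over parameter values, with each combination yielding one data row for the same scenario. A minimal sketch, with hypothetical parameter names and values:

```python
import itertools

# Sketch of combinatorial test-data generation: the cross product of
# parameter values yields one data row per combination. Parameter
# names and values here are invented examples.
parameters = {
    "site":    ["jcpenney.com", "example-store.com"],
    "device":  ["desktop", "mobile"],
    "account": ["guest", "registered"],
}

rows = [dict(zip(parameters, combo))
        for combo in itertools.product(*parameters.values())]

print(len(rows))   # 2 * 2 * 2 = 8 combinations
print(rows[0])
```

Each row can then be bound to the same generated scenario, which is how one test case extrapolates to many concrete executions.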
If we come here, the scenarios on the left are generated using one of the tools that goes through the model, and it generates a Given/When/Then: Given I open the browser, When I navigate to jcpenney.com, Then I should see the JCPenney homepage; When I hover over the main menu, Then the main menu item should be expanded. This is the scenario that got created using the model. Now, if I go to the right side, these are the different Whens and Thens.
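Turning one model path into that Given/When/Then text can be sketched as a simple mapping from model steps to step phrases. The step wording mirrors the example above, but the mapping table and function are invented for illustration; they are not the actual generator:

```python
# Sketch: render one model path as a Gherkin-style scenario.
# The step-to-text mapping below is hypothetical.
STEP_TEXT = {
    "open_browser":  ("Given", "I open the browser"),
    "navigate_home": ("When",  "I navigate to jcpenney.com"),
    "see_homepage":  ("Then",  "I should see the JCPenney homepage"),
    "hover_menu":    ("When",  "I hover over the main menu"),
    "menu_expanded": ("Then",  "the main menu item should be expanded"),
}

def to_gherkin(path, name="Expand main menu from homepage"):
    """Render a path of model steps as a Scenario block."""
    lines = [f"Scenario: {name}"]
    for step in path:
        keyword, text = STEP_TEXT[step]
        lines.append(f"  {keyword} {text}")
    return "\n".join(lines)

path = ["open_browser", "navigate_home", "see_homepage",
        "hover_menu", "menu_expanded"]
print(to_gherkin(path))
```

Each generated scenario then binds to step definitions, which is where the actual automation code lives.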
Now I get into the step definitions: for each step defined here, I would define the step. By the way, what we are doing here is using the QMetry Automation Framework; you can Google it and find a lot more about it. The generic concept is this: the scenarios are expanded into step definitions, and the step definitions are executed using this test data.
As you saw before, we were talking about a scenario; now, in the example I gave, these are the scenarios from that data perspective. It would go to this particular scenario and pull out the test data as described in the metadata. The data is described there; it will pick up that particular data and do the execution. I can also change that at runtime when I’m executing, to pick up different data or pick it based on the priorities.
The last thing I really want to talk about is omnichannel testing. As we all know, one of the key things nowadays is that we work across multiple devices: I might create an account on, say, a desktop, but when I validate or check my balances, I do that on my mobile phone. MBT is very good for omnichannel testing as well. We have tried it with some of the largest US retailers, and what we have seen is that Sprint velocity increased by 25%. There were thousands of acceptance criteria; we were able to reduce them and make them more understandable by converting them into MBT models. Of course, we have done the CI/CD process end to end, because that again supports Agile. New features were ready for market in about four weeks using this. Test authoring time is reduced immensely, down by 75%, because when we author test cases from scratch, there is no basis to start from; every engineer spends a lot of time understanding the system under test and then implementing.
What the model does is this: there is an initial investment of time in creating the models to represent the SUT, and after that, creating test cases is pretty fast. Of course, this helps us introduce in-Sprint automation, which is a challenge the majority of the time, because in standard companies with standard processes, automation generally runs one Sprint behind. The faster we are able to automate, and if we automate the automation itself, the easier it makes things. With that, I’ll pass it on to you, Sarah. Thanks so much.
Host: All right. It looks like we have a question for you, Anna. Is the MBT approach suitable for any type of project?
Anna: Not really. I would suggest using MBT when your product is established and you have something that is stable. When I say stable, I mean the features are stable. If you have an R&D project where you’re just discovering what users want, you need frequent feedback from the user, you need to do A/B testing. For R&D, I would suggest not using something as heavy as model-based testing. I would even suggest not using any automation at all for something like R&D.
You know you need frequent feedback from the user; your features may change, your workflows may change; automation is not suitable for that. For something that is already in production, something customers already rely on, MBT is a much better choice: an established product, established features. I would look for a regression-based focus when you use model-based automation.
The model is complex in itself. You don’t want to add to that complexity by changing something today, changing something tomorrow, changing a workflow again. Then you’re going to be chasing the changing features, and I don’t think it’s worth it. I would say the more established the system, the better it is.
Host: Great. Thanks so much, Anna. Dharminder, what are some of the disadvantages of model-based testing?
Dharminder: In my experience, one of the disadvantages I’ve seen with model-based testing is that we need to make sure the testers or QA engineers have certain essential abilities that help them understand the system and convert it into a good model. If that ability is missing, we’ll end up with a model that is lacking in its representation of the entire system. That’s one.
Of course, if the ability is not there, it’s a little complex to conceive the prototype of the model itself, and there are a lot of iterations you sometimes go through if you’re not able to do a good job of understanding the system end to end. Even though MBT does not by itself guarantee end-to-end testing, it does cover the different flows that are defined as part of the model.
It’s very important that we understand the system end to end, so that we can look at all the permutations and combinations, all the steps and all the flows we’ll be going through in validating the system, before we start working on the prototype. That pretty much leads to another thing, which is the learning curve.
That is a little steeper in the initial stages, but once it’s done, we get the advantage: the time savings in how quickly we are able to automate. It pays back in the later stages, but initially those are the challenges.
Host: Great. Thank you so much, Dharminder. Actually, another question for you as well. Is MBT widely used?
Dharminder: That’s an interesting question. From what I’ve seen so far, MBT is somehow being used more in Europe. In North America it’s being used, but not as widely, so there’s a distinction there. That’s the short answer, and sometimes even I struggle with what’s really happening there. Is it that the visualization of the model comes a little more naturally for the kinds of systems being developed there?
Or is it the fast-paced environment, where the features people start working on don’t yet have the stability? Because, like Anna was saying earlier, unless the system stabilizes, using model-based testing is not a good idea. By the time these systems stabilize, teams have sometimes picked up different avenues of doing testing, and then they don’t switch to MBT; that might be the reason. Interestingly, that’s what I’ve seen: in Europe it’s very widely used, whereas North America is still catching up.
Host: Thank you, Dharminder. I believe we have time for one more question. Anna, a question for you. Do testers need any special skills for using model-based testing?
Anna: I would say being able to read the scripts should be enough. It’s not that you need to know what’s underneath, though it’s good if you understand, because you’re going to be working with the code more or less. If you saw what Dharminder showed you, some of those are scripts, right? So you need to understand how to read them. It’s easy to work with this once it’s implemented. I would not say you have to be a developer, no, but be comfortable with scripting; I would say that.
Host: Great, thanks so much, Anna. I believe that’s all the questions we were able to answer at this time. Also, for everyone listening, be sure to check out and subscribe to DTV. It’s a new digital transformation channel that brings in industry experts. Many thanks to our speakers, and thank you everyone for joining us today. You’ll receive an email in the next 24 hours with the slide presentation and a link to the webcast replay. If you have any questions, please contact us at email@example.com, or call 1-408-727-1100 to speak with a representative. Thank you all. Enjoy the rest of your day.
Anna: Thank you, Sarah; thank you, Dharminder; thank you, everyone.
Host: Thank you both. Okay.
Dharminder: Thanks, Anna. Thanks, everybody.