
The Concurrent Virtual Users Myth with Performance Testing Tools

Testing

So you need to simulate a load of 5000 users, and you are struggling to find a tool that will let you create it. You shortlist a couple of performance testing tools and call the sales reps. It turns out they sell licenses in packs of virtual users. Now you and your QA Manager are in trouble: the vendors are quoting a price based on virtual user counts, and you never budgeted that many dollars for testing. You look at open source tools and see some promise, but you are not sure the open source tool of choice will scale to generate that kind of load, and you worry it will not be supported. Man, that smells bad…

You may not be in as much trouble as you think. It is true that the licensing models of commercial performance tools are built around virtual user packs. It is also true that good tools that can drive thousands of virtual users cost an arm and a leg. But can you simulate a larger user load with a small set of virtual users? The answer is “maybe”, though most likely yes! If you could do that, you would buy a handful of concurrent users, say 500. It is then your job as the developer of the test scripts to make the server believe it is getting a load of 5000 concurrent users from those 500. Possible? Again, “maybe”, but if it is, you have just saved your company $$$. Let us examine that “maybe” in detail.

Traditionally, we like to think of load simulation in terms of a real user. Rightly so! After all, the idea is to create a “production like” load scenario of real users. Real users sign in, check their home page, sip coffee, do a few transactions, have some water cooler banter, do a few more transactions, and so on. In a nutshell, there is a lot of think time built into the user load simulation. The virtual user thread (most tools dedicate a tool thread to each virtual user) is therefore mostly idling in think time and iteration gaps.

The server, on the other hand, does not care whether the load is coming from the same virtual user thread or a different one. As long as the same virtual user can create multiple user sessions and pump load through them, the server merrily thinks it is serving multiple sessions (and it is). So the trick is to take one virtual user and make it behave like multiple virtual users. Therefore, if you have each vuser thread sign in with multiple ids (and not sign out) and push transactions through with little or negligible think time, you will achieve nearly the same result.
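The multiplexing idea above can be sketched in a few lines of Python. This is a minimal illustration, not any particular tool's API: the `send_request` function is a hypothetical stand-in for an HTTP call that carries a session's cookie, and the per-session hit counter plays the role of the server.

```python
import threading
import time
from collections import defaultdict

# Hypothetical stand-in for the server: it only sees session ids,
# so requests from one thread under different ids look like
# different logged-in users.
server_hits = defaultdict(int)
hits_lock = threading.Lock()

def send_request(session_id):
    """Stand-in for an HTTP request carrying this session's cookie."""
    with hits_lock:
        server_hits[session_id] += 1

def vuser_thread(session_ids, iterations, think_time=0.0):
    """One tool thread driving many signed-in sessions round-robin."""
    for _ in range(iterations):
        for sid in session_ids:
            send_request(sid)       # server sees a distinct session
            time.sleep(think_time)  # negligible think time

# One vuser thread impersonating 10 user sessions, 5 transactions each.
sessions = [f"user-{i:02d}" for i in range(10)]
t = threading.Thread(target=vuser_thread, args=(sessions, 5))
t.start()
t.join()

assert len(server_hits) == 10                     # 10 distinct sessions seen
assert all(n == 5 for n in server_hits.values())  # load spread evenly
```

In a real script the sign-ins would happen once up front, and each session's cookies or tokens would be stored and replayed per request, exactly as the server expects from separate users.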

I like to think about the load from the server’s perspective: how many transactions and activities am I expecting it to serve per minute? How many sessions am I interested in creating? Can I use a very limited number of vuser threads at the tool end and still create the same impact on the server? In most cases, yes!
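The server-side arithmetic is simple. Assuming, for illustration, that each of the 5000 real users submits one transaction per minute of think time, the server sees a fixed transaction rate, and a smaller vuser pool can reproduce that rate by shrinking the pacing between transactions:

```python
# Illustrative numbers, not from any specific test:
real_users = 5000
think_time_s = 60.0   # each real user fires one transaction per minute

# Throughput the server actually experiences:
target_tps = real_users / think_time_s          # ~83.3 transactions/second

# Same throughput from a small vuser pool, by reducing the pacing
# (Little's Law: concurrency = throughput * time per cycle):
vusers = 500
pacing_s = vusers / target_tps                  # 6 s between transactions

print(f"target: {target_tps:.1f} tps, pacing per vuser: {pacing_s:.1f} s")
```

So 500 vusers, each cycling every 6 seconds instead of every 60, hit the server just as hard as 5000 leisurely real users.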

So what is “maybe” about this? It may be limiting in the following cases:

  1. By reducing the think times you are making the client machine work harder, and at some point that will start to skew the results. Fortunately, most vendors limit you by vusers, not by the number of machines you distribute the load across, so you can add more agents to overcome this issue. Besides, you can easily get a decent number of threads (500 or so) on a high-powered client machine (4GB of RAM or so) running the tool.
  2. This trick works for generating steady-state load. If you want a situation where many vusers wait for a coordinated event between threads and then trigger some load condition (a “rendezvous point” in some tools), it may be tricky to do with multiplexed sessions.

Please talk back and share your experiences.
