I am using JMeter to test the performance of my ASP.NET MVC 4 web application under high user load. I have recorded a 10-minute script of a basic use case for my application and am re-running this script continuously to simulate activity from 400 users. JMeter is set up to ramp the load gradually, adding one user every 5 seconds; once it reaches 400 users, each simulated user keeps cycling through the 10-minute session.
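Concretely, the Thread Group is configured along these lines (the ramp-up value follows from one user every 5 seconds × 400 users):

```
Number of Threads (users): 400
Ramp-Up Period (seconds):  2000   (one new user every 5 seconds)
Loop Count:                Forever (re-runs the recorded 10-minute script)
```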
During the first stage of the test, when there are < 200 simulated users, JMeter is showing normal, expected latency with all requests except the login procedure responding in under 500ms (the login procedure takes 3000 to 5000ms).
As the test progresses, this latency increases for all requests to the MVC action methods, but not for requests for static resources such as script files or images.
Between 200 and 300 users, latency climbs to 4000+ ms across the board (longer for login).
From 300 to 400 users, latency jumps between 3000 ms and 20,000+ ms.
The increase is not constant; it fluctuates up and down. On the web server, average CPU usage climbs to 55% and stays there, and there is plenty of free memory.
I have added internal logging to try to diagnose the cause: for every action, I record when it starts executing and how long it takes to return a result. At the beginning of the test, these measurements agree with JMeter: very low values, mostly just a few milliseconds, with everything under 500 ms.
As the test progresses, those internal measurements also grow gradually, but not nearly as much as the "external" measurements from JMeter and from the IIS logs themselves. Toward the final stages of the test, an action that took ~100 ms at the start now takes ~600 ms according to my internal .NET measurements, while IIS and JMeter report 24,000 ms for the same action.
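For reference, the internal timing I describe is implemented roughly like the sketch below (the filter and key names are illustrative, not my exact code; it assumes ASP.NET MVC 4's `System.Web.Mvc` action filter pipeline):

```csharp
using System.Diagnostics;
using System.Web.Mvc;

// Measures the time from the start of action execution until the
// result has finished executing, per request.
public class TimingFilterAttribute : ActionFilterAttribute
{
    private const string StopwatchKey = "__actionStopwatch"; // illustrative key name

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        filterContext.HttpContext.Items[StopwatchKey] = Stopwatch.StartNew();
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        var sw = filterContext.HttpContext.Items[StopwatchKey] as Stopwatch;
        if (sw != null)
        {
            sw.Stop();
            // Actual logging mechanism omitted; Trace used as a placeholder.
            Trace.WriteLine(string.Format("{0}: {1} ms",
                filterContext.HttpContext.Request.RawUrl,
                sw.ElapsedMilliseconds));
        }
    }
}
```

Note that a filter like this only measures time inside the MVC pipeline, so it cannot see time a request spends queued before a thread picks it up.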
I have looked for possible causes, and one I found is that .NET may be running out of managed threads. On the web server, I modified the machine.config file for my .NET version as follows:
<system.web>
  <processModel autoConfig="false"
                maxWorkerThreads="400"
                minWorkerThreads="200"
                maxIoThreads="400"
                minIoThreads="200" />
</system.web>
I know I modified the right file, because misspelling any of those attributes breaks the application. Subsequent tests have not shown that these changes made any difference.
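To verify whether thread-pool starvation is actually occurring, one option would be to log the pool's headroom from inside the application during the test. This is a diagnostic sketch (not something I have run), using the standard `System.Threading.ThreadPool` APIs:

```csharp
using System.Threading;

// Diagnostic sketch: log thread-pool headroom during the load test to see
// whether requests are actually queuing for managed worker or IO threads.
int availableWorker, availableIo;
ThreadPool.GetAvailableThreads(out availableWorker, out availableIo);

int maxWorker, maxIo;
ThreadPool.GetMaxThreads(out maxWorker, out maxIo);

System.Diagnostics.Trace.WriteLine(string.Format(
    "Threads in use: {0} worker, {1} IO (limits: {2}/{3})",
    maxWorker - availableWorker,
    maxIo - availableIo,
    maxWorker, maxIo));
```

If the in-use counts stay far below the limits during the 20-second stalls, the bottleneck is likely somewhere other than the managed thread pool.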
During the test, as the number of simulated users grows, more pending requests show up in IIS, but they top out below 15 even during the high-latency periods.
I am out of ideas on why this is happening or how to fix it. The web server does not seem to be overloaded; internally, my application reports that it is running slower than normal but within acceptable parameters, yet in reality I have to wait 20+ seconds on every action. If I leave the test running, the latency keeps fluctuating: one minute everything responds within 2-3 seconds, the next minute I am back to waiting 20 seconds on every click. It does not seem to be the network either, because during these high-latency periods, static content such as images still arrives in just a few milliseconds.
Any advice would be much appreciated.