I am about to participate in a discussion with management about measuring our testing efficiency as a QA organization. The main driver is that half of our team is contracted out, and the business would like some metrics on how effective/efficient we are, so that we have baseline data on which to negotiate the terms of our contractors' service agreement.
I have poked around a little, and most of the opinions I have found on this subject revolve around developer efficiency: lines of code, story points delivered, defects introduced, and so on.
But what about testers? Our testing is mostly requirements-based, and a mix of manual, semi-automated, and automated testing (not because we haven’t gotten around to automating everything, but because some things are not automatable in our test system).
Number of tests written is useless, and a high number of bugs found can be a measure of poor development rather than of efficient QA.
Automation measures (code coverage, feature coverage…) can be good, but I think they’re more of a help to development (as a developer, will I know if I accidentally break something?) than to customers (I want to do something and it doesn’t work).
Since quality is good when customers don’t encounter problems, a good measure of the effectiveness (not the efficiency) of a QA team and process is the number of bugs found by customers that weren’t found by QA.
The main problem with that metric is that there can be a considerable delay between the work being done and when you start to have meaningful numbers.
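As a rough illustration of that measure, here is a minimal sketch of a defect escape rate, assuming your bug tracker can export per-release counts of defects reported by QA and by customers; the function name and the numbers are placeholders, not anything prescribed above.

```python
# Minimal sketch of the "bugs found by customers vs. QA" measure,
# sometimes called defect escape rate. Assumes you can export, per
# release, how many defects QA reported and how many came from the
# field; the data below is illustrative only.

def escape_rate(qa_found: int, customer_found: int) -> float:
    """Fraction of all known defects that escaped QA to customers."""
    total = qa_found + customer_found
    return customer_found / total if total else 0.0

releases = {
    "1.0": {"qa_found": 120, "customer_found": 9},
    "1.1": {"qa_found": 95, "customer_found": 4},
}

for version, counts in releases.items():
    rate = escape_rate(**counts)
    print(f"Release {version}: {rate:.1%} of defects escaped to customers")
```

The delay mentioned above shows up here too: the customer-found counts for a release only stabilize some time after it ships.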
There are a few metrics that we used at my last job to evaluate QA:
- Number of bugs found. I hate this one. It’s like “Number of lines of code written” for a developer.
- Number of automated test cases produced.
- Percentage of total application covered in functional testing.
- Number of bugs found in staging vs production.
In the end, your QA team’s job is to find the bugs before they get out into the wild. Their metrics should be based on actually achieving that goal. If test-case coverage is low, there are few automated tests, and the rate of bugs in production is high, then they aren’t doing a good job. However, if they have a good track record of finding bugs long before they hit production, their metrics should look pretty good.
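Here is a hedged sketch of a per-release summary built from the metrics in the list above; the field names and numbers are placeholders for whatever your bug tracker, test-management tool, and coverage reports actually export.

```python
# Sketch of a per-release QA summary covering the metrics listed above.
# All values are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class ReleaseQaStats:
    automated_cases: int        # automated test cases in the regression suite
    functional_coverage: float  # fraction of the application covered by functional tests
    staging_bugs: int           # defects caught in staging (pre-release)
    production_bugs: int        # defects that escaped to production

    @property
    def containment(self) -> float:
        """Share of known defects caught before they reached production."""
        total = self.staging_bugs + self.production_bugs
        return self.staging_bugs / total if total else 1.0

stats = ReleaseQaStats(
    automated_cases=340,
    functional_coverage=0.72,
    staging_bugs=48,
    production_bugs=6,
)
print(f"Automated test cases: {stats.automated_cases}")
print(f"Functional coverage:  {stats.functional_coverage:.0%}")
print(f"Defect containment:   {stats.containment:.1%}")
```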
QA should be measured by two main metrics: how many bugs get past QA and are found in the field, and how severe are they?
You might be able to ding QA for finding severe bugs closer to release than dev-complete. You might be able to ding QA for not completing testing by their estimated completion date (per feature).
Though in the end, I fear you’ll spend more money trying to measure the effectiveness of your contract staff than you save by using contract staff in the first place…
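As a rough illustration of the two metrics at the top of this answer, here is a sketch that counts field-reported bugs and weights them by severity; the severity labels and weights are assumptions for illustration, not anything the answer prescribes.

```python
# Rough sketch: bugs that got past QA into the field, weighted by
# severity. The severity scale and weights are assumptions; use
# whatever classification your tracker already has.

SEVERITY_WEIGHT = {"critical": 10, "major": 5, "minor": 1}

field_bugs = [
    {"id": "BUG-101", "severity": "critical"},
    {"id": "BUG-117", "severity": "minor"},
    {"id": "BUG-129", "severity": "major"},
]

escaped_count = len(field_bugs)
weighted_score = sum(SEVERITY_WEIGHT[b["severity"]] for b in field_bugs)

print(f"Bugs found in the field: {escaped_count}")
print(f"Severity-weighted score: {weighted_score} (lower is better)")
```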
The company I work for uses a number of QA metrics.
The one that I feel is most relevant is code coverage. A tool like EMMA works great when the team writes thorough automated tests in addition to their manual ones.
Whatever you do, do not focus on number of tests.
That’s about as useful as LOC per day.
There are many ways to measure performance in the development and testing phases during project execution. We used the measures below in our projects.
Development performance is measured by four popular code metrics (maintainability index, cyclomatic complexity, depth of inheritance, class coupling). For C#, you can get these in Microsoft Visual Studio.
For test coverage, NCover/NDepend is very useful.
Testing performance is measured by:
- Number of development bugs rolling over the last 4 sprints.
- Number of system-testing bugs rolling over the last 4 sprints.
- Number of automated tests passed per release / features delivered.
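A minimal sketch of the rolling measures above, assuming you can export bug counts per sprint and pass/fail totals for a release; the sprint names and counts are placeholders.

```python
# Sketch of the rolling measures above: bug counts over the last four
# sprints and the automation pass rate for a release. All data below
# is illustrative.

dev_bugs_per_sprint = {"S21": 14, "S22": 11, "S23": 9, "S24": 12, "S25": 7}

# Rolling total over the last 4 sprints (insertion order assumed chronological).
last_four = list(dev_bugs_per_sprint.values())[-4:]
print(f"Development bugs over the last 4 sprints: {sum(last_four)}")

# Automation pass rate for a particular release.
automated_passed, automated_total = 412, 430
print(f"Automation pass rate: {automated_passed / automated_total:.1%}")
```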