I have been browsing through some webpages related to testing and found one dealing with the metrics of testing. It says:
The severity level of a defect indicates the potential business impact
for the end user (business impact = effect on the end user x frequency
of occurrence).
I do not think this is correct, or am I missing something? Usually it is the priority that is the result of such a calculation: a severe bug that occurs rarely is still severe, but it does not have to be fixed immediately.
Also from this description, what is the difference between the effect on the end user and business impact?
Test Metrics and Priorities
Many metrics help us understand the impact of defects that were missed in testing and later fail in the field.
One common technique in risk management is to rank the top 10 risks on a project based on the following formula:
risk exposure = cost of failure * probability of failure
I think this works because the result can be expressed in dollars, and the ROI of corrective action can also be estimated in dollars.
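The ranking above is easy to sketch in code. Here is a minimal illustration with hypothetical risk data (the names, costs, and probabilities are invented for the example, not taken from any project):

```python
# Rank project risks by: risk exposure = cost of failure * probability of failure.
# All risk entries below are hypothetical examples.
risks = [
    ("data loss on crash", 50_000, 0.02),  # (name, cost in dollars, probability)
    ("slow checkout page", 5_000, 0.30),
    ("login outage", 20_000, 0.10),
]

def exposure(risk):
    """Expected cost of a risk: dollars times unitless probability."""
    name, cost, prob = risk
    return cost * prob

# Highest exposure first; keep the top 10 for the risk register.
top = sorted(risks, key=exposure, reverse=True)[:10]
for name, cost, prob in top:
    print(f"{name}: exposure = ${cost * prob:,.0f}")
```

Because exposure is in dollars, it can be compared directly against the estimated cost of corrective action.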
Mindlance lists many metrics for their tools. The one you are questioning reminded me of the risk-ranking equation, but it describes business impact as a priority times a frequency. Elsewhere, priority seems to be enumerated from 1 to 4, so their equation is qualitative; it is not good math for quantitative methods.
When they mention frequency of failure, a useful testing technique comes to mind. The article Operational Profiles argues that we should test most heavily the features customers use most. It demonstrates simple steps for building a profile by observing usage, using tallies to rank the most frequently used features highest.
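The tallying step can be sketched quickly. This is a toy version using an invented usage log (the feature names and events are hypothetical), just to show how observed counts become a ranked profile:

```python
from collections import Counter

# Tally feature usage from an observed event log (hypothetical data),
# then rank features so the most-used get the most test effort.
usage_log = ["search", "checkout", "search", "browse", "search", "checkout"]

tallies = Counter(usage_log)
total = sum(tallies.values())

# Operational profile: each feature's share of observed usage,
# most frequently used features first.
profile = [(feature, count / total) for feature, count in tallies.most_common()]
for feature, share in profile:
    print(f"{feature}: {share:.0%} of usage")
```

In practice the log would come from production telemetry, and the resulting shares would guide how test time is allocated across features.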
That document doesn’t seem to mention priority anywhere; I assume they’re simply assigning a severity level to each defect, with the understanding that higher severity leads to higher priority. Since the document only considers defects, I think that’s appropriate. You might have new features that are higher priority, but those can be considered separately from defects.