A Google Scholar search turns up numerous papers on testability, including models for computing testability, recommendations for how one's code can be made more testable, and so on.
They all come with the assertion that more testable code is more stable; however, I can't find any studies that actually demonstrate this.
I tried looking for studies evaluating the effect of testability on quality, but the closest I can find is Improving the Testability of Object Oriented Systems, which discusses the relationship between design flaws and testability.
Is testable code actually more stable? And, more importantly, how strong is this relationship?
Please back up your answers with references or evidence.
For example, there is a fair amount of research on the relationship between cyclomatic complexity and defect rate; Troster finds a correlation of r = .48.
There are metrics for "testability", such as code coupling. I'm looking for research conclusions relating these to defect rate. Ideally, I would love a graph plotting some measure of testability against defect rate, along the lines of the sketch below.
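To be concrete, here is a minimal Python sketch of the kind of analysis I have in mind: take a per-module testability proxy (coupling, in this example) and the observed defect rate, compute the Pearson correlation, and plot one against the other. The numbers are invented purely for illustration; they are not from any study.

    # Illustrative only: invented per-module data, one entry per module.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import pearsonr

    coupling = np.array([2, 4, 5, 7, 8, 10, 12, 15])                   # e.g. efferent coupling
    defect_rate = np.array([0.1, 0.3, 0.2, 0.5, 0.4, 0.7, 0.6, 0.9])   # defects per KLOC

    # Strength of the linear relationship between the testability proxy and defects
    r, p_value = pearsonr(coupling, defect_rate)
    print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")

    # Scatter plot of testability proxy vs. defect rate
    plt.scatter(coupling, defect_rate)
    plt.xlabel("Coupling (testability proxy)")
    plt.ylabel("Defect rate (defects/KLOC)")
    plt.title(f"Coupling vs. defect rate, r = {r:.2f}")
    plt.show()

What I'm after is a published study that has done essentially this with real project data, and reports how strong the correlation turns out to be.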
In the revised 1995 edition of "The Mythical Man-Month", Frederick P. Brooks, Jr. says that in the following book:
Jones, C., Assessment and Control of Software Risks. Englewood Cliffs, N.J.: Prentice-Hall, 1994. p. 619.
… Capers Jones "offers data that show a strong correlation between lack of systematic quality controls and schedule disasters."
Maybe in that book by Jones you will find the research conclusions you are looking for, since "schedule disasters" are, according to Brooks, caused in part by a loss of productivity that stems from an inefficient process of defect detection and correction.
Hope it helps.