Chidamber & Kemerer proposed several metrics for object-oriented code, among them depth of inheritance tree, weighted methods per class, number of member functions, number of children, and coupling between objects. Using a base of code, they tried to correlate these metrics to defect density and maintenance effort using covariance analysis.
Are these metrics actionable in projects? Perhaps they can guide refactoring. For example, weighted methods per class might show which God classes need to be broken into smaller, more cohesive classes that each address a single concern.
Has this approach been superseded by a better method, and is there a tool that can identify problem code, particularly in a moderately large project being handed off to a new developer or team?
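To make the God-class idea concrete, here is a minimal sketch that counts methods per class with Python's standard ast module. It uses a plain method count as an unweighted stand-in for weighted methods per class (each method weighted 1), and the threshold of 20 and the class names are arbitrary illustrations, not part of the CK definition:

```python
import ast

# Hypothetical source: one small class and one God class with 25 methods.
SOURCE = """
class SmallClass:
    def a(self): pass
    def b(self): pass

class GodClass:
""" + "\n".join(f"    def m{i}(self): pass" for i in range(25)) + "\n"

def methods_per_class(source: str) -> dict:
    """Return a mapping of class name -> number of methods defined in it."""
    tree = ast.parse(source)
    counts = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            counts[node.name] = sum(
                isinstance(child, (ast.FunctionDef, ast.AsyncFunctionDef))
                for child in node.body
            )
    return counts

counts = methods_per_class(SOURCE)
# Flag classes above an arbitrary threshold as refactoring candidates.
flagged = [name for name, n in counts.items() if n > 20]
print(counts)   # {'SmallClass': 2, 'GodClass': 25}
print(flagged)  # ['GodClass']
```

A real WMC implementation would weight each method by a complexity measure (e.g. cyclomatic complexity) instead of counting 1 per method, but even this crude count surfaces the outlier.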
This is a tough question, and there is probably no single “good” answer to it. The excellent comments posted by Daniel B. and Yannis Rizos are sound, and I’d argue further that the best metrics are those you understand, along with their causes and consequences.
Recommended reading on this is the Goal-Question-Metric paradigm by V. Basili [1], further described by L. Westfall [2]. Once you have defined your own needs, then your questions, then your metrics, and if the CK metrics give you insight into this reduced set, then go for it.
To get back to the initial question: yes, they are still used, even under the hood, as pointed out by Yannis. And for what they measure (complexity, maintainability) I find them relevant, though this is clearly my own opinion.
As a side note, the CK metrics were first defined in [3] before being challenged by Subramanyam and Krishnan in [4].
[1] V. R. Basili, G. Caldiera, and H. D. Rombach, “The goal question metric approach,” Encyclopedia of Software Engineering, vol. 2, Wiley, pp. 528–532, 1994.
[2] L. Westfall, “12 Steps to Useful Software Metrics,” Proceedings of the Seventeenth Annual Pacific Northwest Software Quality Conference.
[3] S. R. Chidamber and C. F. Kemerer, “A Metrics Suite for Object Oriented Design,” IEEE Transactions on Software Engineering, vol. 20, no. 6, pp. 476–493, 1994.
[4] R. Subramanyam and M. S. Krishnan, “Empirical Analysis of CK Metrics for Object-Oriented Design Complexity: Implications for Software Defects,” IEEE Transactions on Software Engineering, vol. 29, no. 4, pp. 297–310, 2003.
There are quite a lot of static analysis tools that extract code metrics, and pretty much all of them include some or all of the CK metrics. Most of these tools are language-specific; Sonar is the only one I know of that is not.
As for their usefulness: I’d say that guiding refactoring and identifying potentially problematic code is exactly what they are good for. But they should most definitely not be misused as goals in themselves.
Is there an open source tool to help?
Metrics Reloaded is a plug-in for IntelliJ IDEA that seems to do a good job of providing the six CK metrics. A simple sample is shown here: