I’m in the process of setting up a build server for personal projects. This server will handle all the normal CI stuff, including running large suites of tests (unit, integration, automated UI). While I’m working out the kinks for including code coverage output with MSTest, it occurs to me that there may be lots of tools out there that could give me metrics beyond just code coverage.
FxCop comes to mind as an example, though I’m sure there are others. Anything that can generate useful, reportable data and metrics would be good: class dependency charts (looking for Law of Demeter violations, for example), analyses of how classes and functions are used (looking for a function that isn’t called anywhere in the system except its tests, for example), and so on.
I’m not sure of the right way to formulate the question, since polling questions or “What’s your favorite code analysis tool?” threads aren’t very good. But I’m essentially just looking for recommendations on what metrics to gather and the tools that can gather them.
The eventual vision for something like this is to have the CI server run a bunch of automated tests and analysis tools and track performance metrics over time. Imagine a dashboard full of graphs plotting these metrics over time. The lines should all stay relatively at equilibrium, and if one starts to stray toward the negative, it’s an early indication of problems with the code. In the age-old struggle to quantify code quality for management, this sounds like a potentially helpful means of doing just that.
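As a rough illustration of the kind of early-warning check such a dashboard could run, here is a minimal sketch in Python. The metric names, sample numbers, and tolerance are all made up for illustration; a real setup would pull the history from the CI server’s database.

```python
# Sketch: flag a quality metric whose recent values drift away from
# its historical baseline. All names, numbers, and the tolerance are
# illustrative, not from any particular CI tool.

def baseline(values):
    """Mean of the historical window."""
    return sum(values) / len(values)

def is_straying(history, recent, tolerance=0.03):
    """True if the mean of the recent builds deviates from the
    historical baseline by more than `tolerance` (fractional)."""
    base = baseline(history)
    recent_mean = sum(recent) / len(recent)
    return abs(recent_mean - base) / base > tolerance

# Example: code coverage per nightly build (percent), fabricated data.
coverage_history = [92.0, 91.5, 92.3, 91.8, 92.1]
coverage_recent = [88.0, 87.2, 86.5]   # last three builds

if is_straying(coverage_history, coverage_recent):
    print("coverage is drifting - investigate")
```

The same check would apply to any of the metrics discussed below (churn, complexity, open tasks); only the tolerance would differ per metric.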
My quick suggestions for tools, based on what we have running on our CI server that builds our .NET projects:
- NCover – code coverage of your unit tests
- Microsoft Visual Studio Code Metrics PowerTool
- Maintainability Index
- Cyclomatic Complexity
- Depth of Inheritance
- Class Coupling
- Lines Of Code (LOC)
- Jenkins Task Scanner Plugin – high / medium / low task counts
- StatSVN (or similar for your particular SCM)
- code growth and churn
- who’s committing where
- A Static Analysis tool (Klocwork, Coverity) – look for SPMs (Silly Programmer Mistakes)
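To give a feel for one of the Code Metrics PowerTool numbers above: cyclomatic complexity is essentially 1 plus the number of decision points in a method. Here is a toy approximation in Python (real tools analyze the compiled IL or a full AST with much more nuance; this only shows the counting idea):

```python
import ast

# Toy approximation of cyclomatic complexity: 1 for the single entry
# path, plus one per decision point. Real .NET tools work on IL and
# handle many more constructs; this is only illustrative.

DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS)
                   for node in ast.walk(tree))

src = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    for _ in range(n):
        pass
    return "positive"
"""
print(cyclomatic_complexity(src))  # two if branches + one loop -> 4
```

A method scoring above roughly 10–15 on this metric is a common (if debatable) signal that it should be split up.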
These are the metrics we pay attention to on a daily basis:
- Unit Test Results
- Code Coverage – we want our unit testing to cover 100% of the code
- Open Tasks – this helps us track how much work is still left
- Code Churn – lots of churn means the code isn’t stable yet, which helps predict end dates
- Code Size – if there is a lot of code being added, it’s not done yet.
- Static Analysis results
The other metrics are used more as indicators of “improvement opportunities” in the code.
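For a concrete sense of what “churn” means: StatSVN computes it for Subversion, but the idea is just summing added and removed lines per file per commit. A sketch that tallies churn from `git log --numstat`-style output (the file names and numbers below are fabricated; a real pipeline would read the actual `git log --numstat` output):

```python
# Sketch: summing per-file churn from `git log --numstat` style output.
# Each numstat line is "<added>\t<removed>\t<path>". The sample data
# is fabricated for illustration.
from collections import defaultdict

numstat = """\
12\t3\tsrc/Billing/Invoice.cs
0\t40\tsrc/Billing/LegacyInvoice.cs
5\t5\ttests/InvoiceTests.cs
7\t1\tsrc/Billing/Invoice.cs
"""

churn = defaultdict(int)   # path -> added + removed lines
for line in numstat.splitlines():
    added, removed, path = line.split("\t")
    churn[path] += int(added) + int(removed)

# The files with the most churn are the least stable.
for path, total in sorted(churn.items(), key=lambda kv: -kv[1]):
    print(path, total)
```

Plotting these totals per file (or per module) over time is exactly the sort of trend line the question’s dashboard would show.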
For completeness, here are some other tools that have been mentioned. (We don’t currently use them, but they are worth the mention.)
- NDepend