What are the categories of cyclomatic complexity? For example:
1-5: easy to maintain
6-10: difficult
11-15: very difficult
20+: approaching impossible
For years now, I’ve gone with the assumption that 10 was the limit and that anything beyond it is bad. I’m analyzing a solution and trying to assess the quality of its code. Cyclomatic complexity certainly isn’t the only measurement, but it can help. There are methods with a cyclomatic complexity of 200+. I know that’s terrible, but I’m curious about the lower ranges, like in my example above.
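For reference, tools usually arrive at the number by counting one per decision point (each if, loop, case label, and && or || operand) plus one for the method entry. A minimal Java sketch, with a hypothetical classify method, just to show the counting:

```java
// Hypothetical example: each annotated construct adds one path.
static String classify(int score) {
    if (score < 0 || score > 100) {                      // +1 (if), +1 (||)
        throw new IllegalArgumentException("score out of range: " + score);
    }
    if (score >= 90) {                                   // +1
        return "A";
    }
    if (score >= 75) {                                   // +1
        return "B";
    }
    return "C";
}
// 4 decision points + 1 = cyclomatic complexity 5,
// i.e. "easy to maintain" on the scale above.
```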
I found this:
The aforementioned reference values from Carnegie Mellon define four rough ranges for cyclomatic complexity values:
- methods between 1 and 10 are considered simple and easy to understand
- values between 10 and 20 indicate more complex code, which may still be comprehensible; however, testing becomes more difficult due to the greater number of possible branches the code can take
- values of 20 and above are typical of code with a very large number of potential execution paths and can only be fully grasped and tested with great difficulty and effort
- methods going even higher, e.g. > 50, are certainly unmaintainable
When I run code metrics for the solution, the results show green for anything below 25. I disagree with this, but I was hoping to get other input.
Is there a generally accepted range list for cyclomatic complexity?
I suppose it depends on the capabilities of your programming staff, and in no small part on your sensibilities as a manager.
Some programmers are staunch advocates of TDD and will not write any code without writing a unit test first. Others are perfectly capable of producing good, bug-free programs without writing a single unit test. The level of cyclomatic complexity each group can tolerate will almost certainly vary substantially.
It’s a subjective metric; evaluate the setting in your code-metrics tool and adjust it to a sweet spot you’re comfortable with, one that gives you sensible results.
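Most static-analysis tools expose that threshold as a setting. As a minimal sketch, assuming a Java codebase checked with Checkstyle (whose CyclomaticComplexity check defaults to a max of 10), the limit can be pinned explicitly:

```xml
<!-- checkstyle.xml: flag any method whose cyclomatic complexity exceeds 10 -->
<module name="Checker">
  <module name="TreeWalker">
    <module name="CyclomaticComplexity">
      <property name="max" value="10"/>
    </module>
  </module>
</module>
```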
There are no predefined categories and no categorization would be possible for several reasons:
- Some refactoring techniques just move complexity from one point to another (not from your code into the framework or a well-tested external library, but from one location in the codebase to another). This reduces the measured cyclomatic complexity and helps convince your boss (or any person who loves presentations with steadily improving charts) that you spent your time making something great, but the code stays as bad as it was before (see the first sketch after this list).
- Conversely, when you refactor a project by applying design and programming patterns, cyclomatic complexity can get worse even though the refactored code is clearer: developers know the patterns (or are at least expected to), so the patterns simplify the code for them, but cyclomatic complexity doesn’t take this into account.
- Other, non-refactoring techniques don’t affect cyclomatic complexity at all while severely decreasing the complexity of the code for developers. Examples: adding relevant comments or documentation, or “modernizing” the code with syntactic sugar.
- There are simply cases where cyclomatic complexity is irrelevant. I like the example given by whatsisname in his comment: some large switch statements can be extremely clear, and rewriting them in a more OOP style would not be very useful (and would complicate the understanding of the code for beginners). At the same time, those statements are a disaster, cyclomatic-complexity-wise (see the second sketch after this list).
- As Robert Harvey already said above, it depends on the team itself.
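To make the first point concrete, here is a minimal Java sketch (the Order type and method names are hypothetical): an extract-method refactoring that lowers the worst per-method number without removing a single execution path.

```java
class PricingSketch {
    record Order(double base, boolean isMember, int quantity, String coupon) {}

    // Before: one method with cyclomatic complexity 5 (base 1 + four decisions).
    static double price(Order o) {
        double p = o.base();
        if (o.isMember()) p *= 0.9;        // +1
        if (o.quantity() > 10) p *= 0.95;  // +1
        if (o.coupon() != null) p -= 5;    // +1
        if (p < 0) p = 0;                  // +1
        return p;
    }

    // After: the worst method now scores 3, and the report looks greener,
    // but every path of the original is still there, just spread around.
    static double priceRefactored(Order o) {
        double p = o.base();
        if (o.isMember()) p *= 0.9;        // +1
        if (o.quantity() > 10) p *= 0.95;  // +1
        return clampToZero(applyCoupon(o, p));
    }

    static double applyCoupon(Order o, double p) {
        return o.coupon() != null ? p - 5 : p;  // +1 (ternary)
    }

    static double clampToZero(double p) {
        return Math.max(p, 0);  // branchless as written; complexity 1
    }
}
```

And for the switch example: a mapping like the one below scores 13 by the usual count (base 1 plus one per case label), which most tools paint red, yet it is about as readable as Java gets, and a polymorphic rewrite would scatter it across a dozen classes:

```java
// Clear to read, terrible on the metric: cyclomatic complexity 13.
static String monthName(int m) {
    switch (m) {
        case 1:  return "January";
        case 2:  return "February";
        case 3:  return "March";
        case 4:  return "April";
        case 5:  return "May";
        case 6:  return "June";
        case 7:  return "July";
        case 8:  return "August";
        case 9:  return "September";
        case 10: return "October";
        case 11: return "November";
        case 12: return "December";
        default: throw new IllegalArgumentException("month: " + m);
    }
}
```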
In practice, I’ve seen source code with good cyclomatic complexity that was terrible, and code with high cyclomatic complexity that I had little trouble understanding.
It’s just that there is no tool, and there couldn’t be one, that would flawlessly indicate how good or bad a given piece of code is, or how easy it is to maintain, just as you can’t write a program that tells you whether a given painting is a masterpiece or should be thrown away because it has no artistic value.
There are metrics which are broken by design (like LOC or the number of comments per file), and there are metrics which can give some rough hints (like the number of bugs or the cyclomatic complexity). In all cases, these are just hints, and they should be used with caution.