Note: after writing this I realize that this question is perhaps philosophical, but I’m interested in how the industry handles this scenario regardless.
I have recently been working with a code base that is not yet as testable as it should be. I’ve seen some junior developers check in code that breaks use cases, which is usually identified by another developer a few days after the check-in.
Assuming that we can’t achieve code coverage yet, are there technical solutions for holding developers accountable for bad check-ins, in addition to the non-technical approach of discussing the commit and having the dev address the issue in a new commit?
7
Two simple measures:
- Code reviews (already mentioned by others) before each check-in. You do not need to explain every detail of what you check in, but the reviewer should get an idea of the changes being checked in. You can configure your version control system to reject check-ins whose commit message does not contain a tag like “REVIEWED BY: …”.
- Agree with the rest of the team that if a check-in breaks any unit tests, the developer who performed the check-in must fix the code until those unit tests are green again (possible measure: write a bug and assign it to them). As an alternative: if a team member detects a check-in that breaks the unit tests, he or she can revert the changes. The author of the broken check-in is not allowed to check in again until the problem is fixed locally (unit tests are green locally).
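As a sketch of the first measure, here is a hypothetical Git `commit-msg` hook in Python that rejects any commit whose message lacks the review tag. The tag format and the hook mechanism are assumptions; adapt them to whatever version control system you actually use:

```python
#!/usr/bin/env python3
"""Hypothetical Git commit-msg hook: reject commits without a review tag."""
import re
import sys

def has_review_tag(message: str) -> bool:
    # Look for a line such as "REVIEWED BY: alice" anywhere in the message.
    return re.search(r"^REVIEWED BY:\s*\S+", message, re.MULTILINE) is not None

if __name__ == "__main__" and len(sys.argv) > 1:
    # Git passes the path of the commit message file as the first argument.
    with open(sys.argv[1]) as f:
        if not has_review_tag(f.read()):
            sys.stderr.write("Rejected: message must contain 'REVIEWED BY: <name>'\n")
            sys.exit(1)
```

Dropped into `.git/hooks/commit-msg` (and made executable), this blocks the commit locally; a server-side `pre-receive` hook would enforce the same rule centrally.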
Note: point 2 has been edited to take gnat’s comment into account.
5
If you use Git for version control, check out Gerrit. Sure, it can be a pain in the ass, but it does require code reviews to take place before a commit is merged into the repository.
4
You need to implement a Continuous Integration (CI) system like TeamCity. It should not be some developer noticing the problem a few days later. With a CI system, as soon as a check-in occurs the code is automatically built and the automated tests are executed. Within minutes of the check-in, the CI system will alert everyone that the build has been broken and show which check-in caused it.
This way it is all completely objective and there is no finger-pointing about who is at fault. Unlike developers, the CI system uses only the latest code from source control to do its testing. There are no local changes and no funky configuration differences. Most systems have a tray icon that turns red when the build is broken. All developers should have that icon installed and keep track of whether the build is OK or broken.
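What a CI server does on each check-in can be sketched as a step-runner loop. The `make build` / `make test` commands below are placeholders for your project’s actual build and test steps, and a real system like TeamCity adds notifications, build history, and the tray icon on top; the runner is injectable so the loop can be exercised without a real repository:

```python
import subprocess

# Hypothetical CI pipeline: the build/test commands are placeholders.
CI_STEPS = [
    ["git", "pull", "--ff-only"],  # take only the latest code from source control
    ["make", "build"],             # assumed build command
    ["make", "test"],              # assumed automated test command
]

def run_ci(run_step=lambda cmd: subprocess.run(cmd).returncode) -> str:
    """Execute each step in order and report the first one that fails.

    A real CI server would now alert the whole team and name the
    check-in that broke the build.
    """
    for cmd in CI_STEPS:
        if run_step(cmd) != 0:
            return "broken at: " + " ".join(cmd)
    return "green"
```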
1
I think your team needs to attend an agile Scrum course and make a pact to follow the methodology.
When developers check in small code changes tied to small tasks, it reduces the chance of developers going off the rails and checking in conflicting code. This supports continuous integration.
“I’ve seen some junior developers check in code that breaks use cases, which is usually identified by another developer a few days after the check-in.”
Even in a perfect world this still happens. If you find it’s happening frequently and/or only after several days of development, then you have a problem.
I think you’ll find the team will work together more effectively with a planned iteration. I suggest you try out a 2-week sprint. Start the sprint with an in-depth Scrum meeting. Go through the backlog of bugs and work items, accepting the top priorities and writing them down on post-it notes with time estimates, using planning poker cards.
Line up a Scrum dashboard or whiteboard and ensure people only work on one thing at a time. Once they have finished a task, they need to write a unit test for it. Once they have a working unit test, they need to get a code review. Constructive code reviews are one of the best techniques to prevent developers from accidentally checking in conflicting code.
After the Code Reviewer has signed off, the developer does the Check-In Dance:
- Let the rest of the team know a change is coming if it’s a significant update.
- Get the latest code from source control.
- Do a merge on any conflicts.
- Run the build locally and fix any problems found.
- Run the Unit Tests locally and fix any problems found.
- Commit the changes to source control.
- Stop coding until the build passes.
If the build breaks, drop everything else and fix the build.
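The dance above can be sketched as a small script. The build and test commands are stand-ins for whatever your project uses, and the runner is injectable so the sequence can be tried without a real repository:

```python
import subprocess

# The check-in dance as an ordered command list; the build/test
# commands are placeholders for your project's own tooling.
DANCE = [
    ["git", "pull", "--rebase"],  # get the latest code and merge conflicts
    ["make", "build"],            # run the build locally
    ["make", "test"],             # run the unit tests locally
    ["git", "push"],              # commit the changes to source control
]

def check_in_dance(run=lambda cmd: subprocess.run(cmd).returncode) -> str:
    """Stop at the first failing step: never check in over a broken step."""
    for cmd in DANCE:
        if run(cmd) != 0:
            return "stopped at: " + " ".join(cmd)
    return "checked in"
```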
Team Foundation Server (TFS) facilitates all this stuff out of the box.
You should fix up the tests before moving forward. But just because 80% of the code is unit tested, that doesn’t mean it’s bug-proof. Developers have to be active in testing how their code interacts with the rest of the system. This kind of integration is much easier to achieve using non-waterfall methodologies.
I second code reviews. Some tools let you do reviews before the code is committed like Github pull requests. Others let you see who touched each line like SVN blame.
Personally, I’d rather my team members commit code to the trunk frequently even if it isn’t perfect. It increases the chances someone else will notice an issue and allows for the benefits of continuous integration.
We do formal code reviews at the end of a task. I also do “random” code reviews as people work and commit. The random is in quotes, because I review more heavily at the beginning to train people well how to test. I’ll also offer to pair to test so developers gain the skills. The random is also in quotes because people who write worse code/tests get chosen more often for the random reviews. I keep the random reviews anonymous – management never hears about them – they are a purely team practice.
Either of the following should drastically reduce the risk of a junior committing bad code:
- Pair programming, preferably with a senior developer as the navigator
- Pre-commit code reviews by a senior developer
A nice thing about these is that they are collaboration-oriented, not blame-oriented, which IMO is definitely something you should favor when integrating junior programmers.