I’ve convinced our organization to move to a full-team definition of “Done”, i.e. one that includes QA testing rather than just code completion. As a result, we can now tell more accurately where the bottlenecks are and see what our real velocity is. The project also integrates continuously and deploys regularly. Prior to this, code-complete items were considered done, QA only received the delivered code at the end of the sprint, and they were routinely still getting around to accepting stories from 3-5 sprints ago.
So, as you might expect, things are running much more smoothly, but I have a question that hasn’t quite been answered in my searching. Even with sprinting, there seems to be a point in the sprint after which a story finished and deployed by developers leaves insufficient time for QA. Depending on the story and the volume of items still awaiting testing, that point seems to be about 2-3 days before sprint end.
Currently, we simply continue working and move all items not accepted by QA (i.e. things in progress and things deployed but not yet tested) to the next sprint if testing cannot be completed. Is this an OK practice? Some devs who were used to working under the old system complain that this doesn’t give the developers credit for completing the work that they were assigned, but I feel the idea is to get a picture of what the whole team did, not just development. The end user gets no value from things that are not tested and deployed. However, this process tends to pre-load the next sprint, which makes planning slightly more problematic.
Additionally, I have heard that having an end-of-sprint developer lull is actually OK. Should we instead be looking to leave some open developer time at the end?
It is your team’s responsibility — the whole team — to make sure everything is done. If the devs are finished coding and there are still tests to write or perform, they should pitch in and help the testers.
There shouldn’t be any “open developer time” — why penalize your testers by giving the developers free time? The developers need to help with the testing. It will make your product better, it will make your developers into better developers, and it will make your testers feel more valued.
The developers should not be “pre-loading” the next sprint while work remains in the current one. There are exceptions, of course, but in general they shouldn’t start any new work until all of the existing work is done.
Some devs who were used to working under the old system complain that this doesn’t give the developers credit for completing the work that they were assigned
This is good, because developers aren’t supposed to get any credit at all. The team gets credit, and by not giving the devs credit, the whole team is made stronger.
Some devs who were used to working under the old system complain that this doesn’t give the developers credit for completing the work that they were assigned
Did they really complete the work in time if there isn’t enough time left in the sprint to finish the test cycle? Maybe they did, maybe QA needed more time than expected; who knows? Either way, the customer is not interested in which part of your team is to blame.
One of the core ideas of agile is to adjust your planning with every sprint. If your VCS and branching model lets you simply move feature implementations from one sprint to the next, then use that: not delivering untested code will surely give you a better product, and your customers will value that. Just make sure you do not pollute your released product with “developed, but untested” code; such code is always unfinished.
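For illustration, here is a minimal sketch of one way a branching model can keep untested work out of the release. It assumes Git and a feature-branch workflow; the branch names (main, feature/story-123) are purely illustrative and not taken from the question.

```
# Each story lives on its own branch; deploy to the test environment from here.
git checkout -b feature/story-123 main
# ... commits ...
git push -u origin feature/story-123

# Only after QA accepts the story does it merge into the branch you release from,
# so "developed, but untested" work never reaches the released product.
git checkout main
git merge --no-ff feature/story-123
git push origin main

# If the sprint ends before QA accepts the story, the branch simply carries over
# into the next sprint; nothing has to be backed out of the release branch.
```

The specific commands matter less than the principle: any workflow in which unaccepted work stays off the release branch lets you carry a story into the next sprint without touching what you ship.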
And, as you already noted in your question, you have to make sure you adapt your planning for the next sprint. When your devs deliver a newly implemented feature in the first hour of the new sprint (because it was in fact developed during the previous sprint), your QA people will need more time for testing in the current sprint, or fewer additional features to test. If that happens regularly, and your devs consistently produce more output than QA can test, you have to change something in your process or your organization (maybe you need more QA people, maybe you should assign your devs more tasks in the area of test automation).