When a bug is fixed, the developer sets its status to “resolved” and the bug is reassigned back to the person who created it. In our case this is usually the product owner – we don’t have dedicated testers.
But what’s a good process for controlling how/when the PO tests the software? Should he be given the latest build after each bug is resolved/checked-in? Or what about every morning? Or should he only receive a build at (or close to) the end of the iteration, to include all of that iteration’s new functionality and bug fixes?
We are using TFS by the way.
In most agile environments that I’ve seen, a continuous integration environment exists. This produces new builds and executes a test suite on that build on a regular basis – after every check-in to the integration branch of the repository, every night, or every week. It depends on how long and complex the build/test cycles are – it may not be feasible to do it extremely frequently.
In terms of when a build should go to a person for user acceptance testing, after every bug fix seems a bit too frequent. You should have automated unit and system tests that ensure your builds are fairly stable and that catch any key issues. The exact time to carry out user acceptance testing depends on the number of bugs fixed and the length of your iteration. The earlier you get feedback on the work, the better, since that leaves time to resolve any issues with the fixes or new features.
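To illustrate the kind of automated gate that might run on each build before it ever reaches the PO, here is a minimal pytest-style sketch. Everything in it is invented for illustration – `InvoiceService` is a stand-in for whatever your system actually is, not a real API:

```python
# Hypothetical smoke tests that gate a build before acceptance
# testing. InvoiceService is an invented stand-in for the system
# under test; in a real setup these would run in CI on every
# check-in to the integration branch.

class InvoiceService:
    """Minimal stand-in for the application being built."""

    def create_invoice(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return {"amount": amount, "status": "open"}


def test_build_smoke():
    # A build that fails basics like this never goes to the PO.
    svc = InvoiceService()
    invoice = svc.create_invoice(100)
    assert invoice["status"] == "open"


def test_rejects_invalid_input():
    # Guards a previously fixed bug against regression.
    svc = InvoiceService()
    try:
        svc.create_invoice(-5)
        assert False, "expected a ValueError"
    except ValueError:
        pass
```

The point is not the specific assertions but that a machine, not the PO, does this level of checking on every build.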
My recommendation would be to make a build available for user acceptance testing as frequently as possible (preferably every morning). This build should already have undergone developer testing and automated testing. The user acceptance testing should happen as often as the schedule permits, though it may not happen daily. Note the risk of leaving user acceptance testing too late – feedback on new development or fixes may have to be deferred to the next iteration.
It seems to me that you are mixing up two kinds of testing.
Functional Testing is when you check for regressions or that a bug is fixed. This can be done via automated tests or a manual test by a developer. E.g. “does functionality X work?”
Acceptance Testing is when the PO verifies that a story or a unit of work implements what was asked. This can also be automated but it’s semantically separate from the previous. E.g. “does the software now allow me to achieve Y?”
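To make the distinction concrete, here is a hedged sketch in Python. The `ShoppingCart` class, its methods, and the bug number are all invented for illustration; only the shape of the two tests matters:

```python
# Invented example: ShoppingCart stands in for "the software".

class ShoppingCart:
    def __init__(self):
        self.items = {}

    def add(self, sku, qty=1):
        self.items[sku] = self.items.get(sku, 0) + qty

    def remove(self, sku):
        # pop with a default so removing an absent SKU is a no-op
        self.items.pop(sku, None)

    def total_quantity(self):
        return sum(self.items.values())


# Functional test: "does functionality X work?" It pins down one
# behaviour, here guarding a (hypothetical) fixed bug against
# regression.
def test_remove_missing_sku_does_not_crash():
    cart = ShoppingCart()
    cart.remove("SKU-404")  # hypothetical bug: used to raise KeyError
    assert cart.total_quantity() == 0


# Acceptance test: "does the software now allow me to achieve Y?"
# It walks a user story end to end, the way a PO would.
def test_user_can_build_an_order():
    cart = ShoppingCart()
    cart.add("SKU-1", 2)
    cart.add("SKU-2")
    cart.remove("SKU-1")
    assert cart.total_quantity() == 1
```

The functional test cares about one mechanism; the acceptance test cares about a goal. Both can be automated, but they answer different questions and fail for different reasons.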
A lot of people seem to miss the difference between the two, but in my experience the gap is huge, and mastering these two forms of testing is fundamental to a successful project.
POs don’t test software. They perform a cursory check to make sure that the product being developed is going in the correct direction. This means that there has to be someone else making sure the software is tested. This role, though, should in turn completely refrain from judging where the product is going in terms of features.
Projects that mix up the two things tend to fail:
- QAs will become unelected mini-POs and fail at that
- POs won’t test your software, and if they do, they’ll lose the ability to judge it from a distance.
You may wish to consider having a separate QA role for this. QA can dedicate themselves to following up and making sure new functions work, bugs are fixed as expected and regression suites still show everything else working.
Disclosure: I am in a QA role that I basically created for myself to address these issues.