As a member of our company’s QA team, I frequently get entirely unenthusiastic responses from developers to test results in our agile, web-based software-as-a-service shop. Most of our testing is manual, since automated testing doesn’t really make sense for us right now, and developers are usually reluctant to listen to any change suggestions beyond those that prevent JavaScript/500 errors. I understand that fixes and changes require work, and our developers are rarely short on work to do, but I don’t think developers respect QA’s input.
Unfortunately, our product owners are effectively absent: acceptance testing doesn’t exist, and user stories are usually only one sentence long and don’t give the developer much to go on. There is no feedback mechanism to development other than from customers x weeks later, who aren’t designers or developers either, of course, and whose suggestions are all over the map.
I am technically competent at worst, capable of simple development on our LAMP stack, and I feel confident that developers respect my knowledge. However, I have for the most part given up on offering feedback beyond what prevents critical errors: those affecting data integrity or bottom-line functionality.
This has raised the question of whether seniority or pay grade is a significant factor in how seriously developers value QA’s input. In our case, where we don’t do automated testing and QA members likely don’t have as much technical expertise, it makes some sense that we earn less than developers (between 60% and 70%, depending on time in grade). I don’t believe the argument that the opinion of the team member with the biggest paycheck matters most; however, I can imagine how difficult it is to take feedback from team members who have a year or two less experience, are not as technically knowledgeable, and make noticeably less. In the end the best idea should win, but unfortunately that might only be decided after the enhancement has been in production for several months and users either love it or hate it.
It sounds to me like you have a dysfunctional team with a cowboy culture and you’re trying to figure out the root cause. You are proposing a hypothesis that maybe developers don’t respect test because of some sort of implicit hierarchy, length of service, or some other factor, but you’re not really presenting evidence for the case; you’re essentially asking, “could this be what’s wrong?”
In fact, it sounds like many things are wrong with the organization, and any issues driven by perceptions of status or power are merely symptoms of poor leadership. You are not an agile shop if you are not practicing any of the enabling mechanisms of agile development. One-line stories are not agile; those are mere bullet points on a wishlist. A story contains a business motivation, a description of the customer’s interaction with the product, and a definition of when the story is done. If you don’t have those three things, you don’t have enough information to decide what should happen or how to know you’ve done it right, so the story will never be “done”. That’s treading water, not making progress. Developers will never be short of work in such organizations, because they’ll constantly be firefighting, aided only by tiny buckets of their own urine.
Some of the “definition of done” can be part of a general team agreement, but specific acceptance criteria for any story, even if terse, are essential.
There are very few cases in which “automated testing doesn’t really make sense for us right now”. It may be that the test team isn’t the right organizational locus to deliver automated testing, especially early on, but it always makes sense to have automated testing. While it’s OK in my book for developers to do a little bit of exploratory coding without formal automated tests (I’m not a TDD or even BDD purist), it seems horrifying to me as a developer that I’d consider releasing code to a test organization with no developer-written automated tests. Unit tests and BDD tests written by developers, with scenarios preferably written by product owners, are essential parts of agile delivery.
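To make that concrete, here is a minimal sketch of the kind of developer-written test I mean, using PHPUnit since you’re on a LAMP stack; the SlugGenerator class and its behavior are invented purely for illustration:

    <?php
    // Minimal sketch of a developer-written unit test (PHPUnit assumed to be
    // installed via Composer). SlugGenerator is a made-up example class; the
    // point is only the shape of the safety net code should ship with.
    use PHPUnit\Framework\TestCase;

    final class SlugGenerator
    {
        public static function fromTitle(string $title): string
        {
            $slug = strtolower(trim($title));
            $slug = preg_replace('/[^a-z0-9]+/', '-', $slug); // collapse spaces/punctuation
            return trim($slug, '-');
        }
    }

    final class SlugGeneratorTest extends TestCase
    {
        public function testLowercasesAndHyphenatesTitles(): void
        {
            $this->assertSame('quarterly-report', SlugGenerator::fromTitle('Quarterly Report'));
        }

        public function testStripsCharactersThatWouldBreakUrls(): void
        {
            $this->assertSame('profit-loss-q3', SlugGenerator::fromTitle('Profit & Loss: Q3?'));
        }
    }

Tests like these run in seconds on every check-in, so by the time a build reaches your team the obvious regressions are already caught and your exploratory testing can focus on the interesting cases.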
Figuring out the best use of a test organization in an agile team is a tricky problem for which there is no single formula for success. In an organization that has no definition of done, it will be very difficult to demonstrate value, because there’s no way of knowing whether the test team has contributed to “done.” I’ve worked in old-school waterfall teams as well as agile teams with (1) no distinct test organization, (2) moderately integrated test teams, (3) partially integrated test teams with separate stories and work product, and (4) gated release models, where some QA involvement happened alongside ordinary development but there was a distinct “test pass” for some legacy or regulatory reason.
The “right” model for the test team will depend largely on the technical sophistication of its members. Having test team members with moderate or better technical sophistication pair with a developer while writing code, suggesting cases for automation, can be a great model. But a test team can also be reasonably effective by validating that stories have measurable acceptance criteria, doing some exploratory testing as developers check in code, and augmenting developer unit testing with integration scenarios and special cases. It’s even sort of OK to have a throw-the-build-over-the-wall approach in some circumstances, as long as there’s a way of converting stories into test cases and some sort of feedback loop with the product owners and developers.
But you won’t really get there without active buy-in from your management and product owners on what the organizational priorities are and what test’s role should be. I doubt there have been any serious conversations in your team beyond “oh, I’ve worked on other software projects, so I know we need some sort of test effort. Let’s hire a test team.” Most average and some above-average developers will tolerate organizational inertia that doesn’t demand they engage with the test team. For real progress to be made, some management-led or consensus-driven initiative to drive better development practices needs to happen.
As a developer and a former STE, STE Lead, and SDET, I have nearly zero interest in how senior the test team members are or how much they are paid. What I care about is how they can help me ship better software. I personally like leveraging the skills of people who can work through tons of scenarios that I can’t meaningfully explore given the team’s desired velocity; I’d be happy to walk a test team member through how to start from existing unit tests or scenarios and build better coverage, or to read test plans and provide feedback. But I might settle for “just good enough” coverage on my end and simply hope that the product owners and testers catch what I miss, if that’s all the organization appears to value.
Somehow, you are going to need to start selling either your management or your most sympathetic developers on taking a more, dare I say it, agile approach to development and quality. I can’t give you a formula for this, because I’ve not been great at driving such change in organizations resistant to it, but your best bets are a business-value case (talking to the business side) or perhaps a craftsmanship/continuous-improvement case on the technical side.
As a member of our company’s QA team, I frequently get entirely unenthusiastic responses from developers to test results in our agile, web-based software-as-a-service shop.
That’s because:
Our product owners are effectively absent: acceptance testing doesn’t exist, and user stories are usually only one sentence long and don’t give the developer much to go on. There is no feedback mechanism to development other than from customers x weeks later, who aren’t designers or developers either, of course, and whose suggestions are all over the map.
QA feedback only has teeth when you have clear, measurable, actionable requirements that drive both the development and testing efforts, and developers who care about producing a quality product that meets those requirements. You must have requirements that allow you to declare success. You can’t constantly move the goalposts and expect developers to be enthusiastic about scoring touchdowns.
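To illustrate what “measurable and actionable” buys you, here is a hypothetical PHPUnit sketch (the DiscountCalculator class and the 10%-over-$100 rule are invented for the example): a one-line wish like “discounts should work” gives a tester nothing to assert, whereas “orders of $100 or more get a 10% discount” pins down a test that developer and tester alike can accept as the definition of success.

    <?php
    // Hypothetical example: the discount rule below is invented purely to show
    // a requirement precise enough that a test can decide pass or fail.
    use PHPUnit\Framework\TestCase;

    final class DiscountCalculator
    {
        public static function discountFor(float $orderTotal): float
        {
            // Stated rule: orders of $100 or more get 10% off; smaller orders get none.
            return $orderTotal >= 100.0 ? round($orderTotal * 0.10, 2) : 0.0;
        }
    }

    final class DiscountCalculatorTest extends TestCase
    {
        /** @dataProvider orderTotals */
        public function testDiscountMatchesTheStatedRule(float $total, float $expected): void
        {
            $this->assertEqualsWithDelta($expected, DiscountCalculator::discountFor($total), 0.001);
        }

        public static function orderTotals(): array
        {
            return [
                'just under the threshold' => [99.99, 0.00],
                'exactly at the threshold' => [100.00, 10.00],
                'well over the threshold'  => [250.00, 25.00],
            ];
        }
    }

Until someone writes the requirement down that precisely, neither the developer nor the tester can say whether the feature is done, which is exactly why the feedback loop stalls.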
See Also
Characteristics of Good Requirements
My instinct is that the issue here is really a combination of your expectations and the respect shown to you by the team management.
QA’s main goal is to document product deviations from the spec, usability issues, and performance issues. The key word is document.
The product manager is responsible for deciding priorities, i.e., where developers spend their effort, and it may be that at the moment they have decided the priority is to implement features and address defects later (the infinite bug strategy). While this is known to be a poor long-term strategy, there are often situations where, in the short term, it is the politically expedient thing to do (e.g., showing progress to a customer at next week’s meeting).
As long as you are documenting the defects, you are doing your job, and the product manager has to justify the decision to ship with known defects.
That said, the secondary issue is the respect shown to you by the team’s managers. If they see you as a problem because you keep finding bugs, while all they want is to get the product out and earning money, then the solution is the same: keep identifying the bugs, but accept that the decisions to ship and to allocate development resources (and therefore the blame) belong to the managers, and that your role is to give them accurate and timely information on which to base those decisions.
If my instincts are right, I’d start looking for a more successful organisation to work for, because this project is likely to end badly.
Is seniority/paygrade an important factor for effective QA members?
In short: no, it is not. An effective QA member is one who provides quality assurance of deliverables (for example, module functionality) by verifying them against the requirements. They are valuable team members who spot requirement gaps and incorrect implementations of business rules in the application and in the database.
As developers, we respect and value a well-versed QA who has a good understanding of the software development process, is open to learning new things, possesses the scripting skills to check the correctness of data in the database, and brings good humor and a positive attitude to the team.
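For instance, the sort of database check I have in mind might be no more than a short script; the table names, credentials, and orphaned-orders rule below are purely hypothetical:

    <?php
    // Hypothetical QA data-integrity check: find order rows whose customer no
    // longer exists. Adjust the DSN, credentials, and schema for your own system.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'qa_readonly', 'secret', [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    $orphans = $pdo->query(
        'SELECT o.id
           FROM orders o
      LEFT JOIN customers c ON c.id = o.customer_id
          WHERE c.id IS NULL'
    )->fetchAll(PDO::FETCH_COLUMN);

    echo $orphans
        ? count($orphans) . " orphaned orders: " . implode(', ', $orphans) . "\n"
        : "No orphaned orders found.\n";

A QA member who can produce and explain a check like that tends to earn credibility with developers far faster than any job title or pay grade.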