When I’m coding, I know there are many ways to check whether my code is good. First is testing: I can write unit tests or exercise the software myself and see whether it works. Once it works, I can analyze the coupling of the code and so on, then refactor to make it better. In that sense, I’m confident when coding because I have a way to know whether what I did works and ways to improve it if it doesn’t.
When designing the software, though, I mainly have difficulty analyzing whether the use cases I write are good. I’ve read a lot of theory on use cases, but the practice has been hard because I always find myself questioning things like: “Should this be included? Is this necessary to say here? Isn’t it missing something?” and all sorts of things like that.
So, how can I “test” my use cases? How do I know whether they are well written? I know that in an OOAD iterative approach we don’t try to get everything right on the first iteration, but a use case should at least contain enough information to get me started coding, and I don’t know how to tell whether it does.
I would have QA test them.
Seriously: if QA can read a use case and understand the program flow without a priori knowledge of the code (knowledge that tends to bias a developer when testing anything), then it is likely of at least decent quality.
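One informal way to apply that check yourself is to see whether every numbered step and extension in the use case maps cleanly to a test someone could write without asking you questions. Here is a minimal sketch in Python; the “Register Account” use case, its steps, and the stub service are all invented for illustration, not taken from your project.

```python
# Hypothetical "Register Account" use case, checked step by step.
# The class and flow below are invented examples, not a real API.

class AccountService:
    """Minimal stub standing in for the system the use case describes."""

    def __init__(self):
        self._accounts = {}

    def register(self, email, password):
        # Steps 2-3 of the main flow: validate the input, create the account.
        if "@" not in email:
            raise ValueError("invalid email")
        if email in self._accounts:
            raise ValueError("email already registered")
        self._accounts[email] = password
        return True

    def is_registered(self, email):
        return email in self._accounts


def test_main_success_scenario():
    """Main flow as the use case states it:
    1. User submits an email and password.
    2. System validates the input.
    3. System creates the account and confirms registration."""
    service = AccountService()
    assert service.register("alice@example.com", "s3cret") is True
    assert service.is_registered("alice@example.com")


def test_extension_duplicate_email():
    """Extension 3a: the email is already registered, so the system refuses."""
    service = AccountService()
    service.register("alice@example.com", "s3cret")
    try:
        service.register("alice@example.com", "other")
        assert False, "duplicate registration should be rejected"
    except ValueError:
        pass


if __name__ == "__main__":
    test_main_success_scenario()
    test_extension_duplicate_email()
    print("every step and extension in the use case has a matching check")
```

If a step or extension can’t be turned into something this concrete, that is usually the spot where the use case is vague or missing information.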
When it comes to validating requirements, use cases, and other documentation that typically comes near the start of a project, you are better off involving as many stakeholders as possible in the process. If the group decides that an artifact is of good quality and that each team member can get value out of it, that is quality.
For example: the customer agrees the use case accurately reflects what they want. QA agrees they can test it. Development agrees they can code it. And everyone agrees it is specific enough to remove ambiguity without unduly restricting the implementation. That is one measure of quality.