Basically, we have three main projects: two of them are web services, and the other is a web application. While I’m satisfied with covering as much as we can of our web services with functional tests (all three projects have their own unit tests), functional tests for the web application take a lot of developer time to implement. By a lot I mean two times, or sometimes more, the time it takes to implement the functionality being tested, unit tests included.
The manager’s policy is to test every single piece of functionality we add, even if it is not business critical (e.g. a new CRUD).
I do agree with testing all of the web services’ functionality, because it is hard to test them manually; also, these tests run fast and don’t take too much time to implement.
So, what’s the value in spending more time writing functional tests than writing system code, unit tests, and fixing QA tickets? Is this normal? Shouldn’t we be writing functional tests only for critical functionality and let QA do regression tests on non-critical functionality?
Note: we are not developing medical software, NASA software, or anything that critical.
4
Functional tests are very important. Yes, they take time to write but if you are writing the right functional tests, they will be more than worth it.
There are a few good reasons to do automated functional tests on an application.
- When a new feature is added to your web site, it lets you know right away if changes made for that new feature break any other functionality on your site.
- It’s documented knowledge of how the application runs and works together to achieve the business requirements.
- When it’s time to update a 3rd party library, you can update it and run your functional test suite to see if anything breaks. Instead of having to go through every page yourself, you can have a computer do it for you and give you a list of all the tests that broke.
- Load testing! You can simulate thousands of simultaneous users all hitting your site at once and you can see where your site slows down and buckles under the pressure. You can see how your web site behaves long before you get a late night call that the site has crashed.
- Functional testing takes time to do manually. Yes, it takes a long time to write the cases, but if you had to sit down with a binder of 500 pages of tests to complete before you could ship the product, you’d wish you had the automated tests!
- Testing documents get out of date fast. When a new feature is added, you have to make sure to update the master testing document. If someone skips some tests, you all of a sudden get bugs creeping into pages that are “done and tested”. I currently work in an environment like that, and I can assure you, it’s a nightmare.
In the end, yes it takes time to write these cases, but you should take pride in writing them. It’s your way of proving, beyond a shadow of a doubt that your code works and it works with all the other features out there. When QA comes to you and says there is a bug, you fix it, and then add it to your test suite to show that it’s fixed and make sure it never happens again.
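Turning a QA bug into a permanent suite entry, as suggested above, can be as small as this. The `normalize_username` function, its whitespace bug, and the ticket number are all invented for illustration:

```python
def normalize_username(raw):
    """Fixed version: the original (hypothetical) implementation forgot to
    strip whitespace, which QA reported as a login failure for pasted names."""
    return raw.strip().lower()

def test_bug_1234_trailing_whitespace_breaks_login():
    # Regression test for the made-up QA ticket: once it passes,
    # it stays in the suite so the bug can never silently return.
    assert normalize_username("Alice ") == "alice"

test_bug_1234_trailing_whitespace_breaks_login()
print("regression test passed")
```

The point is the pattern, not the code: every fixed bug leaves behind a test named after its ticket, so a reappearance is caught the night it happens.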
It is your safety net. When someone goes in and hijacks a stored proc and makes a small change so it’ll work with their code, you’ll catch that it has broken 3 other features in the process. You’ll catch it that night and not the night before the deadline!
As for writing functional tests for only system-critical functions: that won’t give you the whole picture, and it’ll allow bugs to sneak through. All it takes is one little feature that isn’t system critical but interacts indirectly with a system-critical function, and you have the potential for a bug to be introduced.
5
More than 2 times … seems a bit much to me. You might want to analyze the reasons for this; they could include:
- bad tool support for creation and maintenance of the tests
- the contracts of the web services are not sufficiently described in the design, so developers need to work out the contracts while testing, which is usually a time-consuming alignment process
Talk to your developers.
Assuming you are developing in sprints, having these functional tests is just part of the sprint. It ain’t done without these tests. If you don’t have them, your time for integration testing after the development phase might double.
2
Is spending more time implementing functional tests than implementing the system itself normal?
Absolutely. Writing really good tests is likely to take the majority of the time in many (good) shops.
So a 2-1 ratio is fine. Less experienced developers often don’t take all the time needed for tests into account.
2
There is the law of diminishing returns. Assuming you write tests for the riskiest code first, the value generated by further tests diminishes over time.
Unit tests are code, so they will contain bugs (just like all other code). Fixing those bugs takes time.
In my experience unit tests contain far more bugs than the system they are testing, and fixing these is a continuous burden.
This is about quality.
If you need to get to market, you will develop your app as quickly as possible. You might even have no automated tests at all =) but you will get your app to your audience before your competitors.
But if you know that your audience won’t go away, you will do anything you can not to disappoint them. Every bug ticket will bring down your reputation. Imagine that one bug removes 50 percent of your reputation, the next another 25 percent, and so on. So can there really be too many tests?
2
If by “is it normal” you ask if it is common, no, it certainly isn’t. A lot of dev teams have poor test practices (I belong to one), and even quality books I’ve read advise spending roughly as much time coding the tests as the functionality. If by normal you ask if it is healthy, it depends, but spending twice as long on tests as needed is better than having no tests.
It doesn’t have to be critical. When you test a functionality, you test something useful for end users, and it is your responsibility to know (and not guess) that it is working correctly at all times. If you need twice as much time for that objective, then it should be done that way, if possible.
It’s also possible your policy is slightly too strict about automated tests, but it’s hard to tell without knowing the quality they are aiming at, their resources, and what else they could allocate them to.