Say we have our route-finding algorithm:
def myHeuristicTSP(graph):
    # implementation
    return route
Now we want to unit test this:
import unittest

class TestMyHeuristicTSP(unittest.TestCase):
    def testNullGraphRaiseValueError(self):
        self.assertRaises(ValueError, myHeuristicTSP, None)

    def testSimpleTwoNodeGraphReturnsRoute(self):
        self.assertEqual(expectedResult, myHeuristicTSP(input))
The question is: for an exact (non-heuristic) TSP algorithm, we can feed in a variety of graphs and check that it always returns the absolute shortest route. But a heuristic algorithm, while still deterministic, is less predictable. Are its unit tests then simply meant to capture how the algorithm is meant to work, and to find those edge cases?
---
For a heuristic algorithm which is not supposed to return the ideal solution but a “good enough” one, you would have various test cases and check the following (a combined sketch appears after this list):
- Is the solution actually valid? You certainly want to make sure your route-finding algorithm doesn’t return paths which are impossible or which don’t actually lead from start to finish. You might not be able to prove that the solution is ideal, but you should at least be able to verify that the return value actually is a solution.
- Is the solution “good enough”? You should have some requirements which define how much worse the algorithm’s result may be than the ideal solution. You should have test cases where the ideal solution is known (or at least a solution considered good enough to serve as a comparison standard) and confirm that the solution provided by the algorithm is not more than x% worse.
- Is the algorithm fast enough? You often use a heuristic approach on the presumption that it makes up for its lack of precision by being much faster. To verify this you should measure its runtime and make sure it is indeed faster than an algorithm which computes the exact solution. Runtime measurements are always a bit fuzzy, so exceeding the expected runtime should be a warning, not an error (when your unit-testing framework allows you to distinguish between warnings and errors).
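A minimal combined sketch of those three checks, assuming an invented fixture makeTestGraph() with a precomputed optimal length, a hypothetical tourLength() helper, a graph object exposing nodes and neighbors(), and made-up bounds (10% quality margin, 1 s time budget):

import time
import unittest
import warnings

class TestMyHeuristicTSPQuality(unittest.TestCase):
    def setUp(self):
        # hypothetical fixture: a small graph whose optimal tour length is known
        self.graph = makeTestGraph()
        self.optimalLength = 42.0  # e.g. precomputed once with an exact solver

    def testRouteIsValid(self):
        route = myHeuristicTSP(self.graph)
        # every node appears exactly once
        self.assertEqual(sorted(route), sorted(self.graph.nodes))
        # consecutive stops are actually connected by an edge
        for a, b in zip(route, route[1:]):
            self.assertIn(b, self.graph.neighbors(a))

    def testRouteIsGoodEnough(self):
        route = myHeuristicTSP(self.graph)
        # made-up requirement: at most 10% longer than the known optimum
        self.assertLessEqual(tourLength(self.graph, route),
                             1.10 * self.optimalLength)

    def testRuntimeIsAcceptable(self):
        start = time.perf_counter()
        myHeuristicTSP(self.graph)
        elapsed = time.perf_counter() - start
        if elapsed > 1.0:  # runtime is fuzzy, so warn rather than fail
            warnings.warn("heuristic took %.2fs, expected < 1s" % elapsed)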
---
Most optimization algorithms (including heuristics) work on some configuration (in your example, a route) by applying operations to it. The operations themselves should guarantee that they deliver only valid configurations, so first there should be unit tests for each of them. When you know for sure the optimization algorithm uses only those operations, there is typically no need for a separate validity test of the algorithm’s result.
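For example, a 2-opt move (a standard TSP operation; this twoOptSwap is only an illustrative sketch, not the asker’s code) can be unit tested on its own to confirm it never produces an invalid configuration:

import unittest

def twoOptSwap(route, i, k):
    # classic 2-opt move: reverse the segment between positions i and k
    return route[:i] + route[i:k + 1][::-1] + route[k + 1:]

class TestTwoOptSwap(unittest.TestCase):
    def testAlwaysYieldsAValidPermutation(self):
        route = [0, 1, 2, 3, 4, 5]
        for i in range(len(route)):
            for k in range(i + 1, len(route)):
                result = twoOptSwap(route, i, k)
                # same cities, each exactly once: still a valid configuration
                self.assertEqual(sorted(result), sorted(route))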
To create good unit tests for any kind of more complex algorithm, one actually has to know the algorithm itself in detail. For simple heuristics like “hill climbing” you can typically predict the outcome for small inputs. For example, for initial routes of 3 to 5 points given in a certain order, you can predict what will happen. This stays true for most deterministic heuristic algorithms I know of, so that’s probably a good place to start.
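As an illustration, here is such a hand-checkable test against a simple nearest-neighbour heuristic (the heuristic and its distance table are invented stand-ins; the same idea carries over to hill climbing on 3 to 5 points):

import unittest

def nearestNeighbourTSP(distances, start):
    # greedy stand-in heuristic: always move to the closest unvisited city
    route, current = [start], start
    unvisited = set(distances) - {start}
    while unvisited:
        current = min(unvisited, key=lambda city: distances[current][city])
        route.append(current)
        unvisited.remove(current)
    return route

class TestSmallInputs(unittest.TestCase):
    def testThreeCitiesVisitedInPredictedOrder(self):
        # hand-checkable instance: from A the nearest city is B, then only C remains
        distances = {
            "A": {"B": 1, "C": 5},
            "B": {"A": 1, "C": 2},
            "C": {"A": 5, "B": 2},
        }
        self.assertEqual(nearestNeighbourTSP(distances, "A"), ["A", "B", "C"])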
For more complex algorithms and bigger inputs, if you just feed the input into the algorithm and try to check the output, you are not doing a unit test anymore; you are doing an acceptance or integration test. The reason why it is hard to “unit test” such an algorithm is that it typically consists of a handful of smaller parts (individual units). So to really unit test such an algorithm, you will have to identify those parts and test them individually. Additionally, you can use code-coverage or branch-coverage techniques to make sure you have enough test cases.
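For the coverage part, a typical coverage.py invocation over a pytest-discoverable suite would be (assuming both packages are installed):

coverage run --branch -m pytest
coverage report -m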
If you are not looking for unit tests, but for automated acceptance or integration tests, you can try what @Phillip suggested under (2) or (3).