I’m used to working out the Landau (Big O, Theta…) notation of my algorithms by hand to make sure they are as optimized as they can be, but when the functions get really big and complex, doing it by hand takes way too much time. It’s also prone to human error.
I spent some time on Codility (coding/algorithm exercises) and noticed that it gives you the Landau notation for your submitted solution (for both time and memory usage).
I was wondering how they do that…
How would you do it?
Is there another way besides Lexical Analysis or parsing of the code?
This question concerns mainly PHP and/or JavaScript, but I’m open to any language and theory.
I was wondering how they do that… How would you do it?
I imagine that they are actually estimating the Big O measures … by running the program for different problem sizes, measuring time and space usage, and fitting curves to the results.
The problem with this approach is that it can get it wrong if the cost function changes shape as N gets large; e.g. 1000 N + N^1.5.
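A minimal sketch of that empirical approach in Python (the timed `algorithm` below is a hypothetical quadratic-time stand-in; the log-log slope trick is just one simple way to fit a power law to the measurements):

```python
import math
import timeit

# Hypothetical subject under test: a deliberately quadratic-time function.
def algorithm(n):
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

sizes = [200, 400, 800, 1600]
times = [timeit.timeit(lambda n=n: algorithm(n), number=1) for n in sizes]

# Fit T(n) ~ c * n^k by taking the slope of log T against log n between
# successive measurements; for an O(n^2) algorithm the slope approaches 2.
slopes = [
    (math.log(t2) - math.log(t1)) / (math.log(n2) - math.log(n1))
    for (n1, t1), (n2, t2) in zip(zip(sizes, times), zip(sizes[1:], times[1:]))
]
estimated_exponent = sum(slopes) / len(slopes)
print(round(estimated_exponent, 1))  # close to 2 for this function
```

Note that this inherits exactly the weakness described above: if the true cost is 1000 N + N^1.5, the measured slope will look linear until N is large enough for the N^1.5 term to dominate.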
Is there another way besides Lexical Analysis or parsing of the code?
Lexical analysis and parsing are not sufficient. You also need to do some reasoning about the behaviour of the algorithm. And doing that automatically for a previously unknown algorithm is hard.
They can’t without analysing the code.
The example below, with artificial “inflation / deflation” of complexity, proves that simply measuring program runtime is not sufficient to reliably estimate Big-O.
void lets_trick_runtime(int n) {
    if (n == 10 || n == 25 || n == 118) {
        // unfair speed-up
        do_precalculated_solution_in_constant_time(n);
        return;
    }
    if (n == 11 || n == 26 || n == 119) {
        // unfair slow-down
        do_some_fake_processing_in_n_cube_time(n);
        return;
    }
    // fair solution
    do_general_solution_in_quadratic_time(n);
}
Runtime estimation for the above would be susceptible to giving fake estimates – constant time for values of n where there is a pre-calculated solution, and cubic time for values where the unfair slow-down kicks in – instead of the “fair” quadratic time.
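To see the trick numerically, here is a small Python sketch. The `simulated_cost` function is a hypothetical cost model mirroring the C-style example above (not real measurements): depending on which values of n you happen to sample, a naive log-log fit reports constant, cubic, or quadratic growth.

```python
import math

# Hypothetical cost model mirroring the example above: constant work at the
# "lucky" inputs, cubic at the "unlucky" ones, quadratic everywhere else.
def simulated_cost(n):
    if n in (10, 25, 118):
        return 1           # pre-calculated: constant time
    if n in (11, 26, 119):
        return n ** 3      # fake processing: cubic time
    return n ** 2          # general solution: quadratic time

def fitted_exponent(samples):
    # Log-log slope between the smallest and largest sampled n.
    n1, n2 = samples[0], samples[-1]
    return (math.log(simulated_cost(n2)) - math.log(simulated_cost(n1))) / \
           (math.log(n2) - math.log(n1))

print(round(fitted_exponent([10, 118]), 2))  # fooled: looks constant (0.0)
print(round(fitted_exponent([11, 119]), 2))  # fooled: looks cubic (3.0)
print(round(fitted_exponent([12, 120]), 2))  # fair inputs: quadratic (2.0)
```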
I think that this is not possible.
If you run some tests with a fixed number of different input sizes, you can easily compute a polynomial that will approximate the runtimes you have measured very well. So you end up with a polynomial for every possible program, which would mean P = NP (yeah! 😉).
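To illustrate that point with a short Python sketch: Lagrange interpolation will fit any finite set of (size, runtime) measurements exactly, even ones produced by an exponential-time program, so an exact polynomial fit by itself tells you nothing about the true complexity class. (The measurement values below are made up for illustration.)

```python
from fractions import Fraction

# Lagrange interpolation: for any k measured (size, time) points there is a
# polynomial of degree k-1 passing through all of them exactly -- which is
# why "a polynomial fits the measurements" alone proves nothing.
def lagrange_eval(points, x):
    x = Fraction(x)
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / Fraction(xi - xj)
        total += term
    return total

# "Measurements" from an exponential-time program (T(n) = 2^n) ...
points = [(1, 2), (2, 4), (3, 8), (4, 16)]
# ... are matched exactly, at every sampled size, by a cubic polynomial.
assert all(lagrange_eval(points, n) == t for n, t in points)
```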
If you try to do it with symbolic manipulation, you end up at the halting problem. Since you can’t decide whether your program will ever stop, you can’t decide what runtime complexity it will have.
There may, however, be very special cases where the latter method is possible. But these cases may be so rare that it’s questionable whether the effort would ever pay off.
How would I do it? The way I solve almost any problem I don’t want to sit down and solve analytically: I simulate.
For many problems, it may be sufficient to run your algorithm many times using a variety of sizes, and then fit a regression curve to those results. That would quickly identify some particular “fixed” overhead costs of your algorithm (the intercept of the curve) and how it scales as your problem size increases.
Some tinkering will be needed to capture particularly complicated solutions, but especially if you’re just looking for a ball-park estimate, you should be able to obtain it that way, and see how your estimate differs from your actual results and decide if it’s an acceptable approximation.
The greatest weakness in my mind with this method is that if your algorithm scales really poorly, that initial “run it a whole bunch of times” step is going to get ugly. But frankly, if that’s the case, that alone should be an indicator that you might want to step back and reconsider things.
My intuition is that a general solution to this problem is impossible, since it would assert a priori facts about the runtime of algorithms without running them (you allude to lexical analysis). That said, a heuristic algorithm is possible for some (probably large) class of algorithms (since we do it all the time), but a general algorithm to do this would be equivalent to solving the Entscheidungsproblem, which is well known not to be possible (cf. Church, Turing, et al.). I’m ~99.9% sure of this now that I think about it…