I was asked to calculate the running time of an algorithm that finds the square root and cube root of a given number.
Is it possible to apply the Master theorem to this? First, I need to build the recurrence relation for it.
Recursive algorithm (for square root):
The cube root version is similar, with only a slight modification.
float sqrt_recursion(float n, float low, float high) {
    int mid = (low + high) / 2;
    if (Math.abs(mid * mid - n) < 0.00001) {
        return mid;
    } else {
        if ((mid * mid) > n)
            return sqrt_recursion(n, low, mid - 1);
        else
            return sqrt_recursion(n, mid + 1, high);
    }
}
Recursive algorithm (for cube root):
float cbrt_recursion(float n, float low, float high) {
    int mid = (low + high) / 2;
    if (Math.abs(mid * mid * mid - n) < 0.00001) {
        return mid;
    } else {
        if ((mid * mid * mid) > n)
            return cbrt_recursion(n, low, mid - 1);
        else
            return cbrt_recursion(n, mid + 1, high);
    }
}
The above code uses the bisection method (repeated interval halving, i.e. binary search) to approximate the square root.
My analysis:
I could not determine the exact order of growth. It does not look like O(log n) to me, because the mid value does not decrease monotonically; it moves up and down. It seems to me that it does not converge to the result as quickly as binary search does.
Is the above analysis correct?
Please help me find the exact running time.
Your current recursive code is slightly incorrect, because the type of mid is int and not float. Once corrected, a working recursive version of the algorithm is:
final static float EPSILON = 0.00001f;

static float sqrt(float n) {
    // Here, I use high = n for simplicity's sake.
    // In a real application, I'd special-case n = 0, …, 4
    // and start with high = n/2
    return recursiveSqrt(n, 0, n);
}

static float recursiveSqrt(float n, float low, float high) {
    float mid = (low + high) / 2;
    float midSquare = mid * mid;
    // This is the base case of the recursion,
    // which corresponds to a loop condition.
    if (Math.abs(midSquare - n) < EPSILON) return mid;
    // Anything that's not the base case recurses.
    // Here, we recurse into either the left or the right half
    // of the currently processed interval.
    if (midSquare > n) return recursiveSqrt(n, low, mid);
    else return recursiveSqrt(n, mid, high);
}
Now, the Master Theorem is interested not in the specific implementation of the recursive function, but in three properties:
- What is the running time at each step, excluding the recursion?
- How often do we recurse per invocation?
- How does the problem size shrink with each invocation?
Together, these define the parameters f(n), a, and b in the Master-Theorem equation

T(n) = a · T(n/b) + f(n)

that describes the complexity of some recursive functions. Note that f(n) is itself a complexity expression: it describes the per-iteration overhead, which may depend on the problem size in each iteration.
The answer to the first question is that each invocation has constant running time (recursion excluded), because arithmetic operators, comparison operators, and Math.abs() take constant time. The only way to have non-constant time is to use loops or to invoke a function or method that has non-constant time – and we don’t do any of that. Therefore, f(n) = O(1).
Next, we have to determine how often we recurse. While the code contains two invocations of the recursive function, only one of them will be executed in any iteration. So a = 1.
Finally, we have to look at the problem size. The problem size of each iteration is not our parameter n. Instead, it is the range high - low. At each iteration, you halve this range by calculating the midpoint. Therefore, b = 2. It doesn’t matter where this range is and whether an iteration picks the lower or upper half; only the size of the range matters. The initial size of the range is n.
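To make the halving concrete, here is a small iterative sketch of my own (not part of the original answer) that prints the size of the range at each step for n = 2:

public class RangeHalvingDemo {
    public static void main(String[] args) {
        float n = 2.0f;
        float low = 0.0f, high = n;  // initial range size is n
        int steps = 0;
        while (true) {
            float mid = (low + high) / 2;
            if (Math.abs(mid * mid - n) < 0.00001f) break;  // base case
            if (mid * mid > n) high = mid;  // keep the lower half
            else low = mid;                 // keep the upper half
            steps++;
            // The range size is cut in half on every step: 1, 0.5, 0.25, ...
            System.out.printf("step %2d: range = %.7f%n", steps, high - low);
        }
        System.out.println("reached the tolerance after " + steps + " steps");
    }
}

The printed range shrinks geometrically, which is exactly the n/b = n/2 shrinkage that the parameter b = 2 captures.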
Now we have determined all parameters necessary to apply the Master Theorem.
Our parameters do not match the first case, where f(n) = O(n^c) for some parameter c with c < log_b(a): here we have c = log_b(a) = 0.
However, it immediately matches the second case, where f(n) = Θ(n^c · log^k(n)) with k ≥ 0 and c = log_b(a). (The character Θ that looks like a zero is actually the Greek uppercase letter Theta.) Here, c = k = 0.
Then, the total complexity is T(n) = Θ(n^c · log^(k+1)(n)) = Θ(log n).
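To recap the whole derivation in one place (my own summary of the parameters derived above, in the notation of the equation given earlier):

T(n) = 1 · T(n/2) + Θ(1),  with a = 1, b = 2, f(n) = Θ(1) = Θ(n^0 · log^0(n))

so the second case applies with c = log_b(a) = 0 and k = 0, giving

T(n) = Θ(n^0 · log^1(n)) = Θ(log n).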
Interpretation: this method for approximating a square root is essentially a binary search over the ordered sequence of all numbers between zero and n. This sequence is discrete, and adjacent numbers in it are 2·epsilon apart. At each iteration, we have lower and upper bounds delimiting a sublist which must contain the wanted list item. Because the sequence is ordered, we look at the item in the middle of the sub-sequence and then continue searching in the left or right sublist, depending on whether the middle item was larger or smaller than the wanted item. Consequently, this algorithm must have the same time complexity characteristics as a binary search.
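As a back-of-the-envelope check of that analogy (my own arithmetic, assuming the discretization just described): the sequence contains roughly n / (2·epsilon) items, and binary search over m items takes about log2(m) probes, so

steps ≈ log2(n / (2·epsilon)) = log2(n) + log2(1 / (2·epsilon)) = log2(n) + constant = Θ(log n),

which agrees with the Master-Theorem result.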