I am trying to solve the following exercise:
Consider the following recursive mergesort algorithm (another classic divide-and-conquer algorithm). Mergesort was first described by John von Neumann in 1945. The basic idea is to divide an unsorted list m into two sublists of about half the size of the original list. Repeat this operation on each sublist, and continue until we have lists of length 1. Then, starting with sublists of length 1, "merge" pairs of sublists into single sorted lists. The code is:
Mergesort(m)
    var list left, right, result
    if length(m) <= 1
        return m
    else
        var middle = length(m) / 2
        for each x in m up to middle
            add x to left
        for each x in m after middle
            add x to right
        left = Mergesort(left)
        right = Mergesort(right)
        result = Merge(left, right)
        return result
Merge(left, right)
    var list result
    while length(left) > 0 and length(right) > 0
        if first(left) <= first(right)
            append first(left) to result
            left = rest(left)
        else
            append first(right) to result
            right = rest(right)
    if length(left) > 0
        append left to result
    if length(right) > 0
        append right to result
    return result
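For reference, here is a minimal runnable translation of the pseudocode above into Python (a sketch, assuming the split uses half-open slices; the names `mergesort` and `merge` mirror the pseudocode):

```python
def mergesort(m):
    # Lists of length 0 or 1 are already sorted.
    if len(m) <= 1:
        return m
    middle = len(m) // 2
    left = mergesort(m[:middle])    # recursively sort the first half
    right = mergesort(m[middle:])   # recursively sort the second half
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    # Repeatedly take the smaller head element from the two sorted lists.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One list is exhausted; append everything left over from the other.
    result.extend(left[i:])
    result.extend(right[j:])
    return result
```

Note the two recursive calls are independent, which is what makes them candidates for running on separate cores.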
The questions are:
- Assume that you have Y cores on a multi-core processor to run MergeSort. Assuming that Y is much smaller than length(m), express the speedup factor you might expect to obtain for values of Y and length(m). Plot these on a graph.
- Next, assume that Y is equal to length(m). How would this affect the conclusions of your previous answer? If you were tasked with obtaining the best speedup factor possible (i.e., strong scaling), explain how you might change this code to obtain it.
I tried applying Amdahl's law, but I don't know how to determine the parallelizable fraction of the code.