I have this block of code that solves LeetCode problem #1838 (Frequency of the Most Frequent Element):
def maxFrequency(nums, k):
    nums.sort()
    res, curSum = 0, 0
    l = 0
    for r in range(len(nums)):
        total = nums[r] * (r - l + 1)  # the goal: make every element in the window equal nums[r]
        curSum += nums[r]              # what the window currently sums to
        # shrink from the left while reaching the goal would take more than k operations
        while total - curSum > k:
            curSum -= nums[l]
            l += 1
            total = nums[r] * (r - l + 1)
        res = max(res, r - l + 1)
    return res
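
For reference, here is how I run it locally (I pulled the method out of the LeetCode Solution class into a standalone function, and the sample input should be the problem's first example):

print(maxFrequency([1, 2, 4], 5))  # expected output: 3
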
Discussion boards say this approach runs in O(n log n) because it is bottlenecked by sorting the nums array, but I do not understand the calculation.
I assumed that the for loop runs in O(n) and that the inner while loop also has a worst-case runtime of O(n). Wouldn't the program's overall time complexity then be O(n log n) + [O(n) * O(n)] = O(n log n) + O(n^2) = O(n^2)? My question is why the two loops' combined runtime is O(n) rather than O(n^2). I would really appreciate it if someone could explain this to me!
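
To sanity-check my reasoning, I instrumented the loop to count the total number of times the while body executes (count_inner_steps and the steps counter are things I added purely for measurement; they are not part of the solution):

import random

def count_inner_steps(nums, k):
    # same sliding window as above, but tallies every execution of the while body
    nums.sort()
    curSum, l, steps = 0, 0, 0
    for r in range(len(nums)):
        curSum += nums[r]
        while nums[r] * (r - l + 1) - curSum > k:
            curSum -= nums[l]  # shrink the window from the left
            l += 1
            steps += 1         # one unit of inner-loop work
    return steps

nums = [random.randint(1, 10**5) for _ in range(10**4)]
print(count_inner_steps(nums, 1), len(nums))

In every run I try, steps never exceeds len(nums), which matches the claimed O(n) for the two loops combined, but I don't see how to argue that formally.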