When comparing the magnitudes of complex numbers (essentially sqrt(real² + imag²)) to find the largest absolute values, it suffices to compare the squared absolute values instead, since the square root is monotonic; skipping the slow sqrt() should make the comparison faster.
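As a quick sanity check of that claim (a toy snippet of my own, not part of the benchmark below), the index of the largest element comes out the same either way:

import cupy as cp

# Toy check: argmax of the squared magnitudes matches argmax of the
# magnitudes, because sqrt preserves ordering.
z = cp.random.random(1000) + 1j * cp.random.random(1000)
assert int(cp.argmax(cp.abs(z) ** 2)) == int(cp.argmax(cp.abs(z)))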
How can this be done efficiently with CuPy, or otherwise? I also did some benchmarking with the code below. (Side note: I don't know what to make of the CPU times reported by benchmark(), since they don't seem realistic, whereas the NumPy version outside the benchmark gives believable timings.)
Comparing the abs_only() version (4381.648 us) with the abs_sq_temp() version, which calculates the absolute value, stores it in a temporary, and then squares it (4744.022 us), the squaring on the GPU adds a mere 362 us. So it seems plausible that an efficient absolute-square could take about twice that, or even closer to 362 us in total, if the real and imaginary parts could be squared concurrently in place and then added. But how do I do that in CuPy?
It is regrettable that such a "square of the absolute value of a complex number" function is not already included in CuPy or NumPy. The implementation would (likely) be identical to np.absolute() and cp.absolute(), except without the final square root, and it should therefore be faster than abs().
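The closest thing I can think of is writing my own fused kernel, roughly like the sketch below (the name abs_sq_kernel is my own, and I haven't verified that it is actually faster or that this is the idiomatic way). As far as I understand, the complex type inside an ElementwiseKernel exposes thrust-style .real() and .imag() member functions:

import cupy as cp

# A guess at a fused |z|^2 kernel: square real and imaginary parts and add,
# with no sqrt taken anywhere.
abs_sq_kernel = cp.ElementwiseKernel(
    'complex128 z',   # input: double-precision complex
    'float64 out',    # output: real-valued squared magnitude
    'out = z.real() * z.real() + z.imag() * z.imag()',
    'abs_sq_kernel'   # kernel name (my own choice)
)

z = cp.random.random(1000) + 1j * cp.random.random(1000)
abs_sq = abs_sq_kernel(z)   # should equal cp.abs(z)**2, hopefully faster

Is something along these lines the recommended approach, or is there a built-in that already fuses this?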
Below is the benchmark code:
import cupy as cp
from cupyx.profiler import benchmark
import time
import numpy as np
# Generate a large complex array
arr = cp.random.random(10000000) + 1j * cp.random.random(10000000)
def abs_only():
    return cp.absolute(arr)

def abs_sq():
    return cp.absolute(arr)**2

def abs_sq_temp():
    temp = cp.absolute(arr)
    return temp*temp

def conj():
    return arr*cp.conj(arr)

def real_imag():
    return cp.real(arr)**2 + cp.imag(arr)**2
# making benchmarks
bench0 = benchmark(abs_only, n_repeat=20)
bench1 = benchmark(abs_sq, n_repeat=20)
bench2 = benchmark(abs_sq_temp, n_repeat=20)
bench3 = benchmark(conj, n_repeat=20)
bench4 = benchmark(real_imag, n_repeat=20)
print(bench0)
print(bench1)
print(bench2)
print(bench3)
print(bench4)
# sanity check with numpy gives much longer, more realistic time for CPU
arr2 = np.random.random(10000000) + 1j * np.random.random(10000000)
start_time = time.time()
plain_abs_numpy = np.abs(arr2)
time_plain_abs = time.time() - start_time
print(f"nOutside the benchmark() function, CPU takes {time_plain_abs*1e6:.3f} us with np.abs()")
''' The results:
abs_only : CPU: 19.378 us +/- 3.636 (min: 15.710 / max: 30.327) us GPU-0: 4381.648 us +/- 292.389 (min: 4261.536 / max: 5529.536) us
abs_sq : CPU: 59.606 us +/- 15.576 (min: 51.167 / max: 126.600) us GPU-0: 21369.104 us +/- 180.435 (min: 21300.129 / max: 22085.632) us
abs_sq_temp : CPU: 30.086 us +/- 2.906 (min: 27.081 / max: 36.389) us GPU-0: 4744.022 us +/- 38.082 (min: 4694.080 / max: 4829.056) us
conj : CPU: 32.216 us +/- 3.663 (min: 27.743 / max: 40.467) us GPU-0: 7396.042 us +/- 60.760 (min: 7289.728 / max: 7508.256) us
real_imag : CPU: 110.060 us +/- 20.521 (min: 99.348 / max: 195.420) us GPU-0: 38408.486 us +/- 211.628 (min: 38300.770 / max: 39266.209) us
Outside the benchmark() function, CPU takes 32956.123 us with np.abs()'''