I would like to evaluate the following weighted sum of functions inside a CUDA kernel, with coefficients and functions as given:
y = a1*f1(x) + a2*f2(x) + a3*f3(x) + a4*f4(x);
My first idea is to loop over the four terms and use a switch statement:
float y = 0;
for (int i = 0; i < 4; i++) {
    if (a[i] != 0) {
        switch (i) {
            case 0: y += a[i] * f1(x); break;
            case 1: y += a[i] * f2(x); break;
            case 2: y += a[i] * f3(x); break;
            case 3: y += a[i] * f4(x); break;
        }
    }
}
The above solution seems very basic and slow when the number of terms in the equation can exceed 30. Now I am thinking of creating an array of functions to remove the switch statement:
float y = 0;
for (int i = 0; i < 4; i++) {
    if (a[i] != 0) {
        y += a[i] * f[i](x);
    }
}
All threads share the same a[i]. Each function f[i] is itself another polynomial with 1–8 terms.
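Something along these lines is what I have in mind (an untested sketch; f1–f4 are placeholders for the real terms, and the table is built inside device code):

// Sketch only (untested): a device-side function-pointer table.
typedef float (*term_fn)(float);

__device__ float f1(float x) { return x; }            // placeholder bodies
__device__ float f2(float x) { return x * x; }
__device__ float f3(float x) { return x * x * x; }
__device__ float f4(float x) { return x * x * x * x; }

__device__ float eval_terms(const float* a, float x)
{
    const term_fn f[4] = { f1, f2, f3, f4 };           // table built in device code
    float y = 0;
    for (int i = 0; i < 4; i++) {
        if (a[i] != 0) {
            y += a[i] * f[i](x);                       // indirect call per term
        }
    }
    return y;
}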
Can this be optimized?
Update 1:
I have not yet written the code; I have only worked out the polynomial. I can share part of it here. The actual equation looks like this:
z = a1*f1(x,y) + a2*f2(x,y) + a3*f3(x,y) + a4 * f4(x,y)
x, y are the inputs that vary across the threads; the coefficients a1, a2, a3, a4 are constants.
f1(x,y) = 6*x*y ;
f2(x,y) = 2*x*(4*x^2 + 4*y^2 - 3) + 16*x*y^2 ;
f3(x,y) = 10*y^2*(3*x^2 - y^2) - 2*y^2*(5*x^2 + 5*y^2 - 4) + (3*x^2 - y^2)*(5*x^2 + 5*y^2 - 4) ;
f4(x,y) = y*(60*y + 210*y*(x^2 + y^2)^2 - 240*y*(x^2 + y^2)) - 60*(x^2 + y^2)^2 + 35*(x^2 + y^2)^3 + 30*x^2 + 30*y^2 - 4 ;
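For reference, a direct (untested) device implementation of these four terms and the weighted sum would look something like this:

// Direct, unsimplified evaluation of the four sample terms (untested sketch).
__device__ float f1(float x, float y) { return 6.0f*x*y; }
__device__ float f2(float x, float y) {
    return 2.0f*x*(4.0f*x*x + 4.0f*y*y - 3.0f) + 16.0f*x*y*y;
}
__device__ float f3(float x, float y) {
    const float s = 3.0f*x*x - y*y;
    const float r = 5.0f*x*x + 5.0f*y*y - 4.0f;
    return 10.0f*y*y*s - 2.0f*y*y*r + s*r;
}
__device__ float f4(float x, float y) {
    const float r2 = x*x + y*y;
    return y*(60.0f*y + 210.0f*y*r2*r2 - 240.0f*y*r2)
         - 60.0f*r2*r2 + 35.0f*r2*r2*r2 + 30.0f*x*x + 30.0f*y*y - 4.0f;
}

__device__ float z_of(float x, float y, const float a[4])
{
    return a[0]*f1(x,y) + a[1]*f2(x,y) + a[2]*f3(x,y) + a[3]*f4(x,y);
}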
Looking at your first few sample functions, my first reaction is that what you actually want to do is compute the polynomial-in-x coefficients once for each problem, entirely outside the CUDA code, and hand CUDA a clean set of numerical polynomial coefficients to evaluate, effectively turning the whole thing inside out.
Basically, instead of an array of functions, which will be clumsy to call and glacially slow due to the conditionals inside the loop, you have a 2D table of coefficients for the various 30 polynomials as functions of y, and you combine them with the weights a[i] into the working set of coefficients for CUDA to evaluate.
It may help to compute these coefficients in double precision and only round down to floats for handover to CUDA. I have concerns that the float mantissa may not offer sufficient precision for your needs if |x| >> 1.
A word of caution: float precision may not be sufficient to compute your function if there are terms that cancel and ominously large coefficients on the higher powers of x (and y). You might want to verify that float evaluation is accurate enough before investing time in making the code CUDA friendly.
If I have understood your statement of the problem, you ultimately have a polynomial in x to evaluate for many x (beware the radius of convergence). That polynomial P(x) depends on the (fixed?) a[n] and on another variable y.
You state that the first 4 such polynomials are:
f1(x,y) = 6*x*y ;
f2(x,y) = 2*x*(4*x^2 + 4*y^2 - 3) + 16*x*y^2 ;
f3(x,y) = 10*y^2*(3*x^2 - y^2) - 2*y^2*(5*x^2 + 5*y^2 - 4) + (3*x^2 - y^2)*(5*x^2 + 5*y^2 - 4) ;
f4(x,y) = y*(60*y + 210*y*(x^2 + y^2)^2 - 240*y*(x^2 + y^2)) - 60*(x^2 + y^2)^2 + 35*(x^2 + y^2)^3 + 30*x^2 + 30*y^2 - 4 ;
These need simplifying algebraically to their canonical forms (unless there is a specific reason, which I can't see, why the terms have to be evaluated like that). Optimising compilers won't alter the stated bracketing order of evaluation (unless you enable unsafe fast-math optimisations, which is risky).
In canonical form, f2, f3 and f4 become Taylor series in x as follows (from Maxima):
f2(x) = 2*x*(12*y^2-3+4*x^2)
f3(x) = 12*y^2-25*y^4+(30*y^2-12)*x^2+15*x^4
f4(x) = -4+90*y^2-300*y^4+245*y^6+(525*y^4-360*y^2+30)*x^2+(315*y^2-60)*x^4+35*x^6
Basically what I suggest is that you compute the coefficients of the whole function, term by term for a given y, into an array of coefficients. I'll sketch it out with these terms as an example and coefficients a1, a2, a3, a4:
// fn[] = { 1, x, x^2, x^3, x^4, ... } polynomial coefficients in powers of x
f1[] = { 0, 6*y, 0, 0, ... };
f2[] = { 0, 24*y^2 - 6, 0, 8, ... };
f3[] = { y^2*(12 - 25*y^2), 0, 30*y^2 - 12, 0, 15, ... };
f4[] = { -4 + 90*y^2 - 300*y^4 + 245*y^6, 0, 525*y^4 - 360*y^2 + 30, 0, 315*y^2 - 60, 0, 35, 0, ... };

for (int i = 0; i < NPOLY; i++)
    mypoly[i] = a1*f1[i] + a2*f2[i] + a3*f3[i] + a4*f4[i];
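The per-thread work in CUDA then reduces to a single Horner evaluation of mypoly[] (a minimal sketch; the function name and the way the coefficients reach the kernel are up to you):

// Horner evaluation of the combined coefficient set (sketch only).
// `poly` points to the NPOLY coefficients built above for the current y;
// each thread evaluates the same polynomial at its own x.
__device__ float eval_poly(const float* poly, int npoly, float x)
{
    float p = poly[npoly - 1];
    for (int i = npoly - 2; i >= 0; i--) {
        p = p * x + poly[i];    // Horner's rule: no powers, no branches
    }
    return p;
}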
There may be other go-faster tweaks you can do if odd terms only occur occasionally in the low orders and the majority of the functions are predominantly odd or even. In other words, rather than evaluating P(x) with every alternate term zero, compute a half-length polynomial P'(x*x). It may even be worth splitting the functions into those with mixed terms, pure odd and pure even, grouping like with like. Do check that the precision obtained meets your requirements, though!
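A minimal sketch of that last idea, assuming a purely even function whose non-zero coefficients c[0..n-1] multiply x^0, x^2, x^4, ...:

// Evaluate c[0] + c[1]*x^2 + c[2]*x^4 + ... as a polynomial in x*x:
// half the coefficients and half the multiply-adds per point (sketch only).
__device__ float eval_even(const float* c, int n, float x)
{
    const float xx = x * x;
    float p = c[n - 1];
    for (int i = n - 2; i >= 0; i--) {
        p = p * xx + c[i];
    }
    return p;   // for a purely odd function, multiply the analogous result by x
}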
Lucky for you, it is very easy to measure performance on the GPU:
const auto StartTime = clock64();
// ... do stuff ...
const auto EndTime = clock64();
const auto Diff = int(EndTime - StartTime);
if (threadIdx.x == 0) { printf("time = %i clock cycles\n", Diff); }
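For completeness, a minimal (untested) way to wrap that snippet in a kernel; timed_kernel and its parameters are placeholders:

#include <cstdio>

// Minimal timing harness (sketch). The "do stuff" body stands for the code being measured.
__global__ void timed_kernel(float* out, float x)
{
    const long long StartTime = clock64();

    float y = 0.0f;
    // ... do stuff: evaluate the polynomial at x, accumulate into y ...

    const long long EndTime = clock64();
    if (threadIdx.x == 0 && blockIdx.x == 0) {
        printf("time = %lld clock cycles\n", EndTime - StartTime);
    }
    out[blockIdx.x * blockDim.x + threadIdx.x] = y;   // keep the measured work live
}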
Some tips: avoid branches and avoid if statements. It is much faster to execute all the code unconditionally, because an if statement is translated into the following pseudo-code:
if x then do_y() else do_z()
->
p = bool(x)
p?  y()
p?  more y()    // only a handful of statements will be predicated
!p? z()
!p? more z()
If you have many statements inside the if, a call will be made instead, and calls are very expensive!
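To make that concrete, here is a hedged sketch (the names add_term_*, a, fx and use_term are placeholders) of the branchy pattern versus the branch-free alternatives:

// Branchy: threads in a warp may diverge when a differs between threads.
__device__ float add_term_branchy(float y, float a, float fx)
{
    if (a != 0.0f) {
        y += a * fx;
    }
    return y;
}

// Branchless: every thread does the multiply-add; a zero coefficient
// simply contributes nothing, so there is no divergence to pay for.
__device__ float add_term_branchless(float y, float a, float fx)
{
    return y + a * fx;
}

// Conditional assignment is the exception to the "no ifs" rule:
// ?: compiles to a select/predicated instruction, not a branch.
__device__ float add_term_selected(float y, float a, float fx, bool use_term)
{
    return use_term ? y + a * fx : y;
}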
If you are able to translate the function f into an array whose values can be looked up, you can alter the code so that it executes efficiently:
__shared__ float y[1024];   // assume we have 1024 threads in a block
__shared__ float f[4];
__shared__ float result[32];
// initialize f and y here; `a` and `count` come from the surrounding kernel
// this looks like a loop, but it can be optimized away if count == blockDim.x
for (int i = threadIdx.x; i < count; i += blockDim.x) {
    const int j = i % 4;    // deduce the `f` you need
    y[i] += a[j] * f[j];    // no ifs/switches needed
    // more calculations: do not use ifs, but extract the data needed
    // from a shared array, so you add 0 or multiply by 1 if you want to skip items;
    // do not divide, but multiply by a (pre-calculated) 1/x
}
// reduce 32 values within a warp into 1
const auto reduce = [&](float y1) -> float {
    y1 += __shfl_down_sync(-1u, y1, 1);
    y1 += __shfl_down_sync(-1u, y1, 2);
    y1 += __shfl_down_sync(-1u, y1, 4);
    y1 += __shfl_down_sync(-1u, y1, 8);
    y1 += __shfl_down_sync(-1u, y1, 16);
    return y1;
};
const auto y1 = reduce(y[threadIdx.x]);
// an if is fine here, because conditional assignment is the exception to
// the `do-not-use-ifs` rule; the optimizer will lift `threadIdx.x / 32`
// out of the if statement.
if ((threadIdx.x & 31) == 0) { result[threadIdx.x / 32] = y1; }
__syncthreads();
// the first warp reduces the 32 per-warp partial sums to the final answer
if (threadIdx.x < 32) {
    const auto final_result = reduce(result[threadIdx.x]);
    if (threadIdx.x == 0) { globalresult[0] = final_result; }
}
It is hard to advise you further without knowing the exact form of the functions f_i.