I’m trying to change the precision used in my code. The only method I know of is the “decimal” module, but I’m not really sure how to use it. I’m not entirely new to Python, but I’m still learning how to implement some of the fundamentals, so modules like this one are kind of new to me, and I’d really appreciate your help 🙁
For example, in my code I have the following definition:
def f(R):
    z = R - (c1 * (mHS ** 2) * ((R / (mHS ** 2)) ** nHS)) / (1 + c2 * ((R / (mHS ** 2)) ** nHS))
    return z
where all variables (except “R”) have been previously defined as constants. If I were to use the “decimal” module, aside from writing “from decimal import *” and then “getcontext().prec = 6” at the top of my code:
-
Where (in the function I defined) should I put the “Decimal()” class? Should I apply it to every single variable and number? Should I apply it only to the value my function returns (i.e. “Decimal(z)”)? Should I apply it just to the literal numbers (here only “1” and “2”)? Should I apply it to the argument I define in my function? Probably not the last one, but I hope these questions give a feeling for what I’m trying to ask.
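For instance, is something like this the right idea? (The constant values below are made up just so the snippet runs on its own; mine are different.)

from decimal import Decimal, getcontext
getcontext().prec = 6   # 6 significant digits for every operation

# made-up values, only so this example is self-contained
c1 = Decimal('1.25')
c2 = Decimal('0.84')
nHS = Decimal('2')
mHS = Decimal('0.001')

def f(R):
    # if every operand is already a Decimal, the whole expression is
    # computed as Decimals at the context precision, so nothing else
    # inside the function gets wrapped
    ratio = (R / (mHS ** 2)) ** nHS
    return R - (c1 * (mHS ** 2) * ratio) / (1 + c2 * ratio)

print(f(Decimal('0.5')))

Is wrapping the constants once at the top (and passing R in as a Decimal) enough, or do I still need “Decimal()” somewhere inside the function?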
-
The code I’m writing is a fourth-order Runge-Kutta method, so I also use these functions (defined by me) to add and multiply other things. That is, I operate with these functions and I need to maintain a set precision throughout the whole algorithm. What do I do then? I saw something about using “quantize()”, but “Decimal()” already confuses me enough that I haven’t touched “quantize()” yet.
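To make that concrete, this is roughly how I picture a single RK4 step (the stand-in f, the step size and the precision here are just placeholders):

from decimal import Decimal, getcontext
getcontext().prec = 30   # say, 30 significant digits throughout

def f(R):
    return -R   # stand-in for my actual function, just to make this runnable

def rk4_step(R, h):
    # every k is a Decimal because R, h and f's output are Decimals,
    # so all the sums and products below stay at the context precision
    k1 = f(R)
    k2 = f(R + h * k1 / 2)
    k3 = f(R + h * k2 / 2)
    k4 = f(R + h * k3)
    return R + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

R = Decimal('1.0')
h = Decimal('0.01')
print(rk4_step(R, h))

From what I read, “getcontext().prec” controls the significant digits of every operation, while “quantize()” rounds a single result to a fixed exponent (e.g. a fixed number of decimal places), so maybe I don’t need “quantize()” inside the loop at all? Is that right?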
-
Aside from the last point, I want to avoid the whole “0.1 + 0.2 = 0.30000000000000004” situation in Python (which, as far as I understand, “Decimal” takes care of?). I also wouldn’t like to carry extra digits after operations are computed: e.g. always get a number with 64 “decimals” and truncate everything beyond that after each operation. Maybe 64 is a lot, but imagine a precision higher than Python’s default double precision, with the extra digits truncated (I’m aware that higher precision will slow down the time it takes to run my code). Again, I kind of understand that “Decimal” takes care of this? I just don’t know how to use it correctly.
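For this last point, this little test is what I have in mind (again, the precision value and the rounding choice are just my guesses):

from decimal import Decimal, getcontext, ROUND_DOWN
getcontext().prec = 64               # 64 significant digits
getcontext().rounding = ROUND_DOWN   # truncate instead of round, if that is what I want

print(0.1 + 0.2)                        # 0.30000000000000004 with ordinary floats
print(Decimal('0.1') + Decimal('0.2'))  # 0.3
print(Decimal(0.1))                     # building a Decimal from a float keeps the float's error

One thing I noticed in the docs is that “prec” counts significant digits, not digits after the decimal point, so “64 decimals” isn’t exactly what “prec = 64” gives me; please correct me if I got that wrong.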
I know some of these questions (if not all) might sound dumb, but I’d really appreciate your help.
I have checked the documentation, but I’m not really sure how to apply it specifically to my problem.