I'm experimenting with the fixed-point datatypes offered by the SystemC library. With a <24,11> or <24,10> configuration (11 or 10 integer bits and the remaining bits for the fractional part, e.g. 13 fractional bits in the case of <24,11>), I gain a bit of accuracy. But with a <32,16> configuration, I lose a bit of accuracy. Normally, higher precision means higher accuracy, so I don't understand why I lose accuracy in this case. Can someone explain why?
Also, I measure the accuracy against a reference result computed with the float datatype.
When I try <32,10>, I don't lose any accuracy.
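For reference, here is a minimal sketch of how I compare the configurations against the float reference. The reference value and the direct quantize-and-compare setup are just an illustration of my measurement approach, not my actual algorithm; `sc_dt::sc_fixed<W, IW>` takes the total word length `W` and the integer word length `IW`, and `SC_INCLUDE_FX` must be defined before including the SystemC header to enable the fixed-point types.

```cpp
#define SC_INCLUDE_FX   // enable SystemC fixed-point types
#include <systemc.h>
#include <cmath>
#include <iostream>

int sc_main(int, char*[]) {
    // Hypothetical reference value computed in float
    float ref = 123.456789f;

    // The configurations under test: <W, IW> = total bits, integer bits
    sc_dt::sc_fixed<24, 11> a = ref;   // 13 fractional bits
    sc_dt::sc_fixed<32, 16> b = ref;   // 16 fractional bits
    sc_dt::sc_fixed<32, 10> c = ref;   // 22 fractional bits

    // Absolute error of each quantized value vs. the float reference
    std::cout << "error <24,11>: " << std::fabs(ref - a.to_double()) << "\n";
    std::cout << "error <32,16>: " << std::fabs(ref - b.to_double()) << "\n";
    std::cout << "error <32,10>: " << std::fabs(ref - c.to_double()) << "\n";
    return 0;
}
```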