I have a task to analyze the effects of quantization in the Madgwick and Mahony filters, two orientation estimation algorithms.

The Madgwick filter uses a gradient-descent optimisation technique that adjusts the orientation quaternion (which represents the rotation from the IMU frame to the Earth frame) in small steps to minimize an error function. By iteratively updating the quaternion along the gradient of the error function, the algorithm converges towards the optimal orientation estimate. The Mahony filter, on the other hand, uses a proportional-integral (PI) controller to cope with gyroscopic drift.

When I implement both with the sc_fixed datatype and reduce the number of bits, Mahony performs far worse than Madgwick. Can anyone please explain why that is?
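For context, here is a minimal sketch of the kind of sc_fixed integral-feedback accumulation I mean in the Mahony update. This is not my actual code; the bit widths, gains and error values are only illustrative:

```cpp
// Illustrative sketch only: a Mahony-style integral feedback term
// accumulated in a SystemC fixed-point type. Bit widths, gains and
// error values are placeholders, not my real implementation.
#define SC_INCLUDE_FX        // enable SystemC fixed-point types
#include <systemc.h>
#include <iostream>

int sc_main(int, char*[]) {
    // sc_fixed<W, I>: W total bits, I integer bits -> LSB = 2^(I-W)
    typedef sc_dt::sc_fixed<12, 2> fx_t;   // 12-bit word, 2 integer bits

    fx_t ki    = 0.1;    // integral gain
    fx_t dt    = 0.01;   // sample period [s]
    fx_t err   = 0.005;  // a small attitude-error component
    fx_t integ = 0.0;    // integral feedback accumulator

    for (int k = 0; k < 100; ++k) {
        // ki * err * dt is far below the LSB of fx_t here, so the
        // quantized increment is zero and the accumulator never grows.
        integ += ki * err * dt;
    }

    std::cout << "integral term after 100 steps: "
              << integ.to_double() << std::endl;
    return 0;
}
```

With a word length this narrow, the per-step increment in the sketch falls below the LSB and the accumulator stays at zero; this is the kind of behaviour I am seeing and trying to understand when I shrink the bit width.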