Inside the `KrylovJacobian` class from SciPy, there is this method:
```python
def _update_diff_step(self):
    mx = abs(self.x0).max()
    mf = abs(self.f0).max()
    self.omega = self.rdiff * max(1, mx) / max(1, mf)
```
which would be the same as:

$$\omega = \text{rdiff} \cdot \frac{\max(1,\; \max_i |x_{0,i}|)}{\max(1,\; \max_i |f_{0,i}|)}$$
This sets the step size that the finite-difference approximation uses; however, I cannot find the origin of this expression, or why it works.
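For context, here is a minimal sketch of how such a relatively scaled step can be used in a matrix-free Jacobian-vector product (this is an illustration of the general technique, not SciPy's exact implementation; the function name and signature are my own):

```python
import numpy as np

def approx_jacobian_vector_product(func, x0, f0, v, rdiff=1e-6):
    """Approximate J(x0) @ v by a forward difference, with the step
    scaled as in the snippet above (illustrative sketch only)."""
    mx = np.abs(x0).max()
    mf = np.abs(f0).max()
    # Relative step: grows with the magnitude of x0, shrinks with f0,
    # but is never driven to zero thanks to the max(1, ...) guards.
    omega = rdiff * max(1, mx) / max(1, mf)
    nv = np.linalg.norm(v)
    if nv == 0:
        return np.zeros_like(x0)
    sc = omega / nv  # scale v so the actual perturbation has norm omega
    return (func(x0 + sc * v) - f0) / sc

# Example: f(x) = x**2 elementwise, so J = diag(2*x)
f = lambda x: x**2
x0 = np.array([1.0, 2.0, 3.0])
v = np.array([1.0, 0.0, 0.0])
jv = approx_jacobian_vector_product(f, x0, f(x0), v)
# jv is close to [2.0, 0.0, 0.0]
```

The intuition seems to be that the perturbation should be small relative to the scale of `x0` but large enough that the resulting change in `f` rises above floating-point noise, which is why the ratio of the two magnitudes appears.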
Does anybody know the source of this method or the reasoning behind it?