It is known that np.sum(arr) is noticeably slower than arr.sum(). For example:
<code>import numpy as np
np.random.seed(7)
A = np.random.random(1000)
%timeit np.sum(A)
2.94 µs ± 13.8 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
%timeit A.sum()
1.8 µs ± 40.8 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
</code>
Can anyone give a detailed code-based explanation of what np.sum(arr) is doing that arr.sum() is not?
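One observable difference I've found so far (not sure if it's the whole story): np.sum appears to be a Python-level wrapper that accepts any array-like and converts it before reducing, whereas arr.sum is a bound method on an existing ndarray, so there seems to be an extra dispatch/conversion layer in the function call:

<code>import numpy as np

# np.sum accepts arbitrary array-likes, so it must first coerce its
# argument to an ndarray before delegating to the actual reduction.
print(np.sum([1, 2, 3]))   # works on a plain Python list -> 6

# The bound method skips that layer: the array already exists,
# and the reduction runs directly on it.
A = np.arange(5)
print(A.sum())             # -> 10

# A plain list has no .sum() method at all, which shows the wrapper
# is doing real extra work (coercion + dispatch) per call.
print(hasattr([1, 2, 3], 'sum'))  # -> False
</code>

Is that per-call wrapper overhead the main cost, or is there more going on inside numpy's dispatch machinery?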