I am working with image data, where each image is represented as a numpy array of shape (1404, 1404, 3) and dtype np.uint8 (RGB values from 0 to 255).
I am wondering about the difference in speed between these two lines of code:
normalized_image = image.astype("float32") / 255.0
normalized_image = image / np.float32(255)
After testing these lines on 100 different images, I got the following times for the first line of code:
As for the second line of code:
Why is the second line of code so much faster? Aren't they both converting the image from uint8 to float32 and performing a vectorized division?
I tried other orders of operations, but this pair showed the biggest difference. Can any numpy experts explain?
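For reference, a minimal sketch to reproduce the comparison (the random image here is a stand-in for the real data, and the comments state my current understanding, not a confirmed explanation): the first version makes two full passes and two float32 allocations (one for astype, one for the division result), while the second does the cast and the division in a single ufunc call with one output allocation.

```python
import timeit

import numpy as np

# Stand-in for one of the real images described above.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(1404, 1404, 3), dtype=np.uint8)

# Version 1: astype() allocates a full float32 copy, then the
# division allocates a second float32 array -- two passes over the data.
t1 = timeit.timeit(lambda: image.astype("float32") / 255.0, number=100)

# Version 2: one ufunc call converts uint8 -> float32 and divides in
# a single buffered pass, allocating only the one output array.
t2 = timeit.timeit(lambda: image / np.float32(255), number=100)

print(f"astype then divide: {t1:.3f} s for 100 runs")
print(f"single division:    {t2:.3f} s for 100 runs")
```

Both expressions produce identical float32 results; only the number of intermediate arrays differs.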