Overview:
- I want to visualize the pixel value distribution of an image whose pixel values have been normalized to the range [0, 1], using cv2.calcHist() to obtain the histogram (a sketch of the normalization follows below).
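For context, the normalization is a simple min-max rescale; this is only a sketch, with raw as a hypothetical stand-in for the unnormalized source image (my real loading code lives elsewhere):

import numpy as np

# Hypothetical stand-in for the unnormalized image, min-max scaled into
# [0, 1] and cast to float32 (cv2.calcHist does not accept float64 input).
raw = np.random.default_rng(0).integers(0, 256, size=(2048, 2048))
array = ((raw - raw.min()) / (raw.max() - raw.min())).astype(np.float32)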
Code:
import cv2
import numpy as np

def create_histogram(array: np.ndarray):
    # Confirm the input really is normalized to [0, 1].
    array_min, array_max = np.min(array), np.max(array)
    print(f"Array: min {array_min} . . . max {array_max}")
    # Compute a 100-bin histogram over the value range [0, 1].
    hist = cv2.calcHist([array], channels=[0], mask=None, histSize=[100], ranges=[0, 1])
    histo_min, histo_max = np.min(hist), np.max(hist)
    print(f"Histogram: min {histo_min} . . . max {histo_max}")
    return hist
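Calling it on the normalized array from the sketch above:

hist = create_histogram(array)
print(hist.shape)   # (100, 1): one count per histogram bin
print(hist.dtype)   # float32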
Output:
Array: min 0.0 . . . max 1.0
Histogram: min 0.0 . . . max 4152712.0
Comprehension Check:
From my understanding, the original array is correctly passed to the function and read as it should be, with a minimum pixel value of 0 and a maximum of 1.
(1) What I do not understand is why the histogram has a maximum value that does not correspond to the image's maximum value.
(2) If I have set the range to be bounded by [0, 1] in the cv2.calcHist() call, why does the plotted histogram show an x-axis that extends beyond 4?
(3) Finally, I do not understand where the '1e7' is coming from in the histogram when it is plotted [Fig. 1].
If it is of relevance, this is the line of code that is used to generate the histogram:
plt.hist(histo_dict[k], bins=100, color='deeppink', edgecolor='black', alpha=0.2)
I tried to create a histogram using cv2.calcHist(), but it is not producing the results I expected.
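For completeness, a stripped-down version of the plotting step; histo_dict here is a hypothetical stand-in for the dictionary in my larger script (keys identify images, values are the cv2.calcHist() outputs):

import matplotlib.pyplot as plt

# Hypothetical single-entry dictionary built from the calcHist output above.
histo_dict = {"image_0": hist}

for k in histo_dict:
    plt.hist(histo_dict[k], bins=100, color='deeppink', edgecolor='black', alpha=0.2)
plt.show()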