When running the selfie segmentation demo (https://ai.google.dev/edge/mediapipe/solutions/vision/image_segmenter),
simple selfie segmentation seems to fail to tell the man's head apart from the chair. Multiclass selfie segmentation works much better, so I want to use that model instead. However, it seems that the mediapipe library does not expose any functions for it.
import cv2
import mediapipe as mp
import numpy as np

# image is expected to be an RGB frame, e.g. cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
mp_human_seg = mp.solutions.selfie_segmentation.SelfieSegmentation()
result_selfie = mp_human_seg.process(image)
self_seg = np.array(result_selfie.segmentation_mask)
# print(np.max(self_seg), np.min(self_seg), type(self_seg))
self_seg[self_seg < 0.5] = 0   # zero out pixels with low person confidence
cv2.imshow('segMask', self_seg)
cv2.waitKey(1)
The picture below is the result of running this code. I marked the part of the chair that is classified as human body. In the multiclass selfie segmentation demo, that area is clearly eliminated.
I do not want to install the tensorflow module, only mediapipe, since tensorflow makes the program heavy. Does anyone have experience implementing multiclass selfie segmentation with mediapipe?
I downloaded the published weight file.
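Based on the image_segmenter page linked above, my understanding is that the MediaPipe Tasks ImageSegmenter can load this .tflite directly, without TensorFlow. Below is a rough sketch of what I am trying; the file name selfie_multiclass_256x256.tflite, the test image name, and the assumption that class 0 is the background come from my reading of the model card, so please correct me if any of that is wrong.

# Sketch using the MediaPipe Tasks ImageSegmenter (no TensorFlow needed).
# Assumption: the downloaded weights are saved as selfie_multiclass_256x256.tflite
# next to this script, and 'selfie.jpg' is a local test image.
import cv2
import mediapipe as mp
import numpy as np
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

base_options = python.BaseOptions(model_asset_path='selfie_multiclass_256x256.tflite')
options = vision.ImageSegmenterOptions(base_options=base_options,
                                       output_category_mask=True)

with vision.ImageSegmenter.create_from_options(options) as segmenter:
    mp_image = mp.Image.create_from_file('selfie.jpg')
    result = segmenter.segment(mp_image)

    # category_mask holds one class index per pixel (uint8)
    category_mask = result.category_mask.numpy_view()

    # keep every pixel that is not background (class 0, per my reading of the model card)
    person_mask = (category_mask != 0).astype(np.uint8) * 255
    cv2.imshow('multiclassMask', person_mask)
    cv2.waitKey(0)

Is this the right way to run the multiclass model with mediapipe alone, or is there a lighter approach?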