Environment:
Ubuntu 20.04
MediaPipe 0.10.11

I am trying to build the face landmark pipeline. I modified
mediapipe-0.10.11/mediapipe/modules/face_landmark/face_landmark_gpu.pbtxt

Modified face_landmark_gpu.pbtxt:
type: "FaceLandmarkGpu"
# GPU image. (GpuBuffer)
input_stream: "IMAGE:image"
# ROI (region of interest) within the given image where a face is located.
# (NormalizedRect)
input_stream: "ROI:roi"
# Whether to run face mesh model with attention on lips and eyes. (bool)
# Attention provides more accuracy on lips and eye regions as well as iris
# landmarks.
input_side_packet: "WITH_ATTENTION:with_attention"
# 468 or 478 facial landmarks within the given ROI. (NormalizedLandmarkList)
#
# Number of landmarks depends on the WITH_ATTENTION flag. If it's `true` - then
# there will be 478 landmarks with refined lips, eyes and irises (10 extra
# landmarks are for irises), otherwise 468 non-refined landmarks are returned.
#
# NOTE: if a face is not present within the given ROI, for this particular
# timestamp there will not be an output packet in the LANDMARKS stream. However,
# the MediaPipe framework will internally inform the downstream calculators of
# the absence of this packet so that they don't wait for it unnecessarily.
output_stream: "LANDMARKS:face_landmarks"
# MediaPipe graph configuration
node {
  calculator: "GpuBufferToImageFrameCalculator"
  input_stream: "input"
  output_stream: "image_frame"
}
node {
  calculator: "ColorConvertCalculator"
  input_stream: "image_frame"
  output_stream: "image_rgb"
}
node {
  calculator: "ImageFrameToGpuBufferCalculator"
  input_stream: "image_rgb"
  output_stream: "image_gpu"
}

# Transforms the input image into a 192x192 tensor.
node: {
  calculator: "ImageToTensorCalculator"
  input_stream: "image_gpu"
  input_stream: "NORM_RECT:roi"
  output_stream: "TENSORS:input_tensors"
  options: {
    [mediapipe.ImageToTensorCalculatorOptions.ext] {
      output_tensor_width: 192

(rest of face_landmark_gpu.pbtxt unchanged; snippet truncated here)
Code:

import cv2
import mediapipe as mp

mp_drawing = mp.solutions.drawing_utils
mp_face_mesh = mp.solutions.face_mesh

file_list = ['image.png']

# For static images:
drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1)
with mp_face_mesh.FaceMesh(
    static_image_mode=True,
    min_detection_confidence=0.5) as face_mesh:
  for idx, file in enumerate(file_list):
    image = cv2.imread(file)
    # Convert the BGR image to RGB before processing.
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    # Print and draw face mesh landmarks on the image.
    if not results.multi_face_landmarks:
      continue
    annotated_image = image.copy()
    for face_landmarks in results.multi_face_landmarks:
      print('face_landmarks:', face_landmarks)
      mp_drawing.draw_landmarks(
          image=annotated_image,
          landmark_list=face_landmarks,
          # FACE_CONNECTIONS was removed in newer MediaPipe releases;
          # FACEMESH_TESSELATION (or FACEMESH_CONTOURS) replaces it.
          connections=mp_face_mesh.FACEMESH_TESSELATION,
          landmark_drawing_spec=drawing_spec,
          connection_drawing_spec=drawing_spec)
And I got the following error:
with mp_face_mesh.FaceMesh(
File "/usr/local/lib/python3.10/dist-packages/mediapipe/python/solutions/face_mesh.py", line 95, in __init__
super().__init__(
File "/usr/local/lib/python3.10/dist-packages/mediapipe/python/solution_base.py", line 235, in __init__
validated_graph.initialize(
RuntimeError: ValidatedGraphConfig Initialization failed.
ColorConvertCalculator::GetContract failed to validate:
For input streams ValidatePacketTypeSet failed:
Tag "" index 0 was not expected.
For output streams ValidatePacketTypeSet failed:
Tag "" index 0 was not expected.
ImageToTensorCalculator: ; RET_CHECK failure (mediapipe/calculators/tensor/image_to_tensor_calculator.cc:145) kIn(cc).IsConnected() ^ kInGpu(cc).IsConnected()One and only one of IMAGE and IMAGE_GPU input is expected.
It seems that the data types of connected streams must match between nodes for a MediaPipe graph configuration to validate. However, given the limited documentation on MediaPipe calculators, it is hard to work out how to resolve these data-type and stream-tag mismatch errors. More detailed guidelines or examples from the MediaPipe team would greatly help with these configuration issues.
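For reference, both error messages complain about stream tags rather than data types as such: ColorConvertCalculator rejects the untagged streams ('Tag "" index 0 was not expected'), and ImageToTensorCalculator wants its GpuBuffer under the IMAGE_GPU tag. Below is a sketch of what the tagged declarations might look like; the tag names are my reading of color_convert_calculator.cc and image_to_tensor_calculator.cc, not verified against 0.10.11, and the assumption that GpuBufferToImageFrameCalculator emits RGBA frames may not hold on every platform:

```protobuf
# The graph's input stream is "image" (declared as "IMAGE:image" above),
# so the first node should probably consume that rather than "input":
node {
  calculator: "GpuBufferToImageFrameCalculator"
  input_stream: "image"
  output_stream: "image_frame"
}
# ColorConvertCalculator only accepts tagged streams naming the pixel
# formats (e.g. RGBA_IN / RGB_OUT, GRAY_IN / RGB_OUT, ...):
node {
  calculator: "ColorConvertCalculator"
  input_stream: "RGBA_IN:image_frame"
  output_stream: "RGB_OUT:image_rgb"
}
# ImageToTensorCalculator expects exactly one of IMAGE / IMAGE_GPU:
node {
  calculator: "ImageToTensorCalculator"
  input_stream: "IMAGE_GPU:image_gpu"
  input_stream: "NORM_RECT:roi"
  output_stream: "TENSORS:input_tensors"
  # options unchanged
}
```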