I’m developing a Django application that opens the client’s camera via JavaScript using navigator.mediaDevices.getUserMedia(), sends each frame as a blob to the server over a WebSocket, and the server then processes the images with the MediaPipe Pose library.
I’m using the following three functions respectively to:
- convert a blob to an OpenCV image:

import cv2
import numpy as np

def blob2image(bytes_data):
    # Decode the raw bytes received over the WebSocket into a BGR image
    np_data = np.frombuffer(bytes_data, dtype=np.uint8)
    img = cv2.imdecode(np_data, cv2.IMREAD_COLOR)
    if img is None:
        raise ValueError("Could not decode image from bytes")
    return img
- extract the pose markers from the image:

def extractMarkers(image, pose, mp_drawing, custom_connections):
    # MediaPipe expects RGB input, while OpenCV decodes to BGR
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    results = pose.process(image)
    # Draw the detected landmarks onto the frame
    mp_drawing.draw_landmarks(
        image,
        results.pose_landmarks,
        connections=custom_connections,
    )
    return image, results
- convert the annotated image back to a base64 string:

import base64

def image2b64(img):
    # Encode the frame as JPEG, then base64 so it can be sent back over the WebSocket
    _, encoded_image = cv2.imencode('.jpg', img)
    b64_img = base64.b64encode(encoded_image).decode('utf-8')
    return b64_img
The code runs, but it keeps accumulating delay during streaming. Is there any way to drop the buffered frames or otherwise improve the performance of the code?
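By "drop the buffered frames" I mean something like keeping only the most recent frame and discarding older ones instead of queueing them, for example along these lines (an untested sketch of the idea; LatestFrameBuffer is just illustrative):

import asyncio

class LatestFrameBuffer:
    """Holds only the most recent frame; older frames are discarded."""
    def __init__(self):
        self._frame = None
        self._event = asyncio.Event()

    def put(self, frame_bytes):
        # Overwrite any pending frame instead of queueing it
        self._frame = frame_bytes
        self._event.set()

    async def get(self):
        # Wait until a frame is available, then take it
        await self._event.wait()
        self._event.clear()
        frame, self._frame = self._frame, None
        return frame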
I tried the marker-extraction code extractMarkers() locally, without the Django server, reading images from a local camera, and it runs smoothly.
I also tried using the Django application to get the images from the client, send them to the server through the WebSocket, and send back to the client only the images converted to black and white, and that also runs smoothly.
Why does merely adding the marker extraction function slow everything down and, above all, cause the buffer to accumulate frames?