I got an Intel RealSense D435 depth camera, so I wrote a simple demo, shown below, to learn how to use it:
import pyrealsense2 as rs
import numpy as np
import cv2

# Configure depth and color streams at 640x480, 30 fps
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    while True:
        # Wait for a pair of frames: depth and color
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        # Convert frames to numpy arrays and display them
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
        cv2.imshow('Depth Image', depth_image)
        cv2.imshow('Color Image', color_image)

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    pipeline.stop()
    cv2.destroyAllWindows()
But it only shows a few frames and then exits with the error shown below:
[screenshot of the runtime error]
I tried lengthening the time pipeline.wait_for_frames() waits for a frame (its default is 5000 ms), and I also tried catching the error and resetting the camera and pipeline when it occurs. That helps a little, but it isn't really usable: it takes a long time to display each frame, and most of the time the error in the screenshot above still appears.
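For reference, this is roughly the retry/reset logic I tried, reusing pipeline and config from the snippet above; the 10 s timeout, the 2 s sleep, and resetting every connected device with hardware_reset() are just my own guesses, not from an official sample:

import time

try:
    while True:
        try:
            # wait longer than the default 5000 ms for a frame set
            frames = pipeline.wait_for_frames(timeout_ms=10000)
        except RuntimeError:
            # frames didn't arrive: reset the device and restart the pipeline
            pipeline.stop()
            for dev in rs.context().query_devices():
                dev.hardware_reset()
            time.sleep(2)  # give the camera time to re-enumerate on USB
            pipeline.start(config)
            continue
        # ... same display code as in the demo above ...
finally:
    pipeline.stop()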
My CPU is an Intel i7-13650HX, so I don't think this is related to my laptop's computing power.