I’m working on a solution that needs to analyze RTSP video streams from IP cameras. However, I’ve had problems accessing these cameras via RTSP, so the team provided me with some sample files (mp4 files). I still need to work on the solution and, so as not to change my architecture, is there a way to simulate a video stream from one of these mp4 files behind an RTSP URL using Python? My solution architecture is entirely on GCP, and I need this RTSP URL to work there too.
I’m sorry if this is a stupid question, but it’s outside my area of expertise.
Since I work with GCP, I know that Vertex AI Vision would let me simulate streaming from an mp4 file, but I can’t use it (that tool’s streaming location needs to be in us-central1 or europe-west4, and those locations are not allowed for my project).
You can start a local client-side player for the stream. Analyzing real IP cameras takes some additional work: if the cameras are open source or expose a library-based video source, the driver or controller should offer a way to access the live video from anywhere on the hosting machine (for instance when the live video is served through a server URL), while some cameras only share that source through their own software or platform. Here we will assume the video is served through a server and a port.
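For the local client-side player part, a minimal sketch like the following can be used (this assumes opencv-python is installed and that the GStreamer RTSP server shown further down in this answer is already running and serving rtsp://127.0.0.1:8554/sailormoon):

import cv2

# Assumed local URL: replace with the RTSP URL your server actually exposes
RTSP_URL = "rtsp://127.0.0.1:8554/sailormoon"

cap = cv2.VideoCapture(RTSP_URL)
if not cap.isOpened():
    raise RuntimeError("Could not open the RTSP stream: " + RTSP_URL)

while True:
    ok, frame = cap.read()
    if not ok:  # stream ended or was dropped
        break
    cv2.imshow("RTSP client", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to stop playback
        break

cap.release()
cv2.destroyAllWindows()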
(Screenshot: server and client-side result.)
Now, because we need to implement this in GCP, my recommendation is to first deploy a Linux-based virtual machine or container with a runtime stack compatible with everything needed for streaming (the fastest option is a VM, but you will then need to authorize a few firewall rules and ports for that VM in GCP): https://cloud.google.com/compute/docs/create-linux-vm-instance
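If you prefer to open the RTSP port from Python rather than from the console or gcloud, a minimal sketch with the google-cloud-compute client library could look like this (the project ID and rule name are placeholders, and TCP 8554 is assumed only because it is GstRtspServer's default service port):

from google.cloud import compute_v1

def allow_rtsp_ingress(project_id, rule_name="allow-rtsp-8554"):
    """Creates an ingress firewall rule opening TCP 8554 on the default network."""
    allowed = compute_v1.Allowed()
    allowed.I_p_protocol = "tcp"  # field name as generated by the client library
    allowed.ports = ["8554"]      # assumed port: GstRtspServer's default

    rule = compute_v1.Firewall()
    rule.name = rule_name
    rule.direction = "INGRESS"
    rule.network = "global/networks/default"
    rule.allowed = [allowed]
    rule.source_ranges = ["0.0.0.0/0"]  # tighten this to your clients' IP ranges

    client = compute_v1.FirewallsClient()
    client.insert(project=project_id, firewall_resource=rule).result()

# allow_rtsp_ingress("your-project-id")  # placeholder project ID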
The full code needs some additional work. You can run a live listener against the IP cameras and constantly save chunks of the video while streaming them at the same time, or record the video to a file locally and upload it to your GCP Cloud Storage bucket: https://cloud.google.com/storage/docs/creating-buckets
To upload the file in the file-to-cloud-to-stream flow you can use: https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python
from google.cloud import storage

def upload_blob(bucket_name, source_file_name, destination_blob_name):
    """Uploads a file to the bucket."""
    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"
    # The path to your file to upload
    # source_file_name = "local/path/to/file"
    # The ID of your GCS object
    # destination_blob_name = "storage-object-name"

    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)

    # Optional: set a generation-match precondition to avoid potential race conditions
    # and data corruptions. The request to upload is aborted if the object's
    # generation number does not match your precondition. For a destination
    # object that does not yet exist, set the if_generation_match precondition to 0.
    # If the destination object already exists in your bucket, set instead a
    # generation-match precondition using its generation number.
    generation_match_precondition = 0

    blob.upload_from_filename(source_file_name, if_generation_match=generation_match_precondition)

    print(
        f"File {source_file_name} uploaded to {destination_blob_name}."
    )
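Going the other way (cloud-to-file), the VM that will serve the stream needs the mp4 on local disk, because the GStreamer filesrc in the script below takes a local path. A minimal sketch with placeholder bucket/object/path names (mounting the bucket with gcsfuse is another option):

from google.cloud import storage

def download_blob(bucket_name, source_blob_name, destination_file_name):
    """Downloads an object from the bucket to a local file."""
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(source_blob_name)
    blob.download_to_filename(destination_file_name)
    print(f"Downloaded {source_blob_name} to {destination_file_name}.")

# Placeholder names: make the sample video available to the RTSP server below
# download_blob("your-bucket-name", "sailormoon.mp4", "/home/jbsidis/Downloads/sailormoon.mp4")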
The full code (serving the mp4 file over RTSP; it also includes a commented-out PyAV snippet for capturing a camera/RTSP stream back into a video file):
#!/usr/bin/env python
# sudo apt install gir1.2-gst-rtsp-server-1.0

# The commented-out block below saves an RTSP stream to a file. Once the real
# cameras are reachable you can use it to record their streams, and the GStreamer
# server below can then be restarted to serve the recorded video instead.
## pip install av
##import av
##
##source = av.open('rtsp://127.0.0.1:8554/sailormoon', metadata_encoding='utf-8')  # the URL served by the gi RTSP server below
##output = av.open('test-2-temp.mp4', mode='w')  # container format is inferred from the .mp4 extension
##
##in_to_out = {}
##
##for i, stream in enumerate(source.streams):
##    if (
##        (stream.type == 'audio') or
##        (stream.type == 'video') or
##        (stream.type == 'subtitle') or
##        (stream.type == 'data')
##    ):
##        in_to_out[stream] = ostream = output.add_stream(template=stream)
##        ostream.options = {}
##
##count = 0
##for i, packet in enumerate(source.demux()):
##    try:
##        if packet.dts is None:
##            continue
##
##        packet.stream = in_to_out[packet.stream]
##        output.mux(packet)
##        count += 1
##        if count > 200:  # stop after ~200 packets; remove for a longer recording
##            break
##
##    except KeyboardInterrupt:  # stop cleanly on Ctrl+C
##        output.close()
##        break
##
##output.close()
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GstRtspServer, GObject, GLib
loop = GLib.MainLoop()
Gst.init(None)
class Jbsidis(GstRtspServer.RTSPMediaFactory):
    def __init__(self):
        GstRtspServer.RTSPMediaFactory.__init__(self)

    def do_create_element(self, url):
        # Path to the mp4 file; on a GCP VM, download it from your Cloud Storage
        # bucket to local disk first (or mount the bucket, e.g. with gcsfuse).
        src_demux = "filesrc location=/home/jbsidis/Downloads/sailormoon.mp4 ! qtdemux name=demux"
        h264_transcode = "demux.video_0"
        # This assumes the mp4 already contains H.264 video; if it needs transcoding,
        # use instead: h264_transcode = "demux.video_0 ! decodebin ! queue ! x264enc"
        pipeline = "{0} {1} ! queue ! rtph264pay name=pay0 config-interval=1 pt=96".format(src_demux, h264_transcode)
        print("Serving RTSP video: " + pipeline)
        return Gst.parse_launch(pipeline)

class ServingRTSP():
    def __init__(self):
        self.rtspServer = GstRtspServer.RTSPServer()
        factory = Jbsidis()
        factory.set_shared(True)
        mountPoints = self.rtspServer.get_mount_points()
        mountPoints.add_factory("/sailormoon", factory)
        self.rtspServer.attach(None)

if __name__ == '__main__':
    s = ServingRTSP()
    loop.run()
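Once the script is running on the VM and TCP port 8554 (GstRtspServer's default service port) is open in the firewall, clients inside or outside GCP can consume rtsp://<VM-external-IP>:8554/sailormoon just as they would a real camera URL, so the rest of your architecture does not need to change.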
There will be some cost involved for the GCP resources and for bandwidth, so always check the pricing.