Stream to Facebook Live using OpenCV
I am planning to stream a video file to Facebook Live, but I want to programmatically edit its frames first, for example by adding text that depends on the content. My problem is that I don't know how to properly send the data to Facebook Live. I tried ffmpeg, but it doesn't work.
Here is the code I tried:
import subprocess
import cv2

rtmp_url = "rtmps://live-api-s.facebook.com:443/rtmp/FB-1081417119476224-0-AbwwMK91tFTjFy2j"
path = "7.mp4"

cap = cv2.VideoCapture(path)

# gather video info for ffmpeg
fps = int(cap.get(cv2.CAP_PROP_FPS))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# command and params for ffmpeg: read raw BGR frames from stdin,
# encode with x264, and publish an FLV stream to the RTMPS URL
command = ['ffmpeg',
           '-y',
           '-f', 'rawvideo',
           '-vcodec', 'rawvideo',
           '-pix_fmt', 'bgr24',
           '-s', f"{width}x{height}",
           '-r', str(fps),
           '-i', '-',
           '-c:v', 'libx264',
           '-pix_fmt', 'yuv420p',
           '-preset', 'ultrafast',
           '-f', 'flv',
           rtmp_url]

# use subprocess and a pipe to feed frame data to ffmpeg
p = subprocess.Popen(command, stdin=subprocess.PIPE)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        print("frame read failed")
        break
    # YOUR CODE FOR PROCESSING FRAME HERE
    # write the raw BGR frame bytes to ffmpeg's stdin
    p.stdin.write(frame.tobytes())

# release the capture and let ffmpeg flush its buffers and exit cleanly
cap.release()
p.stdin.close()
p.wait()
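For the processing step, here is a minimal sketch of what could replace the placeholder comment, using cv2.putText to draw a caption onto each frame before it is piped to ffmpeg. The text, position, and styling are arbitrary examples, not anything the question specifies:

# example frame processing: overlay a caption near the top-left corner
cv2.putText(frame,
            "LIVE",                     # text to draw (placeholder)
            (50, 50),                   # bottom-left corner of the text
            cv2.FONT_HERSHEY_SIMPLEX,   # font face
            1.0,                        # font scale
            (0, 255, 0),                # color in BGR (green here)
            2)                          # line thickness in pixels

Because the frame is modified in place before p.stdin.write(frame.tobytes()), the overlay ends up in the encoded stream.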
Solution 1:[1]
If I understand your description correctly, each output from the first pipeline already contains an array of all the elements for a particular user and a particular window, so the data is already partitioned. If that is the input to the second pipeline, then you can simply read each file and process the contents as desired.
On the other hand, if your second pipeline is unrelated and has all the data for a user in a single array, but the points are not grouped into windows, then you will need to do some work. How much depends on the size of the data. Since it is a single numpy array, I will assume it is manageable in memory. In that case, the most efficient approach is to do it yourself: write a DoFn that uses numpy bulk operations to assign the individual data points to sliding windows and then group them. Of course, you'll get some data blowup, since each point lands in multiple overlapping windows. If this exceeds what you can process on a single machine, then you'll want to go ahead and use Beam's primitives.
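As an illustration of that do-it-yourself approach, here is a minimal numpy sketch of the core window-assignment logic, written standalone rather than inside a Beam DoFn. The function name, the assumption of integer timestamps (e.g. epoch seconds), and the size/period parameters are all illustrative, not taken from the answer:

import numpy as np

def assign_sliding_windows(timestamps, values, size, period):
    """Replicate each (timestamp, value) point into every sliding window
    [start, start + size) whose start is a multiple of `period`, then
    group the values by window start."""
    timestamps = np.asarray(timestamps)
    values = np.asarray(values)
    # latest window start containing each point
    last = (timestamps // period) * period
    # earliest window start containing each point: the smallest
    # multiple of `period` strictly greater than t - size
    first = -(-(timestamps - size + 1) // period) * period
    counts = (last - first) // period + 1  # windows per point (the blowup)
    # bulk-replicate each value once per window it belongs to
    rep_values = np.repeat(values, counts)
    offsets = np.concatenate([np.arange(c) for c in counts]) * period
    rep_starts = np.repeat(first, counts) + offsets
    # group the replicated values by window start
    order = np.argsort(rep_starts, kind="stable")
    starts, cuts = np.unique(rep_starts[order], return_index=True)
    groups = np.split(rep_values[order], cuts[1:])
    return dict(zip(starts.tolist(), groups))

# e.g. 10-second windows sliding every 5 seconds:
# assign_sliding_windows(np.array([3, 7, 12]), np.array([1.0, 2.0, 3.0]), 10, 5)
# -> {-5: [1.0], 0: [1.0, 2.0], 5: [2.0, 3.0], 10: [3.0]}

The bulk np.repeat/np.argsort steps are what make this efficient on a single machine; the intermediate arrays grow by roughly size/period, which is the data blowup mentioned above.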
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Kenn Knowles |
