Using an FFmpeg command to read frames and show them with OpenCV's imshow function
I am trying to grab frames with an ffmpeg command and display them with the OpenCV function cv2.imshow(). With this snippet the RTSP stream comes out as a garbled black-and-white image. The output is shown at the link below [ output of FFmpeg link]. I have also tried the ffplay command, but it displays the video directly, so I cannot access the individual frames or apply any image processing.
import cv2
import numpy
import subprocess as sp

command = ['C:/ffmpeg/ffmpeg.exe',
           '-i', 'rtsp://192.168.1.12/media/video2',
           '-f', 'image2pipe',
           '-pix_fmt', 'rgb24',
           '-vcodec', 'rawvideo', '-']

pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)
while True:
    raw_image = pipe.stdout.read(420*360*3)
    # transform the bytes read into a numpy array
    image = numpy.fromstring(raw_image, dtype='uint8')
    image = image.reshape((360, 420, 3))
    cv2.imshow('hello', image)
    cv2.waitKey(1)
    # throw away the data in the pipe's buffer
    pipe.stdout.flush()
Solution 1:[1]
You're using the wrong output format; it should be -f rawvideo. That should fix your primary problem. The current -f image2pipe wraps the RGB data in an image container (not sure which one; perhaps BMP, since the rawvideo codec is in use), so the frames are not displayed correctly.
Other tips:
- If your data is grayscale, use -pix_fmt gray and read 420*360 bytes at a time.
- I don't know the difference in speed, but I use np.frombuffer instead of np.fromstring (which is deprecated).
- pipe.stdout.flush() is a dangerous move IMO, as the buffer may hold a partial frame. Consider instead setting bufsize to an exact integer multiple of the frame size in bytes.
- If you expect processing to take much longer than the input frame rate allows, you may want to reduce the output frame rate with -r to match the processing rate (to avoid extraneous data transfer from ffmpeg to Python).
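Putting these fixes together, a minimal corrected loop might look like the sketch below. The RTSP URL and the 420x360 rgb24 frame size are taken from the question and are assumed to match the actual stream; the bytes_to_frame helper is just an illustrative name. Note also that cv2.imshow expects BGR channel order, so an rgb24 frame should be converted before display:

```python
import subprocess as sp
import numpy as np

WIDTH, HEIGHT = 420, 360
FRAME_SIZE = WIDTH * HEIGHT * 3  # bytes per rgb24 frame

command = ['ffmpeg',
           '-i', 'rtsp://192.168.1.12/media/video2',
           '-f', 'rawvideo',        # raw frames, no image container
           '-pix_fmt', 'rgb24',
           '-']

def bytes_to_frame(raw, width=WIDTH, height=HEIGHT):
    """Convert one frame's worth of raw rgb24 bytes to an HxWx3 array."""
    if len(raw) != width * height * 3:
        return None  # short read, e.g. the stream ended mid-frame
    return np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 3))

if __name__ == '__main__':
    import cv2
    # bufsize is an exact multiple of the frame size, so no partial
    # frame lingers in the buffer and no flush() is needed
    pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=FRAME_SIZE * 10)
    while True:
        frame = bytes_to_frame(pipe.stdout.read(FRAME_SIZE))
        if frame is None:
            break
        # cv2.imshow expects BGR, so swap channels before display
        cv2.imshow('stream', cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
```

The __main__ guard keeps the display loop (which needs a reachable stream and a GUI) separate from the pure byte-to-frame conversion, which can be tested on its own.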
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | kesh |
