Can WebRTC be used on the server side for fetching stream frames?
I would like to stream camera video from the browser (PC / Mac / Android) to a Java server in order to run some CV algorithms on the frames.
Would WebRTC be a good solution for that? I know it's mostly used for two-endpoint communication, but I was wondering whether I can use it in only one direction, with a Java server that can somehow handle each frame of the stream.
Any known solutions / projects?
Is there another good technology (both a cross-platform client and a Java server) for live streaming and analysis?
Solution 1:[1]
There are two approaches:
1) ignore WebRTC and its peer-to-peer (or peer-to-server) capabilities, grab the frame locally in the browser, and use HTTP to send it to the server. https://webrtchacks.com/webrtc-cv-tensorflow/ has the details for that.
2) use WebRTC to transfer an actual stream to the other side. This requires the server to understand WebRTC. https://doc-kurento.readthedocs.io/en/6.9.0/tutorials/java/tutorial-magicmirror.html shows an example, even written in Java.
The HTTP POST approach is simpler, but its bandwidth requirements make it somewhat unsuitable for high-fps applications.
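The server side of approach 1 can be sketched with nothing but the JDK: the browser would grab a frame from a `<canvas>` (e.g. `canvas.toBlob('image/jpeg')`) and POST it, and the server decodes the JPEG into a `BufferedImage` for the CV code. The `/frame` path and `FrameReceiver` name below are illustrative, not taken from the linked article; the `main` method simulates one browser upload so the sketch is self-contained.

```java
import com.sun.net.httpserver.HttpServer;

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FrameReceiver {

    /** Decode one POSTed JPEG frame into a BufferedImage for CV processing. */
    static BufferedImage decode(byte[] jpegBytes) throws IOException {
        return ImageIO.read(new ByteArrayInputStream(jpegBytes));
    }

    public static void main(String[] args) throws Exception {
        // Listen on an ephemeral port; "/frame" is an illustrative path.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/frame", exchange -> {
            BufferedImage frame = decode(exchange.getRequestBody().readAllBytes());
            // ...run the CV algorithm on `frame` here...
            System.out.println("got frame " + frame.getWidth() + "x" + frame.getHeight());
            exchange.sendResponseHeaders(204, -1); // no response body
            exchange.close();
        });
        server.start();

        // Simulate one browser upload so the example runs end to end.
        BufferedImage fake = new BufferedImage(320, 240, BufferedImage.TYPE_INT_RGB);
        ByteArrayOutputStream jpeg = new ByteArrayOutputStream();
        ImageIO.write(fake, "jpg", jpeg);
        HttpRequest post = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:" + server.getAddress().getPort() + "/frame"))
                .POST(HttpRequest.BodyPublishers.ofByteArray(jpeg.toByteArray()))
                .build();
        HttpClient.newHttpClient().send(post, HttpResponse.BodyHandlers.discarding());
        server.stop(0);
    }
}
```

Posting one JPEG per frame is exactly where the bandwidth cost mentioned above comes from: every frame is a full image upload, with no inter-frame compression.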
Solution 2:[2]
This has been answered many times all over the web; of course you can get the frames if you implement WebRTC in your server code, most probably by using Google's native WebRTC API.
So it's still peer-to-peer, but one peer is a web browser and the other peer is your server. The real issue that may invalidate your idea is that you are receiving highly compressed video (VP8/VP9/H264), so the raw frames you get after decompression carry compression artifacts; consider what quality of CV results you can expect from working on such frames.
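As for what those decompressed frames look like: WebRTC implementations typically hand the receiver decoded frames in I420 (YUV 4:2:0) layout, so before running most CV code you would convert them to RGB. Below is a minimal BT.601 limited-range conversion sketch; the class and method names are mine, not from any particular WebRTC binding.

```java
/**
 * Sketch: convert one I420 (YUV 4:2:0) frame, as typically delivered by a
 * WebRTC decoder, into packed ARGB pixels suitable for a BufferedImage.
 * Uses BT.601 limited-range coefficients (the common default for WebRTC video).
 */
public final class I420ToRgb {

    private static int clamp(int v) { return v < 0 ? 0 : Math.min(v, 255); }

    public static int[] toArgb(byte[] y, byte[] u, byte[] v, int width, int height) {
        int[] argb = new int[width * height];
        int chromaWidth = (width + 1) / 2; // U and V are subsampled 2x2
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int yy = (y[row * width + col] & 0xFF) - 16;
                int ci = (row / 2) * chromaWidth + (col / 2);
                int d = (u[ci] & 0xFF) - 128;
                int e = (v[ci] & 0xFF) - 128;
                int r = clamp((298 * yy + 409 * e + 128) >> 8);
                int g = clamp((298 * yy - 100 * d - 208 * e + 128) >> 8);
                int b = clamp((298 * yy + 516 * d + 128) >> 8);
                argb[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return argb;
    }

    public static void main(String[] args) {
        // 2x2 mid-gray frame: Y=128 everywhere, neutral chroma (U=V=128).
        byte[] y = {(byte) 128, (byte) 128, (byte) 128, (byte) 128};
        byte[] u = {(byte) 128};
        byte[] v = {(byte) 128};
        int[] px = toArgb(y, u, v, 2, 2);
        System.out.println(String.format("%08X", px[0])); // prints FF828282
    }
}
```

Note that this conversion cannot restore detail lost to the codec: blocking and ringing artifacts from VP8/VP9/H264 survive the YUV-to-RGB step, which is exactly the quality concern raised above.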
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Author |
|---|---|
| Solution 1 | Philipp Hancke |
| Solution 2 | pdoherty926 |
