How to get real-time image processing using a constrained high speed capture session in Android?
I'm working on a project that requires real-time image processing using an Android smartphone's camera (specifically, a Samsung Galaxy S7).
Briefly, the main requirements/conditions are:
- I need to capture and process the images in real-time at high frame rates (ideally 120fps or 240fps, but 60fps would be a good start);
- There is no need to preview the images on the display;
- I just need grayscale images (in the NV21 format, the luminance plane is the first part of the image data). So, for efficiency, I would prefer not to convert the images from a native format to JPEG and then decode that just to recover the grayscale data;
- I don't need high-resolution images (640x480 would be fine), and the frame processing itself is relatively simple and can be done very fast (I just need to scan the grayscale data and extract some basic information).
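To illustrate the grayscale point above: in NV21 the first `width * height` bytes are the luminance (Y) plane, so the data can be scanned directly with no JPEG round trip. A minimal sketch (the mean-luminance computation is just a stand-in for whatever per-frame analysis the application performs):

```java
// Sketch: reading the grayscale (Y) plane directly from an NV21 buffer.
// In NV21, the first width*height bytes are the luminance plane, so no
// JPEG encode/decode is needed to get grayscale data.
public class GrayscaleScan {

    // Returns the average luminance (0-255) of an NV21 frame.
    static double meanLuminance(byte[] nv21, int width, int height) {
        long sum = 0;
        int pixels = width * height; // Y plane occupies the first width*height bytes
        for (int i = 0; i < pixels; i++) {
            sum += nv21[i] & 0xFF; // Java bytes are signed; mask back to 0-255
        }
        return (double) sum / pixels;
    }

    public static void main(String[] args) {
        int w = 4, h = 2;
        byte[] frame = new byte[w * h * 3 / 2]; // NV21 = Y plane + interleaved VU plane
        java.util.Arrays.fill(frame, 0, w * h, (byte) 100); // uniform gray Y plane
        System.out.println(meanLuminance(frame, w, h)); // prints 100.0
    }
}
```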
I have tried a conventional capture using the Camera2 API with an ImageReader surface, but the best I could get was 37fps (just capturing, not processing), even after turning off the auto-control modes, setting wider target FPS ranges in the capture request, changing the exposure time, frame duration, etc.
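For reference, the conventional path described above looks roughly like this. This is only a sketch: camera open/permission handling, session creation and error handling are omitted, and the 640x480 size, the frame duration, and the 8 ms exposure time are assumptions, not values from the question.

```java
// Sketch of the conventional Camera2 + ImageReader capture path.
// Assumes cameraDevice, session and backgroundHandler already exist.
ImageReader reader = ImageReader.newInstance(640, 480, ImageFormat.YUV_420_888, 4);
reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image == null) return;
    ByteBuffer yPlane = image.getPlanes()[0].getBuffer(); // grayscale (Y) data only
    // ... scan yPlane here ...
    image.close(); // release the buffer quickly or capture stalls
}, backgroundHandler);

CaptureRequest.Builder builder =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
builder.addTarget(reader.getSurface());
// With auto controls off, timing must be set manually (values are assumptions):
builder.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_OFF);
builder.set(CaptureRequest.CONTROL_AE_MODE, CameraMetadata.CONTROL_AE_MODE_OFF);
builder.set(CaptureRequest.SENSOR_FRAME_DURATION, 16_666_666L); // ~60fps in ns
builder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, 8_000_000L);   // 8 ms
session.setRepeatingRequest(builder.build(), null, backgroundHandler);
```

Even with these settings, the ImageReader path tends to be throughput-limited by buffer delivery, which matches the ~37fps ceiling observed above.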
Now, I'm trying to solve the problem using the CameraConstrainedHighSpeedCaptureSession class from the Camera2 API.
However, the Android reference material says that it should be used for the high speed VIDEO RECORDING use case.
Also, according to the documentation, the method createConstrainedHighSpeedCaptureSession requires a surface that
must be either a video encoder surface (which could be acquired from a MediaRecorder) or a preview surface (obtained from a SurfaceView or SurfaceTexture).
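For completeness, the high speed path the documentation describes would look roughly like the sketch below. It is not a working solution to the question (the frames still go to the encoder surface, not to application code); 640x480 at high FPS is an assumption, so the supported size/FPS combinations are queried first.

```java
// Sketch of the constrained high speed session path (API 23+).
// Assumes cameraDevice, cameraCharacteristics, a prepared mediaRecorder
// and backgroundHandler already exist.
StreamConfigurationMap map = cameraCharacteristics.get(
        CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Range<Integer>[] fpsRanges = map.getHighSpeedVideoFpsRangesFor(new Size(640, 480));

Surface recorderSurface = mediaRecorder.getSurface(); // video encoder surface
cameraDevice.createConstrainedHighSpeedCaptureSession(
        Arrays.asList(recorderSurface),
        new CameraCaptureSession.StateCallback() {
            @Override public void onConfigured(CameraCaptureSession session) {
                try {
                    CameraConstrainedHighSpeedCaptureSession hs =
                            (CameraConstrainedHighSpeedCaptureSession) session;
                    CaptureRequest.Builder b = cameraDevice.createCaptureRequest(
                            CameraDevice.TEMPLATE_RECORD);
                    b.addTarget(recorderSurface);
                    b.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, fpsRanges[0]);
                    // High speed sessions take a burst list, not a single request.
                    hs.setRepeatingBurst(hs.createHighSpeedRequestList(b.build()),
                            null, backgroundHandler);
                } catch (CameraAccessException e) { /* handle */ }
            }
            @Override public void onConfigureFailed(CameraCaptureSession session) { }
        }, backgroundHandler);
```

Note that an ImageReader surface is not accepted by createConstrainedHighSpeedCaptureSession, which is exactly the limitation the rest of this question is trying to work around.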
I think that such a capture mode uses a special kind of surface/buffer (probably a faster and more efficient one).
Using a preview surface (SurfaceView, SurfaceTexture), I think I could get stuck at 60fps (the refresh rate of the display). So,
I'm looking for a way to get access to the BufferQueue under the MediaRecorder's surface, maybe using the NDK/JNI.
The idea is to get access to the raw frame data passed from the camera device to the MediaRecorder, before the frames are encoded for video.
Is this possible? How could it be achieved? Is there a better way?
As an alternative, I have read a little about FileDescriptor, with the intention of redirecting the video frames generated by MediaRecorder
to a buffer in memory and then trying to get access to those frames as they are generated, but it seems very inefficient, and
the delay may not be tolerable for the application.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
