Issue with CameraX output on some device models

I am using CameraX in my Android application. When I capture photos in debug mode, everything is fine, but when the app is installed from the Google Play Store, a very limited number of phone models (e.g. the Huawei Y5 2019) do not capture photos properly.

The problem is that the width and height of the captured images are always equal to 0.

I tested the app (debug mode) on a Huawei Y5 2019 and everything was fine. I then changed the build variant to release mode, tried again, and everything was also fine.

I also created a release APK, installed it directly on the phone, and everything went well.

I tried to create a release AAB, but I couldn't install it directly.

The problem only occurs when the app is downloaded from the Google Play Store.

I also noticed that on a very limited number of old Samsung models, the captured image is completely black.

Can someone help me, please? I want to fix this for my users: capturing images is an essential function of the application, and many users have run into this problem.

I use these versions of the CameraX libraries (I had this problem with previous versions as well):

implementation 'androidx.camera:camera-core:1.0.2'
implementation 'androidx.camera:camera-camera2:1.0.2'
implementation 'androidx.camera:camera-lifecycle:1.0.2'
implementation 'androidx.camera:camera-view:1.1.0-rc01'
implementation 'androidx.camera:camera-extensions:1.1.0-rc01'

And this is my code:

/**
 *  Initialize CameraX, and prepare to bind the camera use cases
 */
private fun setupCamera()
{
    val cameraProviderFuture : ListenableFuture<ProcessCameraProvider> = ProcessCameraProvider.getInstance(this)
    
    cameraProviderFuture.addListener({
        
        cameraProvider = cameraProviderFuture.get()
        
        lensFacing = when
        {
            hasBackCamera() -> CameraSelector.LENS_FACING_BACK
            hasFrontCamera() -> CameraSelector.LENS_FACING_FRONT
            else -> throw IllegalStateException("Back and front camera are unavailable")
        }
        
        bindCameraUseCases()
        setupCameraGestures()
        
    }, ContextCompat.getMainExecutor(this))
}


/**
 *  Declare and bind preview, capture and analysis use cases.
 */
private fun bindCameraUseCases()
{
    lifecycleScope.launch {
        
        val cameraProvider : ProcessCameraProvider = cameraProvider ?: throw IllegalStateException("Camera initialization failed.")
        
        val defaultCameraSelector : CameraSelector = CameraSelector.Builder()
            .requireLensFacing(lensFacing)
            .build()
        
        val finalCameraSelector : CameraSelector = try
        {
            // Try to apply extensions like HDR, NIGHT.
            val extensionsManager : ExtensionsManager = ExtensionsManager.getInstanceAsync(this@ImageCaptureActivity, cameraProvider).await()
            
            extensionsManager.getExtensionEnabledCameraSelector(defaultCameraSelector, ExtensionMode.HDR)
        }
        catch (exception : Exception)
        {
            defaultCameraSelector
        }
        
        preview = Preview.Builder()
            // We request aspect ratio but no resolution
            .setTargetAspectRatio(AspectRatio.RATIO_16_9)
            // Set initial target rotation
            //.setTargetRotation(rotation)
            .build()
        
        imageCapture = ImageCapture.Builder()
            // Set initial target rotation, we will have to call this again if rotation changes
            // during the lifecycle of this use case
            //.setTargetRotation(rotation)
            //.setFlashMode(ImageCapture.FLASH_MODE_AUTO)
            .setCaptureMode(ImageCapture.CAPTURE_MODE_MAXIMIZE_QUALITY)
            // We request aspect ratio but no resolution to match preview config, but letting
            // CameraX optimize for whatever specific resolution best fits our use cases
            .setTargetAspectRatio(AspectRatio.RATIO_16_9)
            .setJpegQuality(100)
            .build()
        
        imageAnalyzer = ImageAnalysis.Builder()
            // We request aspect ratio but no resolution
            //.setTargetRotation(rotation)
            .setTargetAspectRatio(AspectRatio.RATIO_16_9)
            .build()

        imageAnalyzer?.setAnalyzer(cameraExecutor, LuminosityAnalyzer {})
        
        // Must unbind the use-cases before rebinding them
        cameraProvider.unbindAll()
        
        try
        {
            // A variable number of use-cases can be passed here -
            // camera provides access to CameraControl & CameraInfo
            camera = cameraProvider.bindToLifecycle(this@ImageCaptureActivity, finalCameraSelector, preview, imageCapture, imageAnalyzer)
            
            // Attach the viewfinder's surface provider to preview use case
            preview?.setSurfaceProvider(binding.cameraPreview.surfaceProvider)
            
            setupAutofocus()
        }
        catch (exception : Exception)
        {
            exception.printStackTrace()
        }
    }
}


private fun setupAutofocus()
{
    val autoFocusPoint : MeteringPoint = SurfaceOrientedMeteringPointFactory(1f, 1f)
        .createPoint(.5f, .5f)
    
    val autoFocusAction : FocusMeteringAction = FocusMeteringAction.Builder(autoFocusPoint, FocusMeteringAction.FLAG_AF)
        .setAutoCancelDuration(2, TimeUnit.SECONDS)
        .build()
    
    camera?.cameraControl?.startFocusAndMetering(autoFocusAction)
}


private fun buildOrientationHandler() : OrientationEventListener
{
    return object : OrientationEventListener(this)
    {
        override fun onOrientationChanged(orientation : Int)
        {
            // Use the OrientationEventListener constant rather than Glide's ImageHeaderParser.UNKNOWN_ORIENTATION
            if (orientation == OrientationEventListener.ORIENTATION_UNKNOWN) return
            
            val rotation : Int = when (orientation)
            {
                in 45 until 135 -> Surface.ROTATION_270
                in 135 until 225 -> Surface.ROTATION_180
                in 225 until 315 -> Surface.ROTATION_90
                else -> Surface.ROTATION_0
            }
            
            imageAnalyzer?.targetRotation = rotation
            imageCapture?.targetRotation = rotation
        }
    }
}


fun captureImage()
{
    if (!permissionsOk()) return
    
    preview?.setSurfaceProvider(null)
    binding.noMove.isVisible = true
    binding.progress.show()
    
    val photoFile : File = StorageUtils.createImage(imagePrefixName)
    
    // Setup image capture metadata
    val metadata : Metadata = Metadata().also {
        it.location = locationManager.lastKnownLocation
        // Mirror image when using the front camera
        it.isReversedHorizontal = lensFacing == CameraSelector.LENS_FACING_FRONT
    }
    
    // Create output options object which contains file + metadata
    val outputOptions : ImageCapture.OutputFileOptions = ImageCapture.OutputFileOptions.Builder(photoFile)
        .setMetadata(metadata)
        .build()
    
    // Setup image capture listener which is triggered after photo has been taken
    imageCapture?.takePicture(outputOptions, cameraExecutor, object : ImageCapture.OnImageSavedCallback
    {
        override fun onImageSaved(output : ImageCapture.OutputFileResults)
        {
            binding.root.post {
                binding.progress.hide()
                binding.noMove.isVisible = false
                preview?.setSurfaceProvider(binding.cameraPreview.surfaceProvider)
            }
            
            setGalleryThumbnail(photoFile)
            displayPicture(photoFile)
        }
        
        override fun onError(exception : ImageCaptureException)
        {
            exception.printStackTrace()
        }
    })
}


Solution 1:[1]

Theoretically speaking, hash maps are the fastest containers for what you're trying to achieve, with O(1) average complexity. But in practice, there are a couple of things you can do.
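As a baseline, the hash-map approach itself is only a few lines with `std::unordered_map` (a minimal sketch; the function name and `int` key type are my own assumptions, since the question doesn't show the surrounding code):

```cpp
#include <cstddef>
#include <unordered_map>
#include <vector>

// Map each value to its position in the input; average O(1) lookup,
// independent of how large or sparse the value range is.
std::unordered_map<int, int> build_index_map(const std::vector<int>& values)
{
    std::unordered_map<int, int> index_map;
    index_map.reserve(values.size()); // avoid rehashing during insertion
    for (std::size_t i = 0; i < values.size(); ++i)
        index_map[values[i]] = static_cast<int>(i);
    return index_map;
}
```

Calling `reserve` up front matters in practice: rehashing mid-insertion is where `unordered_map` loses much of its theoretical advantage.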

First of all, you can keep multiple implementations backed by different data structures and choose which one to use based on the given indices at runtime (using abstract classes or other similar mechanisms). You can do this with the structures I propose below and pick one at runtime.
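That runtime dispatch could look something like this sketch (the interface, class names, and the range threshold are all my own assumptions, not part of the original answer): a common lookup interface, with a factory that picks a direct-index vector for small ranges and falls back to a hash map otherwise.

```cpp
#include <algorithm>
#include <cstddef>
#include <memory>
#include <unordered_map>
#include <vector>

// Common interface: map a value back to its position in the original input.
struct IndexLookup
{
    virtual ~IndexLookup() = default;
    virtual int position_of(int value) const = 0;
};

// Hash-based fallback that works for any value range.
struct HashLookup : IndexLookup
{
    std::unordered_map<int, int> map;
    explicit HashLookup(const std::vector<int>& values)
    {
        for (std::size_t i = 0; i < values.size(); ++i)
            map[values[i]] = static_cast<int>(i);
    }
    int position_of(int value) const override { return map.at(value); }
};

// Direct-index variant for small ranges (the first method below).
struct VectorLookup : IndexLookup
{
    int min;
    std::vector<int> table;
    VectorLookup(const std::vector<int>& values, int min_value, int max_value)
        : min(min_value), table(static_cast<std::size_t>(max_value - min_value) + 1, -1)
    {
        for (std::size_t i = 0; i < values.size(); ++i)
            table[values[i] - min] = static_cast<int>(i);
    }
    int position_of(int value) const override { return table[value - min]; }
};

// Pick an implementation at runtime based on the observed range.
std::unique_ptr<IndexLookup> make_lookup(const std::vector<int>& values)
{
    auto [lo, hi] = std::minmax_element(values.begin(), values.end());
    if (*hi - *lo < 1000000) // threshold is an arbitrary assumption; tune for your data
        return std::make_unique<VectorLookup>(values, *lo, *hi);
    return std::make_unique<HashLookup>(values);
}
```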

  1. If you know that the range of the data is small (or you can detect it at runtime), then the problem is easy. Just create a vector whose size matches the range of the data and store each ordered index in it:
std::vector<int> indices = {/*data*/};
auto minmax = std::minmax_element(indices.begin(), indices.end());
int min = *minmax.first, max = *minmax.second, range = max - min;
std::vector<int> index_map(range + 1); // +1: valid offsets run from 0 to max - min inclusive
for (size_t i = 0; i < indices.size(); ++i) index_map[indices[i] - min] = i;

In other words, each value, shifted by the minimum, becomes a direct index into the vector, so a lookup is a single array access.

  2. If your range of data is large but the minimum spacing between values is also larger than 1, then you can use the previous method with a small modification:
std::vector<int> indices = {/*data*/};
auto minmax = std::minmax_element(indices.begin(), indices.end());
int min = *minmax.first, max = *minmax.second, range = max - min;

// Assuming indices are sorted in ascending order
int diff = std::numeric_limits<int>::max();
for (size_t i = 0; i + 1 < indices.size(); ++i) diff = std::min(diff, indices[i + 1] - indices[i]);

// diff can't be zero as long as the values are distinct
std::vector<int> index_map(range / diff + 1);
for (size_t i = 0; i < indices.size(); ++i) index_map[(indices[i] - min) / diff] = i;

Here we find the minimum spacing between indices and divide the shifted values by it, which shrinks the vector by a factor of diff.

  3. Use a third-party hash map that is optimized further (using vectorization, multi-threading, and other methods).

  4. Maybe you can try to use a weaker but faster hash function, since the number of indices is not large.
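For example, with `std::unordered_map` you can plug in a trivial hash that just returns the key. This is a sketch of the idea; note that some standard-library implementations (e.g. libstdc++) already use an identity hash for `int`, so measure before committing to it:

```cpp
#include <cstddef>
#include <unordered_map>

// A deliberately minimal "hash": no bit mixing at all. Fine when keys
// are already well distributed; degrades badly on clustered keys.
struct IdentityHash
{
    std::size_t operator()(int key) const noexcept
    {
        return static_cast<std::size_t>(key);
    }
};

// Drop-in replacement for the default hasher.
using FastIndexMap = std::unordered_map<int, int, IdentityHash>;
```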

I'll add to the list if I think of anything else.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
