Applying MPSImageGaussianBlur with depth data

I am trying to imitate the portrait mode of Apple's native camera.

The problem is that applying the blur effect with CIImage, using the depth data as a mask, is too slow for the live preview I want to show to the user.

My code for this is:

func blur(image: CIImage, mask: CIImage, orientation: UIImageOrientation = .up, blurRadius: CGFloat) -> UIImage? {
    let start = Date()
    let invertedMask = mask.applyingFilter("CIColorInvert")

    // Blur everywhere the (inverted) depth mask permits it.
    let output = image.applyingFilter("CIMaskedVariableBlur",
                                      withInputParameters: ["inputMask": invertedMask,
                                                            "inputRadius": blurRadius])

    // `context` is a CIContext created once elsewhere; creating one per call would be even slower.
    guard let cgImage = context.createCGImage(output, from: image.extent) else {
        return nil
    }
    let elapsed = Date().timeIntervalSince(start)
    print("took \(elapsed) seconds to apply blur")
    return UIImage(cgImage: cgImage, scale: 1.0, orientation: orientation)
}

I want to apply the blur on the GPU for better performance. For this task, I found this implementation provided by Apple here.

So in Apple's implementation, we have this code snippet:

/** Applies a Gaussian blur with a sigma value of 5.0.
 This is a pre-packaged convolution filter.
 */
class GaussianBlur: CommandBufferEncodable {
    let gaussian: MPSImageGaussianBlur

    required init(device: MTLDevice) {
        gaussian = MPSImageGaussianBlur(device: device, sigma: 5.0)
    }

    func encode(to commandBuffer: MTLCommandBuffer,
                sourceTexture: MTLTexture,
                destinationTexture: MTLTexture) {
        gaussian.encode(commandBuffer: commandBuffer,
                        sourceTexture: sourceTexture,
                        destinationTexture: destinationTexture)
    }
}
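
To see what running this wrapper involves, here is a minimal sketch of a driver for it. It assumes Apple's `CommandBufferEncodable` protocol from the sample and that `sourceTexture` and `destinationTexture` already exist (e.g. loaded with `MTKTextureLoader`) with matching size and pixel format; the function name `runBlur` is mine, not Apple's:

```swift
import Metal
import MetalPerformanceShaders

// Hypothetical driver for the GaussianBlur wrapper above.
func runBlur(sourceTexture: MTLTexture, destinationTexture: MTLTexture) {
    guard let device = MTLCreateSystemDefaultDevice(),
          let queue = device.makeCommandQueue(),
          let commandBuffer = queue.makeCommandBuffer() else { return }

    let blur = GaussianBlur(device: device)
    blur.encode(to: commandBuffer,
                sourceTexture: sourceTexture,
                destinationTexture: destinationTexture)

    commandBuffer.commit()
    // Blocks until the GPU finishes; in a live pipeline you would
    // attach a completion handler instead of waiting synchronously.
    commandBuffer.waitUntilCompleted()
}
```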

How can I feed the depth data into the filtering with the Metal blur version? In other words: how can I achieve the first code snippet's functionality with the performance of the second?



Solution 1

For anyone still looking: you need to obtain the view's currentDrawable inside the draw(in:) method, so implement MTKViewDelegate:

func makeBlur() {
    device = MTLCreateSystemDefaultDevice()
    commandQueue = device.makeCommandQueue()

    // framebufferOnly must be false so MPS can write into the drawable's texture.
    selfView.mtkView.device = device
    selfView.mtkView.framebufferOnly = false
    selfView.mtkView.delegate = self

    let textureLoader = MTKTextureLoader(device: device)
    if let image = self.backgroundSnapshotImage?.cgImage,
       let texture = try? textureLoader.newTexture(cgImage: image, options: nil) {
        sourceTexture = texture
    }
}

func draw(in view: MTKView) {
    if let currentDrawable = view.currentDrawable,
       let commandBuffer = commandQueue.makeCommandBuffer() {
        // Encode the blur directly into the drawable's texture.
        // (For real use, create the MPSImageGaussianBlur once and reuse it.)
        let gaussian = MPSImageGaussianBlur(device: device, sigma: 5)
        gaussian.encode(commandBuffer: commandBuffer,
                        sourceTexture: sourceTexture,
                        destinationTexture: currentDrawable.texture)
        commandBuffer.present(currentDrawable)
        commandBuffer.commit()
    }
}
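
Note that this answer blurs the whole texture uniformly and does not use the depth mask. One way to keep the depth-masked CIMaskedVariableBlur while still staying on the GPU (my own sketch, not part of the original answer) is to render the Core Image pipeline through a Metal-backed CIContext into the MTKView's drawable. This assumes `ciContext` was created once, e.g. in makeBlur() via `ciContext = CIContext(mtlDevice: device)`, and that `image`, `mask`, and `blurRadius` are the values from the question:

```swift
import MetalKit
import CoreImage

// Sketch: an alternative draw(in:) that keeps the depth-masked blur on the GPU.
func draw(in view: MTKView) {
    guard let drawable = view.currentDrawable,
          let commandBuffer = commandQueue.makeCommandBuffer() else { return }

    let invertedMask = mask.applyingFilter("CIColorInvert")
    let output = image.applyingFilter("CIMaskedVariableBlur",
                                      withInputParameters: ["inputMask": invertedMask,
                                                            "inputRadius": blurRadius])

    // Core Image encodes its work onto the same command buffer,
    // so the whole masked-blur pipeline runs on the GPU.
    ciContext.render(output,
                     to: drawable.texture,
                     commandBuffer: commandBuffer,
                     bounds: output.extent,
                     colorSpace: CGColorSpaceCreateDeviceRGB())

    commandBuffer.present(drawable)
    commandBuffer.commit()
}
```

The masked blur itself still costs more than a plain Gaussian, but avoiding the CPU round-trip through createCGImage/UIImage is where most of the time in the original blur(image:mask:) goes.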

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution source: Solution 1 by Amber K