Numpythonic way of applying a convolution kernel
I would like to know whether there is a way to apply the kernel using vectorized NumPy operations instead of explicit loops.
The asymptotic complexity cannot be reduced, since applying a K x K kernel to an N x M image is inherently O(K² · N · M). Still, eliminating the Python for loops should give a constant-factor speedup.
```python
def apply_gaussian_kernel(img, kernel_size, max_percentage=0.2):
    """Application of the gaussian kernel to an RGB image.

    max_percentage : percentage of the maximum value of any pixel on any
        channel, used to define the standard deviation
    """
    stdev = np.max(img) * max_percentage
    N, M, RGB = img.shape
    kernel = gaussian_kernel(kernel_size)
    padding = kernel.shape[0] // 2  # assuming an odd-sized square kernel
    img = apply_padding(img, padding)
    output = np.zeros(shape=(N, M, RGB))
    for n in range(N):
        for m in range(M):
            for rgb in range(RGB):
                nk = n + padding
                mk = m + padding
                result = np.sum(kernel * img[nk - padding:nk + padding + 1,
                                             mk - padding:mk + padding + 1,
                                             rgb])
                output[n, m, rgb] = result
    return output
```
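The three nested loops can be replaced by a single `einsum` contraction over a windowed view of the image. Below is a minimal sketch, assuming NumPy >= 1.20 (for `sliding_window_view`) and an image that has already been padded as in the code above; like the loop version, it computes a correlation rather than a flipped-kernel convolution, which coincides with true convolution for a symmetric Gaussian kernel:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def apply_kernel_vectorized(img, kernel):
    """Apply a K x K kernel to every channel of an H x W x C image.

    Assumes the image is already zero-padded (as by apply_padding above),
    so the output has shape (H-K+1, W-K+1, C).
    """
    # windows has shape (H-K+1, W-K+1, C, K, K): one K x K patch per
    # output pixel and channel. It is a strided view, not a copy.
    windows = sliding_window_view(img, kernel.shape, axis=(0, 1))
    # Multiply every patch by the kernel and sum over the patch axes;
    # this single contraction replaces the three Python loops.
    return np.einsum('nmcij,ij->nmc', windows, kernel)
```

The windowed view costs no extra memory (it is a stride trick) and the `einsum` contraction runs in compiled code, so the O(K² · N · M) work remains but the Python-level loop overhead disappears. If SciPy is available, `scipy.ndimage.convolve` or `scipy.signal.fftconvolve` are further options.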
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
