How can I dynamically adjust the frame size and the displayed frame region in OpenCV camera capture?

I am trying to build a model that dynamically adjusts the display region of an OpenCV camera capture according to its detections. I found methods for resizing the frame and changing the resolution, but what if I want to focus on a particular region of the entire capture? How can I do that?

I tried the cv2.resize() method and the cap.set() method, which change the frame size and the capture resolution respectively, but I could not make the feed focus on a particular region of the entire captured frame.
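For reference, a minimal sketch of the two approaches mentioned above (the camera index and resolution values are placeholders; the driver may ignore unsupported resolutions):

import cv2

cap = cv2.VideoCapture(0)

# Request a different capture resolution from the camera driver
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

ret, frame = cap.read()
if ret:
    # Rescale the whole frame; this changes its size but does not select a sub-region
    small = cv2.resize(frame, (640, 360))
    cv2.imshow('resized', small)
    cv2.waitKey(0)

cap.release()
cv2.destroyAllWindows()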



Solution 1:[1]

If I've got your idea correct, you want to crop a part of an image with coordinates based on your detection. OpenCV represents images as NumPy arrays, so with an example image:

import cv2
import matplotlib.pyplot as plt


# Load the example image (OpenCV reads it in BGR channel order)
img = cv2.imread('/content/drive/MyDrive/1.png')
# Convert to RGB so matplotlib displays the colours correctly
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)

# (height, width, channels)
print(img.shape)


Then you just index into the array to get the cropped part:

# Rows 250:450 and columns 250:450 (y range first, then x range)
plt.imshow(img[250:450, 250:450])

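To tie this back to the original question, here is a minimal sketch of applying the same indexing to a live capture, cropping each frame to a region supplied by a detector. get_detection_box is a hypothetical placeholder for whatever model produces an (x, y, w, h) box:

import cv2

cap = cv2.VideoCapture(0)

def get_detection_box(frame):
    # Placeholder: return the bounding box (x, y, w, h) from your detector.
    # Here it just returns a fixed central region as a stand-in.
    h, w = frame.shape[:2]
    return w // 4, h // 4, w // 2, h // 2

while True:
    ret, frame = cap.read()
    if not ret:
        break

    x, y, w, h = get_detection_box(frame)
    # Clamp the top-left corner so the slice never goes out of bounds
    x, y = max(x, 0), max(y, 0)
    roi = frame[y:y + h, x:x + w]

    # Display only the focused region instead of the full frame
    cv2.imshow('focused region', roi)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()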

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: K0mp0t