Why does PiCamera zoom crop the sensor?
I am trying to create an ROI (Region of Interest) function using PiCamera. In an attempt to understand how the "zoom" method works, I tried to recreate a full image from a sequence of ROI images. This is my code:
from picamera.camera import PiCamera
from picamera.array import PiRGBArray
import time
import cv2
FULL_H = 1920
FULL_W = 2560
n_rows = 3
n_cols = 4
cam = PiCamera(0)
cam.framerate = 32
cam.awb_mode = 'fluorescent'
cam.shutter_speed = 3200
cam.resolution = FULL_W, FULL_H
cam.sensor_mode = 2
frame = PiRGBArray(cam)
# Take full resolution picture
cam.capture(frame, format="bgr", use_video_port=False)
cv2.imwrite("full_img.png", frame.array)
# Set the resolution to ROI resolution and zoom to first section
cam.resolution = int(1 / n_cols * FULL_W), int(1 / n_rows * FULL_H)
cam.zoom = (0.0, 0.0, 1 / n_cols, 1 / n_rows)
frame = PiRGBArray(cam)
# Camera init time
time.sleep(2)
# Loop over rows and columns and take ROI images
rows = []
for i in range(n_rows):
    row = []
    for j in range(n_cols):
        print("Zooming to: {}".format((j / n_cols, i / n_rows, 1 / n_cols, 1 / n_rows)))
        cam.zoom = (j / n_cols, i / n_rows, 1 / n_cols, 1 / n_rows)  # zoom into correct part
        time.sleep(0.1)  # Give camera time to zoom
        frame.truncate(0)
        cam.capture(frame, format="bgr", use_video_port=True)  # Take ROI picture
        row.append(frame.array)
    rows.append(row)
# Concatenate the columns into rows and then the rows into the full image
full_rows = []
for row in rows:
    full_rows.append(cv2.hconcat(row))
full_img = cv2.vconcat(full_rows)
cv2.imwrite("full_img_from_rois.png", full_img) # Write the assembled image to file
cam.close()
This is the output:
Zooming to: (0.0, 0.0, 0.25, 0.3333333333333333)
Zooming to: (0.25, 0.0, 0.25, 0.3333333333333333)
Zooming to: (0.5, 0.0, 0.25, 0.3333333333333333)
Zooming to: (0.75, 0.0, 0.25, 0.3333333333333333)
Zooming to: (0.0, 0.3333333333333333, 0.25, 0.3333333333333333)
Zooming to: (0.25, 0.3333333333333333, 0.25, 0.3333333333333333)
Zooming to: (0.5, 0.3333333333333333, 0.25, 0.3333333333333333)
Zooming to: (0.75, 0.3333333333333333, 0.25, 0.3333333333333333)
Zooming to: (0.0, 0.6666666666666666, 0.25, 0.3333333333333333)
Zooming to: (0.25, 0.6666666666666666, 0.25, 0.3333333333333333)
Zooming to: (0.5, 0.6666666666666666, 0.25, 0.3333333333333333)
Zooming to: (0.75, 0.6666666666666666, 0.25, 0.3333333333333333)
I split the full image into 12 square sections (3 rows, 4 columns), which results in twelve 640x640 images. I then concatenate these images and try to recreate the full 2560x1920 image.
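For clarity, here is the mapping I am assuming between each zoom tuple and a pixel window of the full 2560x1920 frame; this assumption about how the zoom rectangle relates to the full frame may well be exactly what is wrong:

import itertools

FULL_W, FULL_H = 2560, 1920
n_rows, n_cols = 3, 4

for i, j in itertools.product(range(n_rows), range(n_cols)):
    x, y, w, h = j / n_cols, i / n_rows, 1 / n_cols, 1 / n_rows
    # Pixel window I expect the zoom rectangle to select from the full frame
    px, py = int(x * FULL_W), int(y * FULL_H)
    pw, ph = int(w * FULL_W), int(h * FULL_H)
    print("zoom {} -> x: {}..{}, y: {}..{}".format((x, y, w, h), px, px + pw, py, py + ph))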
These are the images I get (The images are too big to upload directly, sorry):
The Full image - https://imgur.com/QXIMGXJ
The assembled image - https://imgur.com/EtURnML
Hopefully you can see that the assembled image seems to be "squished" and also does not cover the full range of the sensor (some numbers are missing from both sides). The white balance in some of the pieces seems off too, which might be a clue to the solution, but I can't figure out why that is.
Why does the zoom function act like that? Could it be fixed somehow? Why is the white balance acting weirdly in the lower parts of the image?
In case this is unavoidable behavior of the "zoom" method, is there a way I can get the full image to show up the same way as the assembled image (cropped and squished as it may be)? This is important since I would like to compare the ROI images to sections of the full image, and I would like them to look the same.
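For context, this is roughly the comparison I want to end up with. It is only a sketch: the roi_{i}_{j}.png file names are hypothetical and assume I save each ROI capture from the loop above, and the mean absolute difference is just a placeholder metric:

import cv2

n_rows, n_cols = 3, 4
full_img = cv2.imread("full_img.png")          # the 2560x1920 capture from above
tile_h = full_img.shape[0] // n_rows           # 640
tile_w = full_img.shape[1] // n_cols           # 640

for i in range(n_rows):
    for j in range(n_cols):
        # Hypothetical file name for the saved ROI capture of row i, column j
        roi = cv2.imread("roi_{}_{}.png".format(i, j))
        tile = full_img[i * tile_h:(i + 1) * tile_h, j * tile_w:(j + 1) * tile_w]
        if roi.shape != tile.shape:
            roi = cv2.resize(roi, (tile.shape[1], tile.shape[0]))
        diff = cv2.absdiff(roi, tile).mean()   # mean absolute difference per tile
        print("tile ({}, {}): mean abs diff = {:.1f}".format(i, j, diff))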
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow