Compute Fundamental Matrix of 2 Calibrated Cameras
I have 2 calibrated cameras, (R1, T1, K1) and (R2, T2, K2), where R is a 3x3 rotation matrix to world coordinates, T is a 3x1 translation vector to world coordinates, and K is a 3x3 intrinsic matrix. I want to compute the fundamental matrix F that maps a point in Camera 1 to a line (epiline) in Camera 2. Here is what I do:
import numpy as np

def get_fundamental_matrix(R1, T1, K1, R2, T2, K2):
    # compute transformation matrices from world coordinates to each camera system
    P1 = np.eye(4)
    P1[:3, :3] = R1
    P1[:3, 3] = T1
    P2 = np.eye(4)
    P2[:3, :3] = R2
    P2[:3, 3] = T2
    # compute transformation matrix from camera 2 to camera 1
    P = P1 @ np.linalg.inv(P2)
    R = P[:3, :3]
    T = P[:3, 3]

    def skew(x):
        x = x.flatten()
        return np.array([[    0, -x[2],  x[1]],
                         [ x[2],     0, -x[0]],
                         [-x[1],  x[0],     0]])

    # essential matrix
    E = skew(T) @ R
    F = np.linalg.inv(K1).T @ E @ np.linalg.inv(K2)
    F = F / F[2, 2]
    return F
However, the resulting F is not the same as the F I get from OpenCV's 8-point method. What did I do wrong?
Solution 1:[1]
Actually, there is no issue with this code. The only thing that needs to be added here is handling of lens distortion. Once distortion is accounted for, the epilines become more accurate.
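As a sketch of what "accounting for distortion" means, the snippet below inverts the standard two-coefficient radial model by fixed-point iteration in normalized coordinates. The function name `undistort_point` and the coefficients k1, k2 are illustrative choices, not part of the asker's code; in practice OpenCV's cv2.undistortPoints does this for you.

```python
import numpy as np

def undistort_point(p, K, k1, k2, iters=20):
    # Map a distorted pixel p back to its undistorted position by inverting
    # the radial model x_d = x_u * (1 + k1*r^2 + k2*r^4), where r is the
    # radius of the undistorted point in normalized camera coordinates.
    xd = np.linalg.solve(K, np.array([p[0], p[1], 1.0]))[:2]  # normalized, distorted
    xu = xd.copy()
    for _ in range(iters):
        r2 = xu @ xu                                # current radius estimate
        xu = xd / (1.0 + k1 * r2 + k2 * r2 ** 2)    # fixed-point update
    return (K @ np.array([xu[0], xu[1], 1.0]))[:2]  # back to pixel coordinates
```

Undistorting the matched points this way (or with cv2.undistortPoints) before evaluating the epipolar constraint is what makes the analytic F agree better with point correspondences from a real lens.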
A better solution is to calculate the fundamental matrix directly from the two projection matrices. You can refer to the OpenCV library for a function that does this.
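That approach can be sketched in plain NumPy using the standard formula F = [e2]_x P2 P1+, where P1+ is the pseudo-inverse of the first 3x4 projection matrix and e2 = P2 C1 is the epipole (C1 being the camera-1 center, the null vector of P1). The function name `fundamental_from_projections` is chosen here for illustration; it is not the asker's code or a specific OpenCV signature.

```python
import numpy as np

def skew(x):
    # cross-product matrix: skew(a) @ b == np.cross(a, b)
    return np.array([[    0, -x[2],  x[1]],
                     [ x[2],     0, -x[0]],
                     [-x[1],  x[0],   0.0]])

def fundamental_from_projections(P1, P2):
    # F such that x2^T F x1 = 0, from two 3x4 projection matrices.
    _, _, Vt = np.linalg.svd(P1)
    C1 = Vt[-1]                          # camera-1 center (right null vector of P1)
    e2 = P2 @ C1                         # epipole in image 2
    F = skew(e2) @ P2 @ np.linalg.pinv(P1)
    return F / F[2, 2]
```

Note the convention: this F satisfies x2^T F x1 = 0, so the epiline in image 2 for a point x1 in image 1 is l2 = F x1, which matches what the question asks for.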
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow