How to determine a real-world translation estimate from the unit vector returned by recoverPose
I am very desperate for some help on this problem: I am implementing a monocular VSLAM system and have got as far as matching features and descriptors between images, estimating the essential matrix, and using the recoverPose function to determine the rotation and translation between the images. However, the translation returned is a unit vector, i.e. it is not expressed in mm, cm, m, etc., which is what I want.
Given that I have no external sensors that can be used, but I do have the camera's intrinsic matrix and a list of tracked features and keypoints, how can I get the actual distance moved?
(Not that it's integral at all, but for the source code: https://github.com/daleksla/salih_slam/blob/master/src/pose.cpp)
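For context, here is a minimal sketch of the pipeline described in the question, assuming OpenCV in C++ (the names pts1, pts2 and K are illustrative, not taken from the linked repository). It shows that the translation returned by recoverPose always has unit norm:

```cpp
// A minimal sketch (not the asker's actual code) of the pipeline described
// above, assuming OpenCV. pts1/pts2 are matched pixel coordinates in the two
// images and K is the camera intrinsic matrix.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <iostream>
#include <vector>

void relativePose(const std::vector<cv::Point2f>& pts1,
                  const std::vector<cv::Point2f>& pts2,
                  const cv::Mat& K)
{
    cv::Mat inlierMask;
    // Essential matrix from the matched features (RANSAC for robustness)
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, inlierMask);

    cv::Mat R, t;
    // Decompose E and select the physically valid (R, t) via the cheirality check
    cv::recoverPose(E, pts1, pts2, K, R, t, inlierMask);

    // t is only a direction: its norm is 1 (up to numerical noise), so the
    // metric length of the camera baseline cannot be read off from it.
    std::cout << "||t|| = " << cv::norm(t) << std::endl;
}
```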
Solution 1:[1]
Monocular SLAM provides no scale information, so you cannot recover a metric translation without additional knowledge about the scene. Examples of such knowledge: the real size of an object seen in the scene, or the known distance between identified points.
In some applications this is a "feature, not a bug". For example, in movie visual effects it's what allows one to composite scale-free synthetic CG on top of the images.
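To make the answer's second example concrete, here is a hedged sketch, assuming OpenCV in C++, of fixing the scale from one known real-world distance between two identified points: triangulate the two points using the unit-baseline pose, compare the reconstructed distance with the known metric distance, and use the ratio as the scale factor. The names (e.g. knownDistanceMetres) are illustrative assumptions, not part of the original question or answer.

```cpp
// Hedged sketch: fixing the scale from one known real-world distance between
// two tracked points. knownDistanceMetres and the point names are illustrative
// assumptions; K, R, t are assumed to be CV_64F, as returned by OpenCV.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

double estimateScale(const cv::Mat& K, const cv::Mat& R, const cv::Mat& t,
                     const cv::Point2f& p1a, const cv::Point2f& p1b, // the two points in image 1
                     const cv::Point2f& p2a, const cv::Point2f& p2b, // the same points in image 2
                     double knownDistanceMetres)
{
    // Projection matrices: camera 1 at the origin, camera 2 at the unit-length pose
    cv::Mat P1 = K * cv::Mat::eye(3, 4, CV_64F);
    cv::Mat Rt;
    cv::hconcat(R, t, Rt);
    cv::Mat P2 = K * Rt;

    // Triangulate both points in the arbitrary-scale (unit-baseline) reconstruction
    std::vector<cv::Point2f> img1{p1a, p1b}, img2{p2a, p2b};
    cv::Mat pts4D;
    cv::triangulatePoints(P1, P2, img1, img2, pts4D); // 4x2 result, CV_32F here

    // De-homogenise and measure the distance between the two reconstructed points
    cv::Mat A = pts4D.col(0) / pts4D.at<float>(3, 0);
    cv::Mat B = pts4D.col(1) / pts4D.at<float>(3, 1);
    cv::Mat diff = A.rowRange(0, 3) - B.rowRange(0, 3);
    double reconstructedDistance = cv::norm(diff);

    // Ratio that rescales the unit-baseline reconstruction to metres
    return knownDistanceMetres / reconstructedDistance;
}
```

Multiplying the unit translation t (and any triangulated map points) by the returned factor would express the reconstruction in metric units; the accuracy is limited by how precisely the two reference points are matched and triangulated.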
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Francesco Callari |
