OpenCV2 Thin Plate Splines applyTransformation not working?
I am using the Python OpenCV2 implementation of the thin plate spline transformer and running into some issues. When I call warpImage, the image gets warped properly; however, when I call applyTransformation on some manually entered points after estimateTransformation, the points do not map correctly. Instead, all the points end up mapping to the exact same location. Any help would be appreciated! I've attached my code below:
splines = cv2.createThinPlateSplineShapeTransformer()
splines.estimateTransformation(reference_coordinate_arr, image_marks_coordinates_arr, matches)
warpedimage = splines.warpImage(image)  # image warps fine
moved_barcodes = splines.applyTransformation(image_bar_coordinates_arr)[0]  # these coordinates all map to the same location
Solution 1:[1]
Thank you so much for asking this question. I had been looking for spline warping for a long time and never found the thin plate spline transformer in OpenCV.
For me, it works in C++. I've provided some sample points which, as far as I know, are not all collinear.
#include <opencv2/shape/shape_transformer.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("C:/data/StackOverflow/Lenna.png");
    auto tps = cv::createThinPlateSplineShapeTransformer();

    std::vector<cv::Point2f> sourcePoints, targetPoints;
    sourcePoints.push_back(cv::Point2f(0, 0));
    targetPoints.push_back(cv::Point2f(0, 0));
    sourcePoints.push_back(cv::Point2f(0.5f * img.cols, 0));
    targetPoints.push_back(cv::Point2f(0.5f * img.cols, 0.25f * img.rows));
    sourcePoints.push_back(cv::Point2f(img.cols, 0));
    targetPoints.push_back(cv::Point2f(img.cols, 0));
    sourcePoints.push_back(cv::Point2f(img.cols, 0.5f * img.rows));
    targetPoints.push_back(cv::Point2f(0.75f * img.cols, 0.5f * img.rows));
    sourcePoints.push_back(cv::Point2f(img.cols, img.rows));
    targetPoints.push_back(cv::Point2f(img.cols, img.rows));
    sourcePoints.push_back(cv::Point2f(0.5f * img.cols, img.rows));
    targetPoints.push_back(cv::Point2f(0.5f * img.cols, 0.75f * img.rows));
    sourcePoints.push_back(cv::Point2f(0, img.rows));
    targetPoints.push_back(cv::Point2f(0, img.rows));
    sourcePoints.push_back(cv::Point2f(0, 0.5f * img.rows / 2)); // note: y is unintentionally halved twice here (0.5 and /2)
    targetPoints.push_back(cv::Point2f(0.25f * img.cols, 0.5f * img.rows));

    std::vector<cv::DMatch> matches;
    for (unsigned int i = 0; i < sourcePoints.size(); i++)
        matches.push_back(cv::DMatch(i, i, 0));

    tps->estimateTransformation(targetPoints, sourcePoints, matches); // right image warping from source to target, but wrong point transformation
    //tps->estimateTransformation(sourcePoints, targetPoints, matches); // wrong image warping, but right point transformation from source to target

    std::vector<cv::Point2f> transPoints;
    tps->applyTransformation(sourcePoints, transPoints);

    std::cout << "sourcePoints = " << std::endl << " " << sourcePoints << std::endl << std::endl;
    std::cout << "targetPoints = " << std::endl << " " << targetPoints << std::endl << std::endl;
    std::cout << "transPos = " << std::endl << " " << transPoints << std::endl << std::endl;

    cv::Mat dst;
    tps->warpImage(img, dst);
    cv::imshow("dst", dst);
    cv::waitKey(0);
    return 0;
}
This gives the following result:
sourcePoints =
[0, 0;
128, 0;
256, 0;
256, 256;
256, 512;
128, 512;
0, 512;
0, 128]
targetPoints =
[0, 0;
128, 128;
256, 0;
192, 256;
256, 512;
128, 384;
0, 512;
64, 256]
transPos =
[0.0001950264, -5.7220459e-05;
128, -27.710777;
255.99991, -0.00023269653;
337.67929, 279.34125;
255.99979, 512;
127.99988, 570.5177;
-0.00029873848, 511.99994;
-45.164845, -0.20605469]
So it is transforming the points, but not in the right direction.
It gives the right point values when source and destination are switched in the estimateTransformation call (but then the image warping is wrong):
tps->estimateTransformation(sourcePoints, targetPoints, matches);
sourcePoints =
[0, 0;
128, 0;
256, 0;
256, 256;
256, 512;
128, 512;
0, 512;
0, 128]
targetPoints =
[0, 0;
128, 128;
256, 0;
192, 256;
256, 512;
128, 384;
0, 512;
64, 256]
transPos =
[-4.7683716e-05, -0.00067138672;
128.00008, 127.99954;
256.00012, 0;
192.00012, 256.00049;
255.99988, 512.00049;
127.9995, 383.99976;
-0.00016021729, 512.00049;
64.000031, 255.99982]
Input: (image omitted)
Output: (image omitted)
I just don't know why I had to switch source and target points in the estimateTransformation call; initially it showed the opposite behaviour from what I expected.
The source code base is taken from: https://github.com/opencv/opencv/issues/7084
Solution 2:[2]
I noticed my own error. The array was accidentally set to np.int32 when it should have been np.float32. Changing it to np.float32 fixed everything. Thank you all for your feedback!
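In terms of the question's snippet, the fix might look like the sketch below (the original array contents aren't shown, so the coordinates here are hypothetical). Arrays built from pixel indices default to an integer dtype, which applyTransformation silently mishandles:

```python
import numpy as np

# Hypothetical barcode coordinates as they might have been built:
# integer literals give an integer dtype, which is the bug.
image_bar_coordinates_arr = np.array([[12, 34], [56, 78]]).reshape(1, -1, 2)

# The fix: convert to float32 before passing to applyTransformation.
image_bar_coordinates_arr = image_bar_coordinates_arr.astype(np.float32)
```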
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | |
| Solution 2 | ksuresh |


