
What I want to do is change the matrix in image1 from the one on the left to the one on the right. As far as I know, this cannot be done with the basic transformation methods. How can I change the image from an irregular quadrilateral into a rectangle?

image1. change matrix from left to right

The real problem is that I have the shape shown in the image below: I need to change this irregular quadrilateral into a regular rectangle.

image2.


This might help you: [link](http://stackoverflow.com/questions/7838487/executing-cvwarpperspective-for-a-fake-deskewing-on-a-set-of-cvpoint) – s1h


@s1h Thank you very much. The [link](http://stackoverflow.com/questions/7838487/executing-cvwarpperspective-for-a-fake-deskewing-on-a-set-of-cvpoint) is the answer I needed. –

Answers

So, the first issue is the order of the corners. They must be in the same order in both vectors. That is, if in the first vector your order is (top-left, bottom-left, bottom-right, top-right), they must be in that same order in the other vector.
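
For illustration, here is a minimal sketch (my own helper, not part of the original answer) of one common way to force a fixed top-left, top-right, bottom-right, bottom-left order on four detected corners, assuming the quadrilateral is only mildly rotated:

#include <opencv2/opencv.hpp> 
#include <vector> 

// Hypothetical helper: order four corners as TL, TR, BR, BL. 
// Assumes the shape is only mildly rotated, so the top-left corner has the 
// smallest x+y, the bottom-right the largest x+y, the top-right the smallest 
// y-x, and the bottom-left the largest y-x. 
static std::vector<cv::Point2f> orderCorners(const std::vector<cv::Point2f>& p) 
{ 
    CV_Assert(p.size() == 4); 
    std::vector<cv::Point2f> out(4, p[0]); 
    for (const cv::Point2f& c : p) { 
        if (c.x + c.y < out[0].x + out[0].y) out[0] = c;  // top-left 
        if (c.y - c.x < out[1].y - out[1].x) out[1] = c;  // top-right 
        if (c.x + c.y > out[2].x + out[2].y) out[2] = c;  // bottom-right 
        if (c.y - c.x > out[3].y - out[3].x) out[3] = c;  // bottom-left 
    } 
    return out; 
} 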

Second, to have the resulting image contain only the object of interest, you must set its width and height to be the same as the resulting rectangle's width and height. Don't worry, the src and dst images in warpPerspective can be of different sizes.

Third, a performance concern. While your approach is perfectly accurate, since you are only applying affine transformations (rotation, resizing, deskewing), mathematically you can use the affine counterparts of your functions. They are much faster:

getAffineTransform() and warpAffine().

Important note: getAffineTransform() needs and expects ONLY 3 points, and the resulting matrix is 2×3 instead of 3×3.
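
To make the contrast concrete, here is a minimal sketch of both calls, reusing the same paper corners as the code further below; the output width and height W and H are assumptions of mine for illustration, not values from the original answer:

#include <opencv2/opencv.hpp> 
#include <iostream> 

int main() 
{ 
    const float W = 1600.f, H = 2200.f;  // assumed output size, for illustration only 

    // Affine: exactly 3 point pairs -> 2x3 matrix, consumed by warpAffine(). 
    cv::Point2f srcA[3] = { {408, 69}, {1912, 291}, {72, 2186} }; 
    cv::Point2f dstA[3] = { {0, 0}, {W - 1, 0}, {0, H - 1} }; 
    cv::Mat A = cv::getAffineTransform(srcA, dstA);        // 2x3 

    // Perspective: exactly 4 point pairs -> 3x3 matrix, consumed by warpPerspective(). 
    cv::Point2f srcP[4] = { {408, 69}, {1912, 291}, {1584, 2426}, {72, 2186} }; 
    cv::Point2f dstP[4] = { {0, 0}, {W - 1, 0}, {W - 1, H - 1}, {0, H - 1} }; 
    cv::Mat P = cv::getPerspectiveTransform(srcP, dstP);   // 3x3 

    std::cout << "affine:\n" << A << "\nperspective:\n" << P << std::endl; 
    return 0; 
} 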

How to make the resulting image have a different size than the input: instead of

cv::warpPerspective(src, dst, dst.size(), ...); 
use 

cv::Mat rotated; 
cv::Size size(box.boundingRect().width, box.boundingRect().height); 
cv::warpPerspective(src, dst, size, ...); 

So there you go, and your programming assignment is over.

#include <opencv2/opencv.hpp> 
#include <iostream> 
#include <vector> 

using namespace cv; 
using namespace std; 

int main() 
{ 
    cv::Mat src = cv::imread("r8fmh.jpg", 1); 


    // After some magical procedure, these are the detected points that represent 
    // the corners of the paper in the picture: 
    // [408, 69] [72, 2186] [1584, 2426] [1912, 291] 

    vector<Point> not_a_rect_shape; 
    not_a_rect_shape.push_back(Point(408, 69)); 
    not_a_rect_shape.push_back(Point(72, 2186)); 
    not_a_rect_shape.push_back(Point(1584, 2426)); 
    not_a_rect_shape.push_back(Point(1912, 291)); 

    // For debugging purposes, draw green lines connecting those points 
    // and save it on disk 
    const Point* point = &not_a_rect_shape[0]; 
    int n = (int)not_a_rect_shape.size(); 
    Mat draw = src.clone(); 
    polylines(draw, &point, &n, 1, true, Scalar(0, 255, 0), 3, CV_AA); 
    imwrite("draw.jpg", draw); 

    // Assemble a rotated rectangle out of that info 
    RotatedRect box = minAreaRect(cv::Mat(not_a_rect_shape)); 
    std::cout << "Rotated box set to (" << box.boundingRect().x << "," << box.boundingRect().y << ") " << box.size.width << "x" << box.size.height << std::endl; 

    Point2f pts[4]; 

    box.points(pts); 

    // Does the order of the points matter? I assume they do NOT. 
    // But if it does, is there an easy way to identify and order 
    // them as topLeft, topRight, bottomRight, bottomLeft? 

    cv::Point2f src_vertices[3]; 
    src_vertices[0] = pts[0]; 
    src_vertices[1] = pts[1]; 
    src_vertices[2] = pts[3]; 
    //src_vertices[3] = not_a_rect_shape[3]; 

    Point2f dst_vertices[3]; 
    dst_vertices[0] = Point(0, 0); 
    dst_vertices[1] = Point(box.boundingRect().width-1, 0); 
    dst_vertices[2] = Point(0, box.boundingRect().height-1); 

    Mat warpAffineMatrix = getAffineTransform(src_vertices, dst_vertices); 

    cv::Mat rotated; 
    cv::Size size(box.boundingRect().width, box.boundingRect().height); 
    warpAffine(src, rotated, warpAffineMatrix, size, INTER_LINEAR, BORDER_CONSTANT); 

    imwrite("rotated.jpg", rotated); 
} 
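
Finally, since the question is about an irregular quadrilateral rather than a rotated rectangle, the fully general tool is a perspective warp; the affine path above only covers rotation, scaling, and shear. Here is a hedged sketch of the 4-point version, reusing the same corner coordinates (the output size here is my own choice, derived from the side lengths, not something the original answer specifies):

#include <opencv2/opencv.hpp> 
#include <algorithm> 
#include <cmath> 
#include <vector> 

int main() 
{ 
    cv::Mat src = cv::imread("r8fmh.jpg", 1); 
    if (src.empty()) return 1; 

    // Same corners as above, ordered top-left, top-right, bottom-right, bottom-left. 
    std::vector<cv::Point2f> quad = { {408, 69}, {1912, 291}, {1584, 2426}, {72, 2186} }; 

    // Pick the output size from the longer of each pair of opposite sides 
    // (an assumption for this sketch, not taken from the original answer). 
    auto dist = [](cv::Point2f a, cv::Point2f b) { return std::hypot(a.x - b.x, a.y - b.y); }; 
    float w = std::max(dist(quad[0], quad[1]), dist(quad[3], quad[2])); 
    float h = std::max(dist(quad[0], quad[3]), dist(quad[1], quad[2])); 

    std::vector<cv::Point2f> rect = { {0, 0}, {w - 1, 0}, {w - 1, h - 1}, {0, h - 1} }; 

    cv::Mat M = cv::getPerspectiveTransform(quad, rect); 
    cv::Mat straightened; 
    cv::warpPerspective(src, straightened, M, cv::Size((int)w, (int)h)); 

    cv::imwrite("straightened.jpg", straightened); 
    return 0; 
} 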

See http://stackoverflow.com/a/37381666/5294258 – sturkmen