
How to use the OpenCV cvProjectPoints2 function

I am having some trouble with the cvProjectPoints2 function. Here is the function overview from O'Reilly's "Learning OpenCV" book:

void cvProjectPoints2(
    const CvMat* object_points,
    const CvMat* rotation_vector,
    const CvMat* translation_vector,
    const CvMat* intrinsic_matrix,
    const CvMat* distortion_coeffs,
    CvMat* image_points
);

The first argument, object_points, is the list of points you want projected; it is just an N-by-3 matrix containing the point locations. You can give these in the object's own local coordinate system and then provide the 3-by-1 matrices rotation_vector and translation_vector to relate the two coordinates. If, in your particular context, it is easier to work directly in camera coordinates, then you can just give object_points in that system and set both rotation_vector and translation_vector to contain 0s.

The intrinsic_matrix and distortion_coeffs are just the camera intrinsic information and the distortion coefficients returned by cvCalibrateCamera2(), discussed in Chapter 11. The image_points argument is an N-by-2 matrix into which the computed results will be written.
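
As a quick sanity check of that description (my own minimal example, not code from the book), here is the "already in camera coordinates" case with rotation_vector and translation_vector set to zero; I use two points rather than one because of the N = 1 crash mentioned below:

#include <cv.h>
#include <stdio.h>

int main() {
    /* two object points given directly in camera coordinates (N = 2, one point per row) */
    CvMat* object_points = cvCreateMat(2, 3, CV_32F);
    cvmSet(object_points, 0, 0, 0);  cvmSet(object_points, 0, 1, 0);  cvmSet(object_points, 0, 2, 100);
    cvmSet(object_points, 1, 0, 10); cvmSet(object_points, 1, 1, 20); cvmSet(object_points, 1, 2, 100);

    /* zero rotation and translation: the points are already in the camera frame */
    CvMat* rvec = cvCreateMat(3, 1, CV_32F);
    CvMat* tvec = cvCreateMat(3, 1, CV_32F);
    cvSetZero(rvec);
    cvSetZero(tvec);

    /* intrinsics of camera 0 from below, no distortion */
    CvMat* K = cvCreateMat(3, 3, CV_32F);
    cvSetZero(K);
    cvmSet(K, 0, 0, 1884.19); cvmSet(K, 0, 2, 513.7);
    cvmSet(K, 1, 1, 1887.49); cvmSet(K, 1, 2, 395.609);
    cvmSet(K, 2, 2, 1.0);
    CvMat* dist = cvCreateMat(1, 4, CV_32F);
    cvSetZero(dist);

    CvMat* image_points = cvCreateMat(2, 2, CV_32F);
    cvProjectPoints2(object_points, rvec, tvec, K, dist, image_points);

    /* the first point lies on the optical axis, so it should land at the principal point (cx, cy) */
    printf("%f %f\n", cvmGet(image_points, 0, 0), cvmGet(image_points, 0, 1));
    printf("%f %f\n", cvmGet(image_points, 1, 0), cvmGet(image_points, 1, 1));
    return 0;
}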

First of all, there seems to be a bug with the object_points array: if there is only one point, i.e. N = 1, the program crashes. Anyway, I have the intrinsic parameters and projection matrices of several cameras. The distortion coefficients are 0, i.e. there is no distortion. For simplicity, assume I have 2 cameras:

double intrinsic[2][3][3] = { 
//camera 0 
1884.190000, 0, 513.700000, 
0.0, 1887.490000, 395.609000, 
0.0, 0.0, 1.0, 
//camera 4 
1877.360000, 0.415492, 579.467000, 
0.0, 1882.430000, 409.612000, 
0.0, 0.0, 1.0 
}; 

double projection[2][3][4] = { 
//camera 0 
0.962107, -0.005824, 0.272486, -14.832727, 
0.004023, 0.999964, 0.007166, 0.093097, 
-0.272519, -0.005795, 0.962095, -0.005195, 
//camera 4 
1.000000, 0.000000, -0.000000, 0.000006, 
0.000000, 1.000000, -0.000000, 0.000001, 
-0.000000, -0.000000, 1.000000, -0.000003 
}; 

As far as I understand, this information is enough to project any point (x, y, z) into any camera's view. Here, in these x, y, z coordinates, the optical center of camera 4 is the origin of the world coordinates.
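
To be explicit about what I assume: each projection matrix above is of the form [R | t], with the rotation in the left 3-by-3 block and the translation in the last column (that is how my code below uses it), so a world point (X, Y, Z) should project to pixel coordinates as:

    [u, v, w]^T = intrinsic * (R * [X, Y, Z]^T + t)
    x = u / w,  y = v / w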

Here is my code:

#include <cv.h> 
#include <highgui.h> 
#include <cvaux.h> 
#include <cxcore.h> 
#include <stdio.h> 

double intrinsic[2][3][3] = { 
//0 
1884.190000, 0, 513.700000, 
0.0, 1887.490000, 395.609000, 
0.0, 0.0, 1.0, 
//4 
1877.360000, 0.415492, 579.467000, 
0.0, 1882.430000, 409.612000, 
0.0, 0.0, 1.0 
}; 

double projection[2][3][4] = { 
//0 
0.962107, -0.005824, 0.272486, -14.832727, 
0.004023, 0.999964, 0.007166, 0.093097, 
-0.272519, -0.005795, 0.962095, -0.005195, 
//4 
1.000000, 0.000000, -0.000000, 0.000006, 
0.000000, 1.000000, -0.000000, 0.000001, 
-0.000000, -0.000000, 1.000000, -0.000003 
}; 


int main() { 
    CvMat* camera_matrix[2];
    CvMat* rotation_matrix[2];
    CvMat* dist_coeffs[2]; 
    CvMat* translation[2]; 
    IplImage* image[2]; 
    image[0] = cvLoadImage("color-cam0-f000.bmp", 1); 
    image[1] = cvLoadImage("color-cam4-f000.bmp", 1); 
    CvSize image_size; 
    image_size = cvSize(image[0]->width, image[0]->height); 

    for (int m=0; m<2; m++) { 
     camera_matrix[m] = cvCreateMat(3, 3, CV_32F); 
     dist_coeffs[m] = cvCreateMat(1, 4, CV_32F); 
     rotation_matrix[m] = cvCreateMat(3, 3, CV_32F); 
     translation[m] = cvCreateMat(3, 1, CV_32F); 
    } 

    for (int m=0; m<2; m++) { 
     for (int i=0; i<3; i++) 
      for (int j=0; j<3; j++) { 
       cvmSet(camera_matrix[m],i,j, intrinsic[m][i][j]); 
       cvmSet(rotation_matrix[m],i,j, projection[m][i][j]); 
      } 
     for (int i=0; i<4; i++) 
      cvmSet(dist_coeffs[m], 0, i, 0); 
     for (int i=0; i<3; i++) 
      cvmSet(translation[m], i, 0, projection[m][i][3]); 
    } 

    CvMat* vector = cvCreateMat(3, 1, CV_32F);
    CvMat* object_points = cvCreateMat(10, 3, CV_32F); // only the first of the 10 rows is filled in
    cvmSet(object_points, 0, 0, 1000);
    cvmSet(object_points, 0, 1, 500);
    cvmSet(object_points, 0, 2, 100);

    CvMat* image_points = cvCreateMat(10, 2, CV_32F);
    int m = 0;
    cvRodrigues2(rotation_matrix[m], vector); // convert the 3x3 rotation matrix to a 3x1 rotation vector
    cvProjectPoints2(object_points, vector, translation[m], camera_matrix[m], dist_coeffs[m], image_points);
    printf("%f\n", cvmGet(image_points, 0, 0)); 
    printf("%f\n", cvmGet(image_points, 0, 1)); 
    return 0; 
} 

The images are 1024x768, and the visible range of z is known to be between 44 and 120. So the point should be visible in both cameras, right? But the results are completely wrong, even for m = 1. What am I doing wrong?


I don't have much time to look at your code, but what is the 0.415492 factor doing in the intrinsic matrix of camera 4? I would expect it to be 0.0. – yhw42 2010-03-04 18:43:30

Answers


Yes, cvProjectPoints2 is meant for projecting arrays of points. You can project a single point with simple matrix operations instead:

CvMat *pt = cvCreateMat(3, 1, CV_32FC1);      /* the 3D point in world coordinates */
CvMat *pt_rt = cvCreateMat(3, 1, CV_32FC1);   /* the point in camera coordinates */
CvMat *proj_pt = cvCreateMat(3, 1, CV_32FC1); /* homogeneous image coordinates */
cvMatMulAdd(rotMat, pt, translation, pt_rt);  /* pt_rt = R * pt + t */
cvMatMul(intrinsic, pt_rt, proj_pt);          /* proj_pt = K * pt_rt */
/* cvConvertPointsHomogeneous could be used instead of the manual division below */
float scale = (float)CV_MAT_ELEM(*proj_pt, float, 2, 0);
float x = CV_MAT_ELEM(*proj_pt, float, 0, 0)/scale;
float y = CV_MAT_ELEM(*proj_pt, float, 1, 0)/scale;
CvPoint2D32f img_pt = cvPoint2D32f(x, y);
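
For example, hooked up to the matrices from the question (a sketch assuming rotation_matrix[m], translation[m] and camera_matrix[m] have been filled as in the question's code, and using its test point), this becomes:

CvMat *pt = cvCreateMat(3, 1, CV_32FC1);
cvmSet(pt, 0, 0, 1000);
cvmSet(pt, 1, 0, 500);
cvmSet(pt, 2, 0, 100);
CvMat *pt_rt = cvCreateMat(3, 1, CV_32FC1);
CvMat *proj_pt = cvCreateMat(3, 1, CV_32FC1);
cvMatMulAdd(rotation_matrix[m], pt, translation[m], pt_rt); /* camera coordinates: R * X + t */
cvMatMul(camera_matrix[m], pt_rt, proj_pt);                 /* homogeneous pixel coordinates */
float scale = (float)CV_MAT_ELEM(*proj_pt, float, 2, 0);
printf("%f %f\n", CV_MAT_ELEM(*proj_pt, float, 0, 0)/scale,
                  CV_MAT_ELEM(*proj_pt, float, 1, 0)/scale);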