Rendering problem on Project Tango with OpenCV image processing

I am having a problem rendering the camera image after doing some processing on its YUV buffer. I am using the video-overlay JNI example and the OnFrameAvailable method. This is how I create a new frame buffer using cv::Mat:
cv::Mat frame((int) yuv_height_ + (int) (yuv_height_/2), (int) yuv_width_, CV_8UC1, (uchar *) yuv_temp_buffer_.data());
After processing, I copy frame.data back into yuv_temp_buffer_ so that it gets rendered to the texture: memcpy(&yuv_temp_buffer_[0], frame.data, yuv_size_);
This works fine. The problem starts when I try to execute the OpenCV method findChessboardCorners using the frame I created before. findChessboardCorners takes about 90 ms to execute (roughly 11 fps), yet the rendering is far slower than that: on screen it renders at about 0.5 fps. Here is the code of the OnFrameAvailable method:
void AugmentedRealityApp::OnFrameAvailable(const TangoImageBuffer* buffer) {
  if (yuv_drawable_ == NULL) {
    return;
  }

  if (yuv_drawable_->GetTextureId() == 0) {
    LOGE("AugmentedRealityApp::yuv texture id not valid");
    return;
  }

  if (buffer->format != TANGO_HAL_PIXEL_FORMAT_YCrCb_420_SP) {
    LOGE("AugmentedRealityApp::yuv texture format is not supported by this app");
    return;
  }

  // The memory needs to be allocated after we get the first frame because we
  // need to know the size of the image.
  if (!is_yuv_texture_available_) {
    yuv_width_ = buffer->width;
    yuv_height_ = buffer->height;
    uv_buffer_offset_ = yuv_width_ * yuv_height_;
    yuv_size_ = yuv_width_ * yuv_height_ + yuv_width_ * yuv_height_ / 2;

    // Reserve and resize the buffer size for RGB and YUV data.
    yuv_buffer_.resize(yuv_size_);
    yuv_temp_buffer_.resize(yuv_size_);
    rgb_buffer_.resize(yuv_width_ * yuv_height_ * 3);

    AllocateTexture(yuv_drawable_->GetTextureId(), yuv_width_, yuv_height_);
    is_yuv_texture_available_ = true;
  }

  std::lock_guard<std::mutex> lock(yuv_buffer_mutex_);
  memcpy(&yuv_temp_buffer_[0], buffer->data, yuv_size_);

  ///
  cv::Mat frame((int) yuv_height_ + (int) (yuv_height_ / 2), (int) yuv_width_,
                CV_8UC1, (uchar*) yuv_temp_buffer_.data());

  if (!stam.isCalibrated()) {
    Profiler profiler;
    profiler.startSampling();
    stam.initFromChessboard(frame, cv::Size(9, 6), 100);
    profiler.endSampling();
    profiler.print("initFromChessboard", -1);
  }
  ///

  memcpy(&yuv_temp_buffer_[0], frame.data, yuv_size_);
  swap_buffer_signal_ = true;
}
Here is the code of the initFromChessboard method:
bool STAM::initFromChessboard(const cv::Mat& image, const cv::Size& chessBoardSize, int squareSize)
{
  cv::Mat rvec = cv::Mat(cv::Size(3, 1), CV_64F);
  cv::Mat tvec = cv::Mat(cv::Size(3, 1), CV_64F);
  std::vector<cv::Point2d> imagePoints, imageBoardPoints;
  std::vector<cv::Point3d> boardPoints;

  for (int i = 0; i < chessBoardSize.height; i++) {
    for (int j = 0; j < chessBoardSize.width; j++) {
      boardPoints.push_back(cv::Point3d(j * squareSize, i * squareSize, 0.0));
    }
  }

  // Getting only the Y channel (many functions, like face detection and
  // alignment, only need the grayscale image). Wrapping the data via the
  // cv::Mat constructor instead of assigning gray.data avoids leaking the
  // buffer that the two-argument constructor would have allocated.
  cv::Mat gray(image.rows, image.cols, CV_8UC1, image.data);

  bool found = findChessboardCorners(gray, chessBoardSize, imagePoints,
                                     cv::CALIB_CB_FAST_CHECK);

#ifdef WINDOWS_VS
  printf("Number of chessboard points: %d\n", (int) imagePoints.size());
#elif ANDROID
  LOGE("Number of chessboard points: %d", (int) imagePoints.size());
#endif

  for (size_t i = 0; i < imagePoints.size(); i++) {
    cv::circle(image, imagePoints[i], 6, cv::Scalar(149, 43, 0), -1);
  }

  return found;  // the original fell off the end without returning a value
}
Has anyone had the same problem rendering to the texture after doing some processing on the YUV buffer? I tested on another device (not a Project Tango) using the camera2 API, and there the on-screen rendering keeps pace with the OpenCV processing itself. I would appreciate any help.
Thank you very much @bashbug!!! Your answer was really helpful. I was not aware of the tango_support library for manipulating frame buffers. Now it works fine! –
I'm glad it helped! Please accept my answer ;) – bashbug
Hi @bashbug, do you know how to change the resolution of the Tango camera? I mean, I currently get 1280*720 and I would like to get 640*360. –