I want to include an existing OpenCV application in a GUI created with Qt. I found some similar questions on Stack Overflow about embedding an existing application into a Qt GUI:

QT How to embed an application into QT widget

Run another executable in my Qt app

The problem is that I don't want to simply launch the OpenCV application, as I could with QProcess. The OpenCV application has a mouse listener, so when I click inside the window it should still call the OpenCV application's functions. In addition, I want to display the detected coordinates in a label of the Qt GUI, so there has to be some kind of interaction between the two.

I have read about the createWindowContainer function (http://blog.qt.io/blog/2013/02/19/introducing-qwidgetcreatewindowcontainer/), but since I am not very familiar with Qt, I am not sure whether it is the right choice and how to use it.
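For reference, a minimal sketch of how createWindowContainer is typically used. Note that QWidget::createWindowContainer and QWindow::fromWinId are Qt 5.1+ APIs, so they are not available in Qt 4.8.6, and the foreign window ID below is only a placeholder:

#include <QApplication>
#include <QWidget>
#include <QWindow>
#include <QVBoxLayout>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QWidget mainWindow;
    QVBoxLayout *layout = new QVBoxLayout(&mainWindow);

    WId foreignId = 0; // placeholder: the native window ID of the external application

    // Wrap the foreign native window so it can be laid out like a normal widget
    QWindow *foreignWindow = QWindow::fromWinId(foreignId);
    QWidget *container = QWidget::createWindowContainer(foreignWindow, &mainWindow);
    layout->addWidget(container);

    mainWindow.resize(800, 600);
    mainWindow.show();
    return app.exec();
}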

I am using Linux Mint 17.2, OpenCV version 3.1.0 and Qt version 4.8.6.

Thanks for your input.

Where is the problem with just using your cv code in a new project? – Micka

Then I would have to adapt it to the Qt interface. For example, when I make a mouse click on the image I would have to implement QMouseEvents and so on. If I just display the old OpenCV application inside a window, the mouse clicks are still handled inside my original application. –

Not sure whether the function still exists, but in the past you could call cvGetWindowHandle to get a WinAPI window handle. Maybe you can embed that in Qt. – Micka

Answers

I didn't really solve the problem in the way I had in mind at the beginning, but it is working now. If anybody has the same problem, maybe my solution can provide some ideas. And if you want to display video with Qt, or if you have problems with the OpenCV libraries, maybe I can help.

Here are some code snippets. They are not heavily commented, but I hope the concept is clear:

First of all, I have a label in the main window which I promoted to my CustomLabel type. The CustomLabel is my container for displaying the video and for reacting to my mouse input.
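A rough sketch of what the corresponding class declaration could look like, reconstructed from the snippets below (only the members that actually appear there are spelled out; everything else is assumed):

// customlabel.h -- sketch only, reconstructed from the snippets below
#include <QLabel>
#include <QTimer>
#include <QImage>
#include <QPainter>
#include <QPaintEvent>
#include <opencv2/opencv.hpp>

enum State { STATE_NO_STREAM, STATE_IDLE, STATE_DRAWING, STATE_TRACKING, STATE_LOST_POLE };

class CustomLabel : public QLabel
{
    Q_OBJECT
public:
    explicit CustomLabel(QWidget *parent = 0);

protected:
    void paintEvent(QPaintEvent *e);

private slots:
    void onOpenClick();
    void onWebcamBtnOpen();
    void onCloseVideoStream();
    void onTick();

private:
    void drawVideoFrame(QPainter &painter);

    QImage *currentImage;
    QTimer *myTimer;
    cv::VideoCapture *cap;
    State currentState;
    int tickrate_ms;
    double vid_fps;
    int video_width, video_height;
    // ... further members used below: calculatedCenter, oldCenter, NOF_corners,
    // termcrit, showPoints, focusPt, currentMousePos, xScale, yScale, ...
};

Its constructor: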

CustomLabel::CustomLabel(QWidget* parent) : QLabel(parent), currentImage(NULL),
    tickrate_ms(33), vid_fps(0), video_width(0), video_height(0), myTimer(NULL), cap(NULL)
{
    // init variables
    showPoints = true;
    calculatedCenter = cv::Point(0,0);
    oldCenter = cv::Point(0,0);
    currentState = STATE_NO_STREAM;
    NOF_corners = 30; // default init value
    termcrit = cv::TermCriteria(cv::TermCriteria::COUNT | cv::TermCriteria::EPS, 30, 0.01);
    // enable mouse tracking
    this->setMouseTracking(true);
    // connect signals with slots
    QObject::connect(getMainWindow(), SIGNAL(sendFileOpen()), this, SLOT(onOpenClick()));
    QObject::connect(getMainWindow(), SIGNAL(sendWebcamOpen()), this, SLOT(onWebcamBtnOpen()));
    QObject::connect(getMainWindow(), SIGNAL(closeVideoStreamSignal()), this, SLOT(onCloseVideoStream()));
}
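The slots and the timer setup are not part of the snippets; presumably the stream is opened and the timer driving onTick() is started in one of the connected slots, along the lines of this sketch (the file path and the property/format choices are assumptions, not the original code):

// Sketch of how the timer that drives onTick() might be started -- not the original code
void CustomLabel::onOpenClick()
{
    cap = new cv::VideoCapture("/path/to/video.avi"); // hypothetical source
    if (!cap->isOpened())
        return;

    video_width  = (int)cap->get(CV_CAP_PROP_FRAME_WIDTH);
    video_height = (int)cap->get(CV_CAP_PROP_FRAME_HEIGHT);
    vid_fps      = cap->get(CV_CAP_PROP_FPS);

    // QImage that onTick() copies each frame into; RGB888 matches the
    // BGR->RGB conversion done there
    currentImage = new QImage(video_width, video_height, QImage::Format_RGB888);

    myTimer = new QTimer(this);
    connect(myTimer, SIGNAL(timeout()), this, SLOT(onTick()));
    myTimer->start(tickrate_ms);

    currentState = STATE_IDLE;
}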

You have to override the paintEvent method:

void CustomLabel::paintEvent(QPaintEvent *e){
    QPainter painter(this);

    // When no image is loaded, paint the window black
    if (!currentImage){
        painter.fillRect(QRectF(QPoint(0, 0), QSize(width(), height())), Qt::black);
        QWidget::paintEvent(e);
        return;
    }

    // Draw a frame from the video
    drawVideoFrame(painter);

    QWidget::paintEvent(e);
}

The method that is called from paintEvent:

void CustomLabel::drawVideoFrame(QPainter &painter){
    painter.drawImage(QRectF(QPoint(0, 0), QSize(width(), height())), *currentImage,
                      QRectF(QPoint(0, 0), currentImage->size()));
}

And on every tick of my timer I call onTick():

void CustomLabel::onTick() {
    /* This method is called every couple of milliseconds.
     * It reads from OpenCV's capture interface and saves the frame as a QImage.
     * The state machine is implemented here; every tick is handled.
     */
    if(cap->isOpened()){
        switch(currentState) {
        case STATE_IDLE:
            if (!cap->read(currentFrame)){
                qDebug() << "cvWindow::_tick !!! Failed to read frame from the capture interface in STATE_IDLE";
            }
            break;
        case STATE_DRAWING:
            if (!cap->read(currentFrame)){
                qDebug() << "cvWindow::_tick !!! Failed to read frame from the capture interface in STATE_DRAWING";
            }
            currentFrame.copyTo(currentCopy);
            cv::circle(currentCopy, cv::Point(focusPt.x*xScale, focusPt.y*yScale),
                       sqrt((focusPt.x - currentMousePos.x())*(focusPt.x - currentMousePos.x())*xScale*xScale
                            + (focusPt.y - currentMousePos.y())*(focusPt.y - currentMousePos.y())*yScale*yScale),
                       cv::Scalar(0, 0, 255), 2, 8, 0);
            //qDebug() << "focus pt x " << focusPt.x << "y " << focusPt.y;
            break;
        case STATE_TRACKING:
            if (!cap->read(currentFrame)){
                qDebug() << "cvWindow::_tick !!! Failed to read frame from the capture interface in STATE_TRACKING";
            }
            cv::cvtColor(currentFrame, currentFrame, CV_BGR2GRAY, 0);
            if(initGrayFrame){
                currentGrayFrame.copyTo(previousGrayFrame);
                initGrayFrame = false;
                return;
            }
            cv::calcOpticalFlowPyrLK(previousGrayFrame, currentFrame, previousPts, currentPts, featuresFound, err,
                                     cv::Size(21, 21), 3, termcrit, 0, 1e-4);
            AcquireNewPoints();
            currentCopy = CalculateCenter(currentFrame, currentPts);
            if(showPoints){
                DrawPoints(currentCopy, currentPts);
            }
            break;
        case STATE_LOST_POLE:
            currentState = STATE_IDLE;
            initGrayFrame = true;
            cv::cvtColor(currentFrame, currentFrame, CV_GRAY2BGR);
            break;
        default:
            break;
        }
        // if not tracking, draw currentFrame
        // OpenCV uses BGR order, convert it to RGB
        if(currentState == STATE_IDLE) {
            cv::cvtColor(currentFrame, currentFrame, CV_BGR2RGB);
            memcpy(currentImage->scanLine(0), (unsigned char*)currentFrame.data,
                   currentImage->width() * currentImage->height() * currentFrame.channels());
        } else {
            cv::cvtColor(currentCopy, currentCopy, CV_BGR2RGB);
            memcpy(currentImage->scanLine(0), (unsigned char*)currentCopy.data,
                   currentImage->width() * currentImage->height() * currentCopy.channels());
            previousGrayFrame = currentFrame;
            previousPts = currentPts;
        }
    }
    // Trigger paint event to redraw the window
    update();
}

Don't mind the yScale and xScale factors; they are only needed for the OpenCV drawing functions, because the CustomLabel size is not the same as the video resolution.
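The interaction the question asked about (reacting to clicks on the video) is then handled with Qt's own mouse events on the same label. The following is only a rough sketch of how that could look with the members used above (focusPt, currentMousePos), not the poster's actual code:

#include <QMouseEvent>

// Sketch: handling mouse input directly on the CustomLabel -- not the original code
void CustomLabel::mousePressEvent(QMouseEvent *event)
{
    if (currentState == STATE_IDLE && event->button() == Qt::LeftButton) {
        // store the click position in label coordinates;
        // onTick() scales it with xScale/yScale when drawing
        focusPt = cv::Point(event->pos().x(), event->pos().y());
        currentState = STATE_DRAWING;
    }
    QLabel::mousePressEvent(event);
}

void CustomLabel::mouseMoveEvent(QMouseEvent *event)
{
    // delivered continuously because setMouseTracking(true) was set in the constructor
    currentMousePos = event->pos();
    QLabel::mouseMoveEvent(event);
}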

OpenCV is only meant for image processing. You can use OpenCV with any GUI development toolkit, as long as you know how to convert a cv::Mat into whatever format the toolkit needs. For Qt, you can convert the cv::Mat to a QImage and then use it anywhere in the Qt SDK. This example shows the integration of OpenCV and Qt, including threading and webcam access. The webcam is accessed with OpenCV, and the received cv::Mat is converted to a QImage and rendered onto a QLabel. https://github.com/nickdademo/qt-opencv-multithreaded The code contains a MatToQImage() function that shows the conversion from cv::Mat to QImage. The integration is very simple, since everything is in C++.
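A minimal version of such a conversion, covering the common 8-bit BGR and grayscale cases, could look like the sketch below (this is not the exact MatToQImage() from the linked repository):

#include <QImage>
#include <QVector>
#include <opencv2/opencv.hpp>

// Sketch of a cv::Mat -> QImage conversion for the two most common cases
QImage MatToQImage(const cv::Mat &mat)
{
    if (mat.type() == CV_8UC3) {
        // OpenCV stores BGR, QImage expects RGB, so swap the channels (this also deep-copies)
        QImage img(mat.data, mat.cols, mat.rows, (int)mat.step, QImage::Format_RGB888);
        return img.rgbSwapped();
    }
    if (mat.type() == CV_8UC1) {
        // 8-bit grayscale: use an indexed image with a gray color table
        QImage img(mat.data, mat.cols, mat.rows, (int)mat.step, QImage::Format_Indexed8);
        QVector<QRgb> grayTable;
        for (int i = 0; i < 256; ++i)
            grayTable.append(qRgb(i, i, i));
        img.setColorTable(grayTable);
        return img.copy(); // detach from the Mat's buffer
    }
    return QImage(); // formats not covered by this sketch
}

The resulting QImage can then be displayed, for example, with label->setPixmap(QPixmap::fromImage(img)).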

Thank you for your answer. I already solved my problem, though without threading. You can see a part of my solution above ;) –
