Qt - H.264 video streaming using the FFmpeg library

I want to display my IP camera's stream in my Qt widget application. First, I connect to the UDP port of the IP camera. The IP camera is streaming H.264-encoded video. After the socket is bound, on every readyRead() signal I fill a buffer with the received datagrams in order to get a full frame.
Variable initialization:
AVCodec *codec;
AVCodecContext *codecCtx;
AVFrame *frame;
AVPacket packet;

this->buffer.clear();

this->socket = new QUdpSocket(this);
QObject::connect(this->socket, &QUdpSocket::connected, this, &H264VideoStreamer::connected);
QObject::connect(this->socket, &QUdpSocket::disconnected, this, &H264VideoStreamer::disconnected);
QObject::connect(this->socket, &QUdpSocket::readyRead, this, &H264VideoStreamer::readyRead);
QObject::connect(this->socket, &QUdpSocket::hostFound, this, &H264VideoStreamer::hostFound);
QObject::connect(this->socket, SIGNAL(error(QAbstractSocket::SocketError)), this, SLOT(error(QAbstractSocket::SocketError)));
QObject::connect(this->socket, &QUdpSocket::stateChanged, this, &H264VideoStreamer::stateChanged);

avcodec_register_all();

codec = avcodec_find_decoder(AV_CODEC_ID_H264);
if (!codec){
    qDebug() << "Codec not found";
    return;
}

codecCtx = avcodec_alloc_context3(codec);
if (!codecCtx){
    qDebug() << "Could not allocate video codec context";
    return;
}

// Note: CODEC_CAP_TRUNCATED / CODEC_FLAG_TRUNCATED / CODEC_FLAG2_CHUNKS are
// the old FFmpeg names; newer releases use the AV_-prefixed equivalents.
if (codec->capabilities & CODEC_CAP_TRUNCATED)
    codecCtx->flags |= CODEC_FLAG_TRUNCATED;
codecCtx->flags2 |= CODEC_FLAG2_CHUNKS;

AVDictionary *dictionary = nullptr;
if (avcodec_open2(codecCtx, codec, &dictionary) < 0) {
    qDebug() << "Could not open codec";
    return;
}
The algorithm is as follows:
void H264VideoImageProvider::readyRead() {
    QByteArray datagram;
    datagram.resize(this->socket->pendingDatagramSize());
    QHostAddress sender;
    quint16 senderPort;

    this->socket->readDatagram(datagram.data(), datagram.size(), &sender, &senderPort);

    QByteArray rtpHeader = datagram.left(12);
    datagram.remove(0, 12);

    int nal_unit_type = datagram[0] & 0x1F;
    bool start = (datagram[1] & 0x80) != 0;

    int seqNo = rtpHeader[3] & 0xFF;

    qDebug() << "H264 video decoder::readyRead()"
             << "from: " << sender.toString() << ":" << QString::number(senderPort)
             << "\n\tDatagram size: " << QString::number(datagram.size())
             << "\n\tH264 RTP header (hex): " << rtpHeader.toHex()
             << "\n\tH264 VIDEO data (hex): " << datagram.toHex();

    qDebug() << "nal_unit_type = " << nal_unit_type << " - " << getNalUnitTypeStr(nal_unit_type);
    if (start)
        qDebug() << "START";

    if (nal_unit_type == 7){
        this->sps = datagram;
        qDebug() << "Sequence parameter found = " << this->sps.toHex();
        return;
    } else if (nal_unit_type == 8){
        this->pps = datagram;
        qDebug() << "Picture parameter found = " << this->pps.toHex();
        return;
    }

    //VIDEO_FRAME
    if (start){
        if (!this->buffer.isEmpty())
            decodeBuf();

        this->buffer.clear();
        qDebug() << "Initializing new buffer...";

        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x01));
        this->buffer.append(this->sps);

        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x01));
        this->buffer.append(this->pps);

        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x01));
    }

    qDebug() << "Appending buffer data...";
    this->buffer.append(datagram);
}
- The first 12 bytes of the datagram are the RTP header
- Everything else is video data
- The last 5 bits of the first VIDEO DATA byte indicate which NAL unit type it is. I always get one of the following 4 values (1 coded non-IDR slice, 5 coded IDR slice, 7 SPS, 8 PPS)
- A bit in the second VIDEO DATA byte (tested with & 0x80 in the code) says whether this packet is the START data of a frame
- All video data is stored in the buffer, beginning with START
- Once a new frame arrives (START is set), the buffer is decoded and a new buffer is created
The frame to be decoded is assembled as:
SPS
PPS
concatenated VIDEO DATA
Decoding is done with the avcodec_decode_video2() function from the FFmpeg library
void H264VideoStreamer::decode() {
    av_init_packet(&packet);
    av_new_packet(&packet, this->buffer.size());
    memcpy(packet.data, this->buffer.data(), this->buffer.size());  // data(), not the internal data_ptr()
    packet.size = this->buffer.size();

    frame = av_frame_alloc();
    if (!frame){
        qDebug() << "Could not allocate video frame";
        return;
    }

    int got_frame = 1;
    int len = avcodec_decode_video2(codecCtx, frame, &got_frame, &packet);
    if (len < 0){
        qDebug() << "Error while decoding frame.";
        return;
    }
    //if (got_frame > 0){  // got_frame is always 0
    //    qDebug() << "Data decoded: " << frame->data[0];
    //}

    char *frameData = (char *) frame->data[0];
    QByteArray decodedFrame;
    decodedFrame.setRawData(frameData, len);

    qDebug() << "Data decoded: " << decodedFrame;

    av_frame_unref(frame);
    av_free_packet(&packet);

    emit imageReceived(decodedFrame);
}
My idea is that in the UI thread, which receives the imageReceived signal, I convert decodedFrame directly into a QImage and refresh it once a new frame has been decoded and sent to the UI.

Is this a good approach to decoding the H.264 stream? I am facing the following problems:
- avcodec_decode_video2() returns the same value as the size of the encoded buffer. Can encoded and decoded data always be the same size?
- got_frame is always 0, which means I never actually receive a full frame as a result. What could be the reason? The video frame being assembled incorrectly? Or the video frame being converted incorrectly from QByteArray to AVFrame?
- How can I convert the decoded AVFrame back into a QByteArray, and can it simply be converted to a QImage?
Thanks for the suggestion, but I would like to stick with the FFmpeg library. Is it possible to get a UDP stream with libvlc? – franz
I'm not sure. I think you can, based on the comments in the link I pasted. You can check it yourself: open the VLC client, go to Media -> Open Network Stream and paste your link. If the stream starts, then you can also do it with libvlc. –
Yes, that makes sense, since VLC is based on libVLC. Well, thank you very much for the answers so far; if I don't manage to get the streaming working with FFmpeg, and I'm still waiting for an answer, that will be my backup plan. – franz