FFmpeg: piping raw frames into an AVI file via an internal pipe (no input file)

I have an application that reads in a raw video file, does some image processing on each frame, and then feeds the resulting BGRA-format byte[] frames to an FFmpeg container, ultimately producing an AVI file. Since this process differs slightly from the other FFmpeg examples I've seen, in that there is no pre-existing input file, I'm wondering if anyone knows how to do this.
I initialize the FFmpeg container like this:
```java
ProcessBuilder pBuilder = new ProcessBuilder(raid.getLocation()
        + "\\ffmpeg\\bin\\ffmpeg.exe", "-r", "30", "-vcodec",
        "rawvideo", "-f", "rawvideo", "-pix_fmt", "bgra", "-s",
        size, "-i", "pipe:0", "-r", "30", "-y", "-c:v", "libx264",
        "C:\\export\\2015-02-03\\1500\\EXPORT6.avi");
try
{
    process = pBuilder.start();
}
catch (IOException e)
{
    e.printStackTrace();
}
ffmpegInput = process.getOutputStream();
```
For each input byte[] frame, I add the frame to the container ("src" is a BufferedImage that I am converting to a byte array):
```java
try
{
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ImageIO.write(src, ".png", baos);
    ffmpegInput.write(baos.toByteArray());
}
catch (IOException e)
{
    e.printStackTrace();
}
```
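Note that the command above declares stdin as rawvideo in BGRA, so the pipe expects packed BGRA bytes per frame rather than PNG-encoded data. As a sketch of one way to get that layout from a BufferedImage (the helper name `toBgra` is mine, not from the question):

```java
import java.awt.image.BufferedImage;

public class BgraFrames {
    // Convert a BufferedImage to the packed BGRA byte layout that
    // "-f rawvideo -pix_fmt bgra" expects on stdin.
    static byte[] toBgra(BufferedImage src) {
        int w = src.getWidth(), h = src.getHeight();
        byte[] out = new byte[w * h * 4];
        int i = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int argb = src.getRGB(x, y);    // packed as 0xAARRGGBB
                out[i++] = (byte) (argb);       // B
                out[i++] = (byte) (argb >> 8);  // G
                out[i++] = (byte) (argb >> 16); // R
                out[i++] = (byte) (argb >>> 24);// A
            }
        }
        return out;
    }
}
```

Each frame's `toBgra` output could then be written straight to `ffmpegInput`, with no image-format encoding step in between.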
And once the video has finished loading frames, I close the container:
```java
try
{
    ffmpegInput.flush();
    ffmpegInput.close();
}
catch (IOException e)
{
    e.printStackTrace();
}
```
The AVI file is created, but it shows an error when opened. The FFmpeg logger shows this output:
```
ffmpeg version N-71102-g1f5d1ee Copyright (c) 2000-2015 the FFmpeg developers built with gcc 4.9.2 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-lzma --enable-decklink --enable-zlib
libavutil 54. 20.101/54. 20.101
libavcodec 56. 30.100/56. 30.100
libavformat 56. 26.101/56. 26.101
libavdevice 56. 4.100/56. 4.100
libavfilter 5. 13.101/5. 13.101
libswscale 3. 1.101/3. 1.101
libswresample 1. 1.100/1. 1.100
libpostproc 53. 3.100/53. 3.100
Input #0, rawvideo, from 'pipe:0':
Duration: N/A, bitrate: 294912 kb/s
Stream #0:0: Video: rawvideo (BGRA/0x41524742), bgra, 640x480, 294912 kb/s, 30 tbr, 30 tbn, 30 tbc
No pixel format specified, yuv444p for H.264 encoding chosen.
Use -pix_fmt yuv420p for compatibility with outdated media players.
[libx264 @ 00000000003bcbe0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
[libx264 @ 00000000003bcbe0] profile High 4:4:4 Predictive, level 3.0, 4:4:4 8-bit
Output #0, avi, to 'C:\export\2015-02-03\1500\EXPORT6.avi':
Metadata:
ISFT : Lavf56.26.101
Stream #0:0: Video: h264 (libx264) (H264/0x34363248), yuv444p, 640x480, q=-1--1, 30 fps, 30 tbn, 30 tbc
Metadata:
encoder : Lavc56.30.100 libx264
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
frame= 0 fps=0.0 q=0.0 Lsize= 6kB time=00:00:00.00 bitrate=N/A
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Output file is empty, nothing was encoded (check -ss/-t/-frames parameters if used)
```
Any insight or ideas would be greatly appreciated!
The second '-r' (the one applied to the output) is not required; the output will inherit the input frame rate. The [rawvideo demuxer](http://ffmpeg.org/ffmpeg-formats.html#rawvideo) has specific options you should use: '-framerate' instead of '-r', and '-video_size' instead of '-s', although I don't know if it makes a difference here (unlike with the image file demuxer, where it can cause different behavior). – LordNeckbeard 2015-04-02 20:22:05
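Applying the comment's suggestions, the argument list might be built like this (a sketch only; the ffmpeg path, frame size, and output file are placeholders, and the extra output `-pix_fmt yuv420p` follows the compatibility hint already present in the log above):

```java
import java.util.Arrays;
import java.util.List;

public class FfmpegArgs {
    // Build the ffmpeg argument list with the rawvideo demuxer's dedicated
    // input options: -framerate replaces the input -r, -video_size replaces
    // -s, and the output -r is dropped since the output inherits the input
    // frame rate.
    static List<String> buildArgs(String ffmpegExe, String size, String outFile) {
        return Arrays.asList(ffmpegExe,
                "-f", "rawvideo", "-pix_fmt", "bgra",
                "-video_size", size, "-framerate", "30",
                "-i", "pipe:0",
                "-y", "-c:v", "libx264", "-pix_fmt", "yuv420p",
                outFile);
    }
}
```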