How to get a UIImage from a CMSampleBuffer using AVCaptureSession

I have been trying to do some real-time video image processing in MonoTouch. I am using an AVCaptureSession to get frames from the camera, which works with an AVCaptureVideoPreviewLayer.
I also successfully get the DidOutputSampleBuffer callback in my delegate class. However, every way I have tried to create a UIImage from the resulting CMSampleBuffer has failed.
Here is my code for setting up the capture session:
captureSession = new AVCaptureSession();
captureSession.BeginConfiguration();

videoCamera = AVCaptureDevice.DefaultDeviceWithMediaType (AVMediaType.Video);
if (videoCamera != null)
{
    captureSession.SessionPreset = AVCaptureSession.Preset1280x720;
    videoInput = AVCaptureDeviceInput.FromDevice (videoCamera);
    if (videoInput != null)
        captureSession.AddInput (videoInput);

    //DispatchQueue queue = new DispatchQueue ("videoFrameQueue");
    videoCapDelegate = new videoOutputDelegate (this);
    DispatchQueue queue = new DispatchQueue ("videoFrameQueue");

    videoOutput = new AVCaptureVideoDataOutput();
    videoOutput.SetSampleBufferDelegateAndQueue (videoCapDelegate, queue);
    videoOutput.AlwaysDiscardsLateVideoFrames = true;
    videoOutput.VideoSettings.PixelFormat = CVPixelFormatType.CV24RGB;
    captureSession.AddOutput (videoOutput);
    videoOutput.ConnectionFromMediaType (AVMediaType.Video).VideoOrientation = AVCaptureVideoOrientation.Portrait;

    previewLayer = AVCaptureVideoPreviewLayer.FromSession (captureSession);
    previewLayer.Frame = UIScreen.MainScreen.Bounds;
    previewLayer.AffineTransform = CGAffineTransform.MakeRotation (Convert.DegToRad (-90));
    //this.View.Layer.AddSublayer (previewLayer);

    captureSession.CommitConfiguration();
    captureSession.StartRunning();
}
I have tried to create a CGBitmapContext from the CVPixelBuffer obtained by casting the sample buffer's image buffer, like this:
public override void DidOutputSampleBuffer (AVCaptureOutput captureOutput, MonoTouch.CoreMedia.CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
{
    CVPixelBuffer pixelBuffer = sampleBuffer.GetImageBuffer () as CVPixelBuffer;
    CVReturn flag = pixelBuffer.Lock (0);
    if (flag == CVReturn.Success)
    {
        CGBitmapContext context = new CGBitmapContext
        (
            pixelBuffer.BaseAddress,
            pixelBuffer.Width,
            pixelBuffer.Height,
            8,
            pixelBuffer.BytesPerRow,
            CGColorSpace.CreateDeviceRGB (),
            CGImageAlphaInfo.PremultipliedFirst
        );
        UIImage image = new UIImage (context.ToImage ());
        ProcessImage (image);
        pixelBuffer.Unlock (0);
    }
    else
        Debug.Print (flag.ToString ());

    sampleBuffer.Dispose ();
}
This produces the following error:
<Error>: CGBitmapContextCreate: invalid data bytes/row: should be at least 2880 for 8 integer bits/component, 3 components, kCGImageAlphaPremultipliedFirst.
Even after tweaking some of the parameters, I either get invalid Handle exceptions or segfaults in native Objective-C.
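The minimum the error is asking for follows directly from the parameters I passed: with 8 integer bits per component and kCGImageAlphaPremultipliedFirst, each pixel takes 4 bytes (3 color components plus alpha), so a 720-pixel-wide row needs at least 720 × 4 bytes. A sketch of the arithmetic only (the 720 is my buffer's reported Width):

```csharp
// Arithmetic behind the CGBitmapContextCreate complaint.
// Assumption: 8 bits per component + kCGImageAlphaPremultipliedFirst
// means packed ARGB, i.e. 4 bytes per pixel.
int width = 720;                            // pixelBuffer.Width
int bytesPerPixel = 4;                      // A + R + G + B, one byte each
int minBytesPerRow = width * bytesPerPixel; // 2880, exactly what the error demands
```

The buffer's actual BytesPerRow is far smaller than that, which already hints its contents are not packed RGB.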
I have also tried simply creating a CIImage from the CVImageBuffer and making a UIImage from that, like this:
public override void DidOutputSampleBuffer (AVCaptureOutput captureOutput, MonoTouch.CoreMedia.CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
{
    CIImage cImage = new CIImage (sampleBuffer.GetImageBuffer ());
    UIImage image = new UIImage (cImage);
    ProcessImage (image);
    sampleBuffer.Dispose ();
}
This results in an exception when initializing the CIImage:
NSInvalidArgumentException Reason: -[CIImage initWithCVImageBuffer:]: unrecognized selector sent to instance 0xc821d0
This honestly feels like some kind of bug in MonoTouch, but if I have missed something, or am simply going about this in a strange way, please let me know of any alternative solutions.
Thanks
Exactly what numeric values is the CGBitmapContext constructor being called with? –
On an iPhone 5, the pixelBuffer has the following values: Width: 720, Height: 1280, BytesPerRow: 1084 –
You are trying to create a CGBitmapContext from a CVPixelBuffer that has a different format. 1084 bytes per row at a width of 720 is a little over 1.5 bytes per pixel. That is not RGB24; it is probably a planar format. What is the pixel format of the CVPixelBuffer? –
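The figures above (1084 bytes per row at 720 pixels wide, roughly 1.5 bytes per pixel) are consistent with a bi-planar 4:2:0 YUV layout, which is what the camera typically delivers when the requested format is not supported; CV24RGB is not among the formats AVCaptureVideoDataOutput accepts. A common workaround is to request packed 32-bit BGRA and build the context with a matching byte order. A rough, untested sketch against the MonoTouch bindings of the time (enum and constructor overload names may differ slightly between versions):

```csharp
// When configuring the session: request a packed 4-bytes-per-pixel format
// instead of CV24RGB.
videoOutput.VideoSettings.PixelFormat = CVPixelFormatType.CV32BGRA;

// In DidOutputSampleBuffer: BytesPerRow is now at least Width * 4, so the
// bitmap context parameters line up with the buffer's actual layout.
CVPixelBuffer pixelBuffer = sampleBuffer.GetImageBuffer () as CVPixelBuffer;
if (pixelBuffer.Lock (0) == CVReturn.Success)
{
    using (var colorSpace = CGColorSpace.CreateDeviceRGB ())
    using (var context = new CGBitmapContext (
        pixelBuffer.BaseAddress,
        pixelBuffer.Width,
        pixelBuffer.Height,
        8,                          // bits per component
        pixelBuffer.BytesPerRow,    // use the buffer's own stride, not Width * 4
        colorSpace,
        // BGRA in memory == ARGB read little-endian:
        CGBitmapFlags.PremultipliedFirst | CGBitmapFlags.ByteOrder32Little))
    using (var cgImage = context.ToImage ())
    {
        ProcessImage (new UIImage (cgImage));
    }
    pixelBuffer.Unlock (0);
}
sampleBuffer.Dispose ();
```

Alternatively you can keep the camera's native 4:2:0 output and convert the planes yourself, but BGRA is the simplest route into CoreGraphics. –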