2014-02-13

AVFoundation: adding text to CMSampleBufferRef video frames

I'm building an app using AVFoundation.

Just before I call [assetWriterInput appendSampleBuffer:sampleBuffer] in the - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection method.

I manipulate the pixels in the sample buffer (using the pixel buffer to apply an effect).

But the client wants me to draw text (a timestamp & frame counter) into the frame as well, and I haven't found a way to do this yet.

I tried converting the sample buffer to an image, drawing the text onto the image, and converting the image back into a sample buffer, but then

CMSampleBufferDataIsReady(sampleBuffer) 

fails.
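One way to avoid the image round-trip entirely is to draw the text straight into the locked pixel buffer through a CGBitmapContext wrapped around its base address. This is only a sketch: it assumes the capture session delivers kCVPixelFormatType_32BGRA frames, and the string and coordinates are placeholders.

```objc
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(imageBuffer),
                                         CVPixelBufferGetWidth(imageBuffer),
                                         CVPixelBufferGetHeight(imageBuffer),
                                         8,
                                         CVPixelBufferGetBytesPerRow(imageBuffer),
                                         colorSpace,
                                         kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

// Flip the context so UIKit string drawing is not rendered upside down.
CGContextTranslateCTM(ctx, 0, CVPixelBufferGetHeight(imageBuffer));
CGContextScaleCTM(ctx, 1.0, -1.0);

UIGraphicsPushContext(ctx);
NSDictionary *attrs = @{NSFontAttributeName: [UIFont fontWithName:@"Helvetica" size:40],
                        NSForegroundColorAttributeName: [UIColor whiteColor]};
[@"frame 01 - timestamp" drawAtPoint:CGPointMake(20, 20) withAttributes:attrs];
UIGraphicsPopContext();

CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
```

Because the pixels are modified in place, the original sample buffer (with its format description and timing intact) can still be passed to appendSampleBuffer: afterwards.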

Here are my UIImage category methods:

+ (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer 
    { 
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 

    CVPixelBufferLockBaseAddress(imageBuffer, 0); 

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    // Assumes the capture session delivers BGRA frames.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 

    CGImageRef newImage = CGBitmapContextCreateImage(newContext); 

    CGContextRelease(newContext); 
    CGColorSpaceRelease(colorSpace); 

    // Balance the lock taken above before returning.
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0); 

    UIImage *newUIImage = [UIImage imageWithCGImage:newImage]; 

    CGImageRelease(newImage); 

    return newUIImage; 
    } 

And the reverse conversion:

- (CMSampleBufferRef) cmSampleBuffer 
    { 
     CGImageRef image = self.CGImage; 

     NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: 
           [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey, 
           [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, 
           nil]; 
     CVPixelBufferRef pxbuffer = NULL; 

     CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, 
               self.size.width, 
               self.size.height, 
               kCVPixelFormatType_32ARGB, 
               (__bridge CFDictionaryRef) options, 
               &pxbuffer); 
     NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL); 

     CVPixelBufferLockBaseAddress(pxbuffer, 0); 
     void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer); 
     NSParameterAssert(pxdata != NULL); 

     CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB(); 
     CGContextRef context = CGBitmapContextCreate(pxdata, self.size.width, 
                self.size.height, 8, 4*self.size.width, rgbColorSpace, 
                kCGImageAlphaNoneSkipFirst); 
     NSParameterAssert(context); 
     CGContextConcatCTM(context, CGAffineTransformMakeRotation(0)); 
     CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), 
               CGImageGetHeight(image)), image); 
     CGColorSpaceRelease(rgbColorSpace); 
     CGContextRelease(context); 
     CVPixelBufferUnlockBaseAddress(pxbuffer, 0); 
     // Create a real format description and timing info; passing NULL for
     // these leaves the sample buffer invalid, which is why
     // CMSampleBufferDataIsReady(sampleBuffer) fails.
     CMVideoFormatDescriptionRef videoInfo = NULL; 
     CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pxbuffer, &videoInfo); 

     // Placeholder timing; for appending, substitute the source frame's timing.
     CMSampleTimingInfo timing = {kCMTimeInvalid, kCMTimeZero, kCMTimeInvalid}; 

     CMSampleBufferRef sampleBuffer = NULL; 
     CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, 
              pxbuffer, true, NULL, NULL, videoInfo, &timing, &sampleBuffer); 
     CFRelease(videoInfo); 
     CVPixelBufferRelease(pxbuffer); 
     return sampleBuffer; 
    } 

Any ideas?

Edit:

I changed my code based on Tony's answer (thanks!). This is the code:

CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 

    CVPixelBufferLockBaseAddress(pixelBuffer, 0); 

    // NOTE: in production, create these contexts once, not per frame.
    EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2]; 
    CIContext *ciContext = [CIContext contextWithEAGLContext:eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]} ]; 

    UIFont *font = [UIFont fontWithName:@"Helvetica" size:40]; 
    NSDictionary *attributes = @{NSFontAttributeName: font, 
           NSForegroundColorAttributeName: [UIColor lightTextColor]}; 

    UIImage *img = [UIImage imageFromText:@"01 - 13/02/2014 15:18:21:654" withAttributes:attributes]; 
    CIImage *filteredImage = [[CIImage alloc] initWithCGImage:img.CGImage]; 

    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB(); 
    [ciContext render:filteredImage toCVPixelBuffer:pixelBuffer bounds:[filteredImage extent] colorSpace:cs]; 
    CGColorSpaceRelease(cs); 

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0); 

Nice, but could you share the source for UIImage *img = [UIImage imageFromText:@"01 - 13/02/2014 15:18:21:654" withAttributes:attributes];? – chrisallick


@chrisallick See here: https://stackoverflow.com/questions/2765537/how-do-i-use-the-nsstring-draw-functionality-to-create-a-uiimage-from-text –
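For reference, the imageFromText:withAttributes: category method can be implemented along the lines of the linked answer; a minimal sketch:

```objc
+ (UIImage *)imageFromText:(NSString *)text withAttributes:(NSDictionary *)attributes
{
    CGSize size = [text sizeWithAttributes:attributes];
    // NO = transparent background, 0.0 = current screen scale.
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [text drawAtPoint:CGPointZero withAttributes:attributes];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
```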


Did you ever find a solution for drawing text onto a CMSampleBuffer? – user924

Answer


You should look at the CIFunHouse sample from Apple; you can use this API to draw directly into the buffer:

-(void)render:(CIImage *)image toCVPixelBuffer:(CVPixelBufferRef)buffer bounds:(CGRect)r colorSpace:(CGColorSpaceRef)cs

You can download it from the WWDC2013 sample code.

Create the contexts:

_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2]; 
_ciContext = [CIContext contextWithEAGLContext:_eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]} ]; 

Now render the image into a pixel buffer from the writer's pool:

CVPixelBufferRef renderedOutputPixelBuffer = NULL; 
CVReturn err = CVPixelBufferPoolCreatePixelBuffer(nil, self.pixelBufferAdaptor.pixelBufferPool, &renderedOutputPixelBuffer); 
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB(); 
[_ciContext render:filteredImage toCVPixelBuffer:renderedOutputPixelBuffer 
            bounds:[filteredImage extent] colorSpace:cs]; 
CGColorSpaceRelease(cs); 
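To overlay the text on the camera frame instead of replacing its contents (rendering the text image by itself leaves everything outside the text black), the text can first be composited over a CIImage of the source buffer. A sketch, where sourcePixelBuffer is the incoming frame and textImage the rendered text CIImage (both assumed names):

```objc
CIImage *background = [CIImage imageWithCVPixelBuffer:sourcePixelBuffer];
CIFilter *composite = [CIFilter filterWithName:@"CISourceOverCompositing"];
[composite setValue:textImage forKey:kCIInputImageKey];
[composite setValue:background forKey:kCIInputBackgroundImageKey];

CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
[_ciContext render:[composite outputImage]
   toCVPixelBuffer:renderedOutputPixelBuffer
            bounds:[background extent]
        colorSpace:cs];
CGColorSpaceRelease(cs);
```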

This works nicely, except the transparent image ends up in a black box. Any idea why? :) – JoriDor


@JoriDor Did you ever figure out why it was black? –


The link doesn't work. Any example in Swift? – user924