Memory leak in CoreImage/CoreVideo

I built an iOS app that does some basic detection. I take the raw frames from AVCaptureVideoDataOutput, convert the CMSampleBufferRef to a UIImage, resize the UIImage, and then convert it back to a CVPixelBufferRef. As far as I can tell from Instruments, the leak is in the last part, where I convert the CGImage to a CVPixelBufferRef.

Here is the code I use:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection 
{ 
    videof = [[ASMotionDetect alloc] initWithSampleImage:[self resizeSampleBuffer:sampleBuffer]]; 
    // ASMotionDetect is my class for detection and I use videof to calculate the movement 
} 

-(UIImage*)resizeSampleBuffer:(CMSampleBufferRef) sampleBuffer { 
    UIImage *img; 
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    CVPixelBufferLockBaseAddress(imageBuffer,0);  // Lock the image buffer 

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information about the image 
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 

    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    CGImageRef newImage = CGBitmapContextCreateImage(newContext); 
    CGContextRelease(newContext); 

    CGColorSpaceRelease(colorSpace); 
    CVPixelBufferUnlockBaseAddress(imageBuffer,0); 
    /* CVBufferRelease(imageBuffer); */ // do not call this! 

    img = [UIImage imageWithCGImage:newImage]; 
    CGImageRelease(newImage); 
    newContext = nil; 
    img = [self resizeImageToSquare:img]; 
    return img; 
} 

-(UIImage*)resizeImageToSquare:(UIImage*)_temp { 
    UIImage *img; 
    int w = _temp.size.width; 
    int h = _temp.size.height; 
    CGRect rect; 
    if (w > h) { 
        rect = CGRectMake((w-h)/2, 0, h, h); 
    } else { 
        rect = CGRectMake(0, (h-w)/2, w, w); 
    } 
    img = [self crop:_temp inRect:rect]; 
    return img; 
} 

-(UIImage*) crop:(UIImage*)image inRect:(CGRect)rect{ 
    UIImage *sourceImage = image; 
    CGRect selectionRect = rect; 
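    // TransformCGRectForUIImageOrientation is an external helper (not shown in this post) 
    // that maps the crop rect from the oriented UIImage's coordinate space into the 
    // underlying CGImage's pixel space. 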
    CGRect transformedRect = TransformCGRectForUIImageOrientation(selectionRect, sourceImage.imageOrientation, sourceImage.size); 
    CGImageRef resultImageRef = CGImageCreateWithImageInRect(sourceImage.CGImage, transformedRect); 
    UIImage *resultImage = [[UIImage alloc] initWithCGImage:resultImageRef scale:1.0 orientation:image.imageOrientation]; 
    CGImageRelease(resultImageRef); 
    return resultImage; 
} 

And in my detection class I have:

- (id)initWithSampleImage:(UIImage*)sampleImage { 
    if ((self = [super init])) { 
        _frame = new CVMatOpaque(); 
        _histograms = new CVMatNDOpaque[kGridSize * kGridSize]; 
        [self extractFrameFromImage:sampleImage]; 
    } 
    return self; 
} 

- (void)extractFrameFromImage:(UIImage*)sampleImage { 
    CGImageRef imageRef = [sampleImage CGImage]; 
    CVImageBufferRef imageBuffer = [self pixelBufferFromCGImage:imageRef]; 
    CVPixelBufferLockBaseAddress(imageBuffer, 0); 
    // Collect some information required to extract the frame. 
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer); 
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 
    size_t width = CVPixelBufferGetWidth(imageBuffer); 

    // Extract the frame, convert it to grayscale, and shove it in _frame. 
    cv::Mat frame(height, width, CV_8UC4, baseAddress, bytesPerRow); 
    cv::cvtColor(frame, frame, CV_BGR2GRAY); 
    _frame->matrix = frame; 
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0); 
    CGImageRelease(imageRef); 
} 

- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image 
{ 
    CVPixelBufferRef pxbuffer = NULL; 
    int width = CGImageGetWidth(image)*2; 
    int height = CGImageGetHeight(image)*2; 

    NSMutableDictionary *attributes = [NSMutableDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, [NSNumber numberWithInt:width], kCVPixelBufferWidthKey, [NSNumber numberWithInt:height], kCVPixelBufferHeightKey, nil]; 
    CVPixelBufferPoolRef pixelBufferPool; 
    CVReturn theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (__bridge CFDictionaryRef) attributes, &pixelBufferPool); 
    NSParameterAssert(theError == kCVReturnSuccess); 
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer); 
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL); 

    CVPixelBufferLockBaseAddress(pxbuffer, 0); 
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer); 
    NSParameterAssert(pxdata != NULL); 
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB(); 
    CGContextRef context = CGBitmapContextCreate(pxdata, width, 
               height, 8, width*4, rgbColorSpace, 
               kCGImageAlphaNoneSkipFirst); 
    NSParameterAssert(context); 
/* here is the problem: */ 
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image); 
    CGColorSpaceRelease(rgbColorSpace); 
    CGContextRelease(context); 

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0); 

    return pxbuffer; 
} 

With Instruments I found that the problem is the allocation of the CVPixelBufferRef, but I don't understand why. Can anyone see the problem?

Thanks

Answer

In -pixelBufferFromCGImage:, neither pxbuffer nor pixelBufferPool is released. That makes sense for pxbuffer, since it is a return value, but not for pixelBufferPool: you create, and leak, one every time the method is called.

The quick fix should be to:

  1. release pixelBufferPool in -pixelBufferFromCGImage:
  2. release pxbuffer (the return value of -pixelBufferFromCGImage:) in -extractFrameFromImage:

You should also rename -pixelBufferFromCGImage: to -createPixelBufferFromCGImage: to make it clear that it returns a retained object.
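
A minimal sketch of both fixes applied to the method as posted (renamed as suggested, everything else kept the same). Re-creating the pool on every call is the simplest correct version; if this runs per frame, caching the pool in an ivar would be cheaper:

- (CVPixelBufferRef)createPixelBufferFromCGImage:(CGImageRef)image 
{ 
    CVPixelBufferRef pxbuffer = NULL; 
    int width = CGImageGetWidth(image)*2; 
    int height = CGImageGetHeight(image)*2; 

    NSMutableDictionary *attributes = [NSMutableDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, [NSNumber numberWithInt:width], kCVPixelBufferWidthKey, [NSNumber numberWithInt:height], kCVPixelBufferHeightKey, nil]; 
    CVPixelBufferPoolRef pixelBufferPool; 
    CVReturn theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (__bridge CFDictionaryRef) attributes, &pixelBufferPool); 
    NSParameterAssert(theError == kCVReturnSuccess); 
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer); 
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL); 
    CVPixelBufferPoolRelease(pixelBufferPool); // fix 1: release the pool; the buffer it vended stays valid 

    CVPixelBufferLockBaseAddress(pxbuffer, 0); 
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer); 
    NSParameterAssert(pxdata != NULL); 
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB(); 
    CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, 
               CVPixelBufferGetBytesPerRow(pxbuffer), // the buffer's real stride; pool buffers may pad rows beyond width*4 
               rgbColorSpace, kCGImageAlphaNoneSkipFirst); 
    NSParameterAssert(context); 
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image); 
    CGColorSpaceRelease(rgbColorSpace); 
    CGContextRelease(context); 

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0); 

    return pxbuffer; // the caller owns this +1 reference 
} 

And in -extractFrameFromImage:, balance that reference once the frame has been copied out:

    CVImageBufferRef imageBuffer = [self createPixelBufferFromCGImage:imageRef]; 
    // ... lock, wrap in cv::Mat, convert to grayscale, unlock, as before ... 
    CVPixelBufferRelease(imageBuffer); // fix 2: release the returned buffer 

(Note that -extractFrameFromImage: as posted also calls CGImageRelease(imageRef) on a CGImage obtained from [sampleImage CGImage]; that reference is not owned by the caller, so that release should be dropped as well.)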

I still get an error at CGContextDrawImage(context, CGRectMake(0, 0, width, height), image) – tagyro

@AndreiStoleru Could you update the code to the current version and give details about the leak, i.e. which object types are being leaked now? –

I'm now using the sample code provided by Apple to convert the CMSampleBufferRef to a UIImage, but the resulting image has the wrong orientation (see http://stackoverflow.com/q/11246726/401087) – tagyro
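
For the orientation problem, one possible workaround (a sketch, not from the original thread): pass an explicit orientation when wrapping the CGImage, e.g. in -resizeSampleBuffer: above. For the back camera with the device held in portrait, the raw frames typically come out rotated 90 degrees, so UIImageOrientationRight usually compensates:

    img = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight]; // assumes back camera, portrait UI 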