
Drawing a waveform with AVAssetReader

I read a song from the iPod library using an asset URL (in the code it is called audioUrl). I can play it in various ways, I can cut it, I can run some processing on it, but... I really don't understand what I am supposed to do with this CMSampleBufferRef to get the data for drawing a waveform! I need information about the peak values. How can I get it (maybe in some other way)?

NSError *error = nil;
AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:audioUrl error:&error];

AVAssetTrack *songTrack = [audioUrl.tracks objectAtIndex:0];
AVAssetReaderTrackOutput *output = [[AVAssetReaderTrackOutput alloc] initWithTrack:songTrack outputSettings:nil];
[reader addOutput:output];
[output release];

NSMutableData *fullSongData = [[NSMutableData alloc] init];
[reader startReading];

while (reader.status == AVAssetReaderStatusReading) {

    AVAssetReaderTrackOutput *trackOutput =
        (AVAssetReaderTrackOutput *)[reader.outputs objectAtIndex:0];

    CMSampleBufferRef sampleBufferRef = [trackOutput copyNextSampleBuffer];

    if (sampleBufferRef) { /* what am I supposed to do with this? */ }
}

Please help me!

Answers


You should be able to get a buffer of audio out of your sampleBufferRef, and then iterate through those values to build your own waveform:

CMItemCount numSamplesInBuffer = CMSampleBufferGetNumSamples(sampleBufferRef);
AudioBufferList audioBufferList;
CMBlockBufferRef buffer = NULL;

// Fills audioBufferList and hands back a retained block buffer that owns the sample data.
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
                  sampleBufferRef,
                  NULL,
                  &audioBufferList,
                  sizeof(audioBufferList),
                  NULL,
                  NULL,
                  kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                  &buffer
                  );

// This copies your audio out to a temp buffer, but you should be able to
// iterate through audioBufferList.mBuffers[0].mData directly instead.
SInt32 *readBuffer = (SInt32 *)malloc(numSamplesInBuffer * sizeof(SInt32));
memcpy(readBuffer, audioBufferList.mBuffers[0].mData, numSamplesInBuffer * sizeof(SInt32));

// ... work with readBuffer here ...

free(readBuffer);
CFRelease(buffer); // release the retained block buffer when you are done
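
Since the question asks specifically about peak values, here is a minimal sketch of scanning such a buffer for the largest absolute sample. This is my own illustration, not part of the original answer, and it assumes you asked the track output for interleaved 16-bit linear PCM (as the settings dictionary in the answer below does) rather than the raw SInt32 copy above:

// Hypothetical helper: find the peak absolute value in a buffer of
// interleaved 16-bit PCM samples.
static SInt16 peakOfSamples(const SInt16 *samples, size_t sampleCount) {
    SInt16 peak = 0;
    for (size_t i = 0; i < sampleCount; i++) {
        SInt16 value = samples[i];
        if (value == INT16_MIN) value = INT16_MAX;   // avoid overflow when negating -32768
        else if (value < 0) value = -value;
        if (value > peak) peak = value;
    }
    return peak;
}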

I was looking for something similar and decided to "roll my own". I realize this is an old post, but in case anyone else is looking for this, here is my solution. It is relatively quick and dirty, and normalizes the image to "full scale". The images it creates are "wide", i.e. you will need to put them in a UIScrollView or otherwise manage the display.

This is loosely based on some answers given to this question.

Sample output

(sample waveform image)

EDIT: I have added logarithmic versions of the averaging and render methods; see the end of this message for the alternate versions and comparison outputs. I personally prefer the original linear version, but have decided to post the logarithmic one in case someone can improve on the algorithm used.

You will need these imports:

#import <MediaPlayer/MediaPlayer.h> 
#import <AVFoundation/AVFoundation.h> 

First, a generic rendering method that takes a pointer to averaged sample data
and returns a UIImage. Note that these samples are not playable audio samples.

-(UIImage *) audioImageGraph:(SInt16 *) samples 
       normalizeMax:(SInt16) normalizeMax 
       sampleCount:(NSInteger) sampleCount 
       channelCount:(NSInteger) channelCount 
       imageHeight:(float) imageHeight { 

    CGSize imageSize = CGSizeMake(sampleCount, imageHeight); 
    UIGraphicsBeginImageContext(imageSize); 
    CGContextRef context = UIGraphicsGetCurrentContext(); 

    CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor); 
    CGContextSetAlpha(context,1.0); 
    CGRect rect; 
    rect.size = imageSize; 
    rect.origin.x = 0; 
    rect.origin.y = 0; 

    CGColorRef leftcolor = [[UIColor whiteColor] CGColor]; 
    CGColorRef rightcolor = [[UIColor redColor] CGColor]; 

    CGContextFillRect(context, rect); 

    CGContextSetLineWidth(context, 1.0); 

    float halfGraphHeight = (imageHeight/2)/(float) channelCount ; 
    float centerLeft = halfGraphHeight; 
    float centerRight = (halfGraphHeight*3) ; 
    float sampleAdjustmentFactor = (imageHeight/ (float) channelCount)/(float) normalizeMax; 

    for (NSInteger intSample = 0 ; intSample < sampleCount ; intSample ++) { 
     SInt16 left = *samples++; 
     float pixels = (float) left; 
     pixels *= sampleAdjustmentFactor; 
     CGContextMoveToPoint(context, intSample, centerLeft-pixels); 
     CGContextAddLineToPoint(context, intSample, centerLeft+pixels); 
     CGContextSetStrokeColorWithColor(context, leftcolor); 
     CGContextStrokePath(context); 

     if (channelCount==2) { 
      SInt16 right = *samples++; 
      float pixels = (float) right; 
      pixels *= sampleAdjustmentFactor; 
      CGContextMoveToPoint(context, intSample, centerRight - pixels); 
      CGContextAddLineToPoint(context, intSample, centerRight + pixels); 
      CGContextSetStrokeColorWithColor(context, rightcolor); 
      CGContextStrokePath(context); 
     } 
    } 

    // Create new image 
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext(); 

    // Tidy up 
    UIGraphicsEndImageContext(); 

    return newImage; 
} 

Next, a method that takes an AVURLAsset and returns PNG image data:

- (NSData *) renderPNGAudioPictogramForAsset:(AVURLAsset *)songAsset { 

    NSError * error = nil; 
    AVAssetReader * reader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error]; 
    AVAssetTrack * songTrack = [songAsset.tracks objectAtIndex:0]; 

    NSDictionary* outputSettingsDict = [[NSDictionary alloc] initWithObjectsAndKeys: 
             [NSNumber numberWithInt:kAudioFormatLinearPCM],AVFormatIDKey, 
             //  [NSNumber numberWithInt:44100.0],AVSampleRateKey, /*Not Supported*/ 
             //  [NSNumber numberWithInt: 2],AVNumberOfChannelsKey, /*Not Supported*/ 
             [NSNumber numberWithInt:16],AVLinearPCMBitDepthKey, 
             [NSNumber numberWithBool:NO],AVLinearPCMIsBigEndianKey, 
             [NSNumber numberWithBool:NO],AVLinearPCMIsFloatKey, 
             [NSNumber numberWithBool:NO],AVLinearPCMIsNonInterleaved, 
             nil]; 

    AVAssetReaderTrackOutput* output = [[AVAssetReaderTrackOutput alloc] initWithTrack:songTrack outputSettings:outputSettingsDict]; 

    [reader addOutput:output]; 
    [output release]; 

    UInt32 sampleRate,channelCount; 

    NSArray* formatDesc = songTrack.formatDescriptions; 
    for(unsigned int i = 0; i < [formatDesc count]; ++i) { 
     CMAudioFormatDescriptionRef item = (CMAudioFormatDescriptionRef)[formatDesc objectAtIndex:i]; 
     const AudioStreamBasicDescription* fmtDesc = CMAudioFormatDescriptionGetStreamBasicDescription (item); 
     if(fmtDesc) { 

      sampleRate = fmtDesc->mSampleRate; 
      channelCount = fmtDesc->mChannelsPerFrame; 

      // NSLog(@"channels:%u, bytes/packet: %u, sampleRate %f",fmtDesc->mChannelsPerFrame, fmtDesc->mBytesPerPacket,fmtDesc->mSampleRate); 
     } 
    } 

    UInt32 bytesPerSample = 2 * channelCount; 
    SInt16 normalizeMax = 0; 

    NSMutableData * fullSongData = [[NSMutableData alloc] init]; 
    [reader startReading]; 

    UInt64 totalBytes = 0;   
    SInt64 totalLeft = 0; 
    SInt64 totalRight = 0; 
    NSInteger sampleTally = 0; 

    NSInteger samplesPerPixel = sampleRate/50; 

    while (reader.status == AVAssetReaderStatusReading){ 

     AVAssetReaderTrackOutput * trackOutput = (AVAssetReaderTrackOutput *)[reader.outputs objectAtIndex:0]; 
     CMSampleBufferRef sampleBufferRef = [trackOutput copyNextSampleBuffer]; 

     if (sampleBufferRef){ 
      CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(sampleBufferRef); 

      size_t length = CMBlockBufferGetDataLength(blockBufferRef); 
      totalBytes += length; 

      NSAutoreleasePool *wader = [[NSAutoreleasePool alloc] init]; 

      NSMutableData * data = [NSMutableData dataWithLength:length]; 
      CMBlockBufferCopyDataBytes(blockBufferRef, 0, length, data.mutableBytes); 

      SInt16 * samples = (SInt16 *) data.mutableBytes; 
      int sampleCount = length/bytesPerSample; 
      for (int i = 0; i < sampleCount ; i ++) { 

       SInt16 left = *samples++; 
       totalLeft += left; 

       SInt16 right; 
       if (channelCount==2) { 
        right = *samples++; 
        totalRight += right; 
       } 

       sampleTally++; 

       if (sampleTally > samplesPerPixel) { 

        left = totalLeft/sampleTally; 

        SInt16 fix = abs(left); 
        if (fix > normalizeMax) { 
         normalizeMax = fix; 
        } 

        [fullSongData appendBytes:&left length:sizeof(left)]; 

        if (channelCount==2) { 
         right = totalRight/sampleTally; 

         SInt16 fix = abs(right); 
         if (fix > normalizeMax) { 
          normalizeMax = fix; 
         } 

         [fullSongData appendBytes:&right length:sizeof(right)]; 
        } 

        totalLeft = 0; 
        totalRight = 0; 
        sampleTally = 0; 
       } 
      } 

      [wader drain]; 

      CMSampleBufferInvalidate(sampleBufferRef); 
      CFRelease(sampleBufferRef); 
     } 
    } 

    NSData * finalData = nil; 

    if (reader.status == AVAssetReaderStatusFailed || reader.status == AVAssetReaderStatusUnknown){ 
     // Something went wrong. return nil 

     return nil; 
    } 

    if (reader.status == AVAssetReaderStatusCompleted){ 

     NSLog(@"rendering output graphics using normalizeMax %d",normalizeMax); 

     UIImage *test = [self audioImageGraph:(SInt16 *) 
         fullSongData.bytes 
           normalizeMax:normalizeMax 
            sampleCount:fullSongData.length/4 
           channelCount:2 
            imageHeight:100]; 

     finalData = imageToData(test); 
    }   

    [fullSongData release]; 
    [reader release]; 

    return finalData; 
} 
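
For reference, a minimal way to call this, my own sketch of typical usage rather than part of the original answer, is to build an AVURLAsset from the MPMediaItem's asset URL and hand the resulting PNG data to a UIImage. It assumes the method above lives on the class you are calling from, and since it decodes the whole track you would normally run it off the main thread (as the category below does):

// Hypothetical usage sketch: item is an MPMediaItem picked from the user's library.
NSURL *assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:assetURL options:nil];

NSData *pngData = [self renderPNGAudioPictogramForAsset:asset];
UIImage *waveform = [UIImage imageWithData:pngData];
[asset release];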

Bonus option: Finally, if you want to be able to play the audio using AVAudioPlayer, you will need to cache it to your app's cache folder. Since I was doing that anyway, I decided to cache the image data as well, and wrapped the whole thing up into a UIImage category. You need to include this open source offering to extract the audio, and some code from here to handle some background threading features.

First, some defines, and some generic class methods for handling path names etc.

//#define imgExt @"jpg" 
//#define imageToData(x) UIImageJPEGRepresentation(x,4) 

#define imgExt @"png" 
#define imageToData(x) UIImagePNGRepresentation(x) 

+ (NSString *) assetCacheFolder { 
    NSArray *assetFolderRoot = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES); 
    return [NSString stringWithFormat:@"%@/audio", [assetFolderRoot objectAtIndex:0]]; 
} 

+ (NSString *) cachedAudioPictogramPathForMPMediaItem:(MPMediaItem*) item { 
    NSString *assetFolder = [[self class] assetCacheFolder]; 
    NSNumber * libraryId = [item valueForProperty:MPMediaItemPropertyPersistentID]; 
    NSString *assetPictogramFilename = [NSString stringWithFormat:@"asset_%@.%@",libraryId,imgExt]; 
    return [NSString stringWithFormat:@"%@/%@", assetFolder, assetPictogramFilename]; 
} 

+ (NSString *) cachedAudioFilepathForMPMediaItem:(MPMediaItem*) item { 
    NSString *assetFolder = [[self class] assetCacheFolder]; 

    NSURL * assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL]; 
    NSNumber * libraryId = [item valueForProperty:MPMediaItemPropertyPersistentID]; 

    NSString *assetFileExt = [[[assetURL path] lastPathComponent] pathExtension]; 
    NSString *assetFilename = [NSString stringWithFormat:@"asset_%@.%@",libraryId,assetFileExt]; 
    return [NSString stringWithFormat:@"%@/%@", assetFolder, assetFilename]; 
} 

+ (NSURL *) cachedAudioURLForMPMediaItem:(MPMediaItem*) item { 
    NSString *assetFilepath = [[self class] cachedAudioFilepathForMPMediaItem:item]; 
    return [NSURL fileURLWithPath:assetFilepath]; 
} 

Now for the "business" of the init method:

- (id) initWithMPMediaItem:(MPMediaItem*) item 
      completionBlock:(void (^)(UIImage* delayedImagePreparation))completionBlock { 

    NSFileManager *fman = [NSFileManager defaultManager]; 
    NSString *assetPictogramFilepath = [[self class] cachedAudioPictogramPathForMPMediaItem:item]; 

    if ([fman fileExistsAtPath:assetPictogramFilepath]) { 

     NSLog(@"Returning cached waveform pictogram: %@",[assetPictogramFilepath lastPathComponent]); 

     self = [self initWithContentsOfFile:assetPictogramFilepath]; 
     return self; 
    } 

    NSString *assetFilepath = [[self class] cachedAudioFilepathForMPMediaItem:item]; 

    NSURL *assetFileURL = [NSURL fileURLWithPath:assetFilepath]; 

    if ([fman fileExistsAtPath:assetFilepath]) { 

     NSLog(@"scanning cached audio data to create UIImage file: %@",[assetFilepath lastPathComponent]); 

     [assetFileURL retain]; 
     [assetPictogramFilepath retain]; 

     [NSThread MCSM_performBlockInBackground: ^{ 

      AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:assetFileURL options:nil]; 
      NSData *waveFormData = [self renderPNGAudioPictogramForAsset:asset]; 

      [waveFormData writeToFile:assetPictogramFilepath atomically:YES]; 

      [assetFileURL release]; 
      [assetPictogramFilepath release]; 

      if (completionBlock) { 

       [waveFormData retain]; 
       [NSThread MCSM_performBlockOnMainThread:^{ 

        UIImage *result = [UIImage imageWithData:waveFormData]; 

        NSLog(@"returning rendered pictogram on main thread (%d bytes %@ data in UIImage %0.0f x %0.0f pixels)",waveFormData.length,[imgExt uppercaseString],result.size.width,result.size.height); 

        completionBlock(result); 

        [waveFormData release]; 
       }]; 
      } 
     }]; 

     return nil; 

    } else { 

     NSString *assetFolder = [[self class] assetCacheFolder]; 

     [fman createDirectoryAtPath:assetFolder withIntermediateDirectories:YES attributes:nil error:nil]; 

     NSLog(@"Preparing to import audio asset data %@",[assetFilepath lastPathComponent]); 

     [assetPictogramFilepath retain]; 
     [assetFileURL retain]; 

     TSLibraryImport* import = [[TSLibraryImport alloc] init]; 
     NSURL * assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL]; 

     [import importAsset:assetURL toURL:assetFileURL completionBlock:^(TSLibraryImport* import) { 
      //check the status and error properties of 
      //TSLibraryImport 

      if (import.error) { 

       NSLog (@"audio data import failed:%@",import.error); 

      } else{ 
       NSLog (@"Creating waveform pictogram file: %@", [assetPictogramFilepath lastPathComponent]); 
       AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:assetFileURL options:nil]; 
       NSData *waveFormData = [self renderPNGAudioPictogramForAsset:asset]; 

       [waveFormData writeToFile:assetPictogramFilepath atomically:YES]; 

       if (completionBlock) { 
        [waveFormData retain]; 
        [NSThread MCSM_performBlockOnMainThread:^{ 

         UIImage *result = [UIImage imageWithData:waveFormData]; 
         NSLog(@"returning rendered pictogram on main thread (%d bytes %@ data in UIImage %0.0f x %0.0f pixels)",waveFormData.length,[imgExt uppercaseString],result.size.width,result.size.height); 

         completionBlock(result); 

         [waveFormData release]; 
        }]; 
       } 
      } 

      [assetPictogramFilepath release]; 
      [assetFileURL release]; 

     } ]; 

     return nil; 
    } 
} 

An example of calling it is as follows:

-(void) importMediaItem { 

    MPMediaItem* item = [self mediaItem]; 

    // since we will be needing this for playback, save the url to the cached audio. 
    [url release]; 
    url = [[UIImage cachedAudioURLForMPMediaItem:item] retain]; 

    [waveFormImage release]; 

    waveFormImage = [[UIImage alloc ] initWithMPMediaItem:item completionBlock:^(UIImage* delayedImagePreparation){ 

     waveFormImage = [delayedImagePreparation retain]; 
     [self displayWaveFormImage]; 
    }]; 

    if (waveFormImage) { 
     [waveFormImage retain]; 
     [self displayWaveFormImage]; 
    } 
} 
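
The images this produces are only 100 pixels high but can be thousands of pixels wide, so displayWaveFormImage (not shown in the original) needs to host them in something scrollable. A minimal sketch, assuming hypothetical scrollView and waveFormImageView ivars, might look like this:

// Hypothetical display method: drops the wide waveform image into a
// UIImageView inside a UIScrollView so the user can scroll through it.
- (void) displayWaveFormImage {
    [waveFormImageView removeFromSuperview];
    [waveFormImageView release];

    waveFormImageView = [[UIImageView alloc] initWithImage:waveFormImage];
    [scrollView addSubview:waveFormImageView];
    scrollView.contentSize = waveFormImageView.frame.size;
}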

Logarithmic versions of the averaging and render methods

#define absX(x) (x<0?0-x:x) 
#define minMaxX(x,mn,mx) (x<=mn?mn:(x>=mx?mx:x)) 
#define noiseFloor (-90.0) 
#define decibel(amplitude) (20.0 * log10(absX(amplitude)/32767.0)) 

-(UIImage *) audioImageLogGraph:(Float32 *) samples 
       normalizeMax:(Float32) normalizeMax 
       sampleCount:(NSInteger) sampleCount 
       channelCount:(NSInteger) channelCount 
       imageHeight:(float) imageHeight { 

    CGSize imageSize = CGSizeMake(sampleCount, imageHeight); 
    UIGraphicsBeginImageContext(imageSize); 
    CGContextRef context = UIGraphicsGetCurrentContext(); 

    CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor); 
    CGContextSetAlpha(context,1.0); 
    CGRect rect; 
    rect.size = imageSize; 
    rect.origin.x = 0; 
    rect.origin.y = 0; 

    CGColorRef leftcolor = [[UIColor whiteColor] CGColor]; 
    CGColorRef rightcolor = [[UIColor redColor] CGColor]; 

    CGContextFillRect(context, rect); 

    CGContextSetLineWidth(context, 1.0); 

    float halfGraphHeight = (imageHeight/2)/(float) channelCount ; 
    float centerLeft = halfGraphHeight; 
    float centerRight = (halfGraphHeight*3) ; 
    float sampleAdjustmentFactor = (imageHeight/ (float) channelCount)/(normalizeMax - noiseFloor)/2; 

    for (NSInteger intSample = 0 ; intSample < sampleCount ; intSample ++) { 
     Float32 left = *samples++; 
     float pixels = (left - noiseFloor) * sampleAdjustmentFactor; 
     CGContextMoveToPoint(context, intSample, centerLeft-pixels); 
     CGContextAddLineToPoint(context, intSample, centerLeft+pixels); 
     CGContextSetStrokeColorWithColor(context, leftcolor); 
     CGContextStrokePath(context); 

     if (channelCount==2) { 
      Float32 right = *samples++; 
      float pixels = (right - noiseFloor) * sampleAdjustmentFactor; 
      CGContextMoveToPoint(context, intSample, centerRight - pixels); 
      CGContextAddLineToPoint(context, intSample, centerRight + pixels); 
      CGContextSetStrokeColorWithColor(context, rightcolor); 
      CGContextStrokePath(context); 
     } 
    } 

    // Create new image 
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext(); 

    // Tidy up 
    UIGraphicsEndImageContext(); 

    return newImage; 
} 

- (NSData *) renderPNGAudioPictogramLogForAsset:(AVURLAsset *)songAsset { 

    NSError * error = nil; 
    AVAssetReader * reader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error]; 
    AVAssetTrack * songTrack = [songAsset.tracks objectAtIndex:0]; 

    NSDictionary* outputSettingsDict = [[NSDictionary alloc] initWithObjectsAndKeys: 
             [NSNumber numberWithInt:kAudioFormatLinearPCM],AVFormatIDKey, 
             //  [NSNumber numberWithInt:44100.0],AVSampleRateKey, /*Not Supported*/ 
             //  [NSNumber numberWithInt: 2],AVNumberOfChannelsKey, /*Not Supported*/ 

             [NSNumber numberWithInt:16],AVLinearPCMBitDepthKey, 
             [NSNumber numberWithBool:NO],AVLinearPCMIsBigEndianKey, 
             [NSNumber numberWithBool:NO],AVLinearPCMIsFloatKey, 
             [NSNumber numberWithBool:NO],AVLinearPCMIsNonInterleaved, 
             nil]; 

    AVAssetReaderTrackOutput* output = [[AVAssetReaderTrackOutput alloc] initWithTrack:songTrack outputSettings:outputSettingsDict]; 

    [reader addOutput:output]; 
    [output release]; 

    UInt32 sampleRate,channelCount; 

    NSArray* formatDesc = songTrack.formatDescriptions; 
    for(unsigned int i = 0; i < [formatDesc count]; ++i) { 
     CMAudioFormatDescriptionRef item = (CMAudioFormatDescriptionRef)[formatDesc objectAtIndex:i]; 
     const AudioStreamBasicDescription* fmtDesc = CMAudioFormatDescriptionGetStreamBasicDescription (item); 
     if(fmtDesc) { 

      sampleRate = fmtDesc->mSampleRate; 
      channelCount = fmtDesc->mChannelsPerFrame; 

      // NSLog(@"channels:%u, bytes/packet: %u, sampleRate %f",fmtDesc->mChannelsPerFrame, fmtDesc->mBytesPerPacket,fmtDesc->mSampleRate); 
     } 
    } 

    UInt32 bytesPerSample = 2 * channelCount; 
    Float32 normalizeMax = noiseFloor; 
    NSLog(@"normalizeMax = %f",normalizeMax); 
    NSMutableData * fullSongData = [[NSMutableData alloc] init]; 
    [reader startReading]; 

    UInt64 totalBytes = 0; 
    Float64 totalLeft = 0; 
    Float64 totalRight = 0; 
    Float32 sampleTally = 0; 

    NSInteger samplesPerPixel = sampleRate/50; 

    while (reader.status == AVAssetReaderStatusReading){ 

     AVAssetReaderTrackOutput * trackOutput = (AVAssetReaderTrackOutput *)[reader.outputs objectAtIndex:0]; 
     CMSampleBufferRef sampleBufferRef = [trackOutput copyNextSampleBuffer]; 

     if (sampleBufferRef){ 
      CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(sampleBufferRef); 

      size_t length = CMBlockBufferGetDataLength(blockBufferRef); 
      totalBytes += length; 

      NSAutoreleasePool *wader = [[NSAutoreleasePool alloc] init]; 

      NSMutableData * data = [NSMutableData dataWithLength:length]; 
      CMBlockBufferCopyDataBytes(blockBufferRef, 0, length, data.mutableBytes); 

      SInt16 * samples = (SInt16 *) data.mutableBytes; 
      int sampleCount = length/bytesPerSample; 
      for (int i = 0; i < sampleCount ; i ++) { 

       Float32 left = (Float32) *samples++; 
       left = decibel(left); 
       left = minMaxX(left,noiseFloor,0); 
       totalLeft += left; 

       Float32 right; 
       if (channelCount==2) { 
        right = (Float32) *samples++; 
        right = decibel(right); 
        right = minMaxX(right,noiseFloor,0); 
        totalRight += right; 
       } 

       sampleTally++; 

       if (sampleTally > samplesPerPixel) { 

        left = totalLeft/sampleTally; 
        if (left > normalizeMax) { 
         normalizeMax = left; 
        } 

        // NSLog(@"left average = %f, normalizeMax = %f",left,normalizeMax); 

        [fullSongData appendBytes:&left length:sizeof(left)]; 

        if (channelCount==2) { 
         right = totalRight/sampleTally; 

         if (right > normalizeMax) { 
          normalizeMax = right; 
         } 

         [fullSongData appendBytes:&right length:sizeof(right)]; 
        } 

        totalLeft = 0; 
        totalRight = 0; 
        sampleTally = 0; 
       } 
      } 

      [wader drain]; 

      CMSampleBufferInvalidate(sampleBufferRef); 
      CFRelease(sampleBufferRef); 
     } 
    } 

    NSData * finalData = nil; 

    if (reader.status == AVAssetReaderStatusFailed || reader.status == AVAssetReaderStatusUnknown){ 
     // Something went wrong. Handle it. 
    } 

    if (reader.status == AVAssetReaderStatusCompleted){ 
     // You're done. It worked. 

     NSLog(@"rendering output graphics using normalizeMax %f",normalizeMax); 

     UIImage *test = [self audioImageLogGraph:(Float32 *) fullSongData.bytes 
           normalizeMax:normalizeMax 
            sampleCount:fullSongData.length/(sizeof(Float32) * 2) 
           channelCount:2 
            imageHeight:100]; 

     finalData = imageToData(test); 
    } 

    [fullSongData release]; 
    [reader release]; 

    return finalData; 
} 

Comparison output

Linear
Linear plot of the start of "Warm It Up" by the Acme Swing Company

logarithmic
Logarithmic plot of the start of "Warm It Up" by the Acme Swing Company


This is the start of a seriously complete and useful answer. It is practically a tutorial in itself; you might consider putting it on a blog or somewhere similar. I would give you 10 upvotes if I could. – 2011-10-26 02:26:03