Creating an AudioBuffer/audio from NSData. I'm a beginner with streaming applications. I create NSData from an AudioBuffer and send the NSData to the client (receiver), but I don't know how to convert the NSData back into an AudioBuffer. How do I create an audio buffer from NSData?

I'm using the code below to convert an AudioBuffer to NSData (this part works fine):

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // NOTE: this ASBD is never actually used below. It is also internally
    // inconsistent: kAudioFormatiLBC is a compressed codec, while the flags
    // describe packed, signed, big-endian linear PCM.
    AudioStreamBasicDescription audioFormat;
    memset(&audioFormat, 0, sizeof(audioFormat));
    audioFormat.mSampleRate = 8000.0;
    audioFormat.mFormatID = kAudioFormatiLBC;
    audioFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked | kAudioFormatFlagIsAlignedHigh;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mReserved = 0;
    audioFormat.mBytesPerFrame = audioFormat.mBytesPerPacket = audioFormat.mChannelsPerFrame * sizeof(SInt16);

    AudioBufferList audioBufferList;
    NSMutableData *data = [[NSMutableData alloc] init];
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

    for (int y = 0; y < audioBufferList.mNumberBuffers; y++)
    {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        // The cast is only cosmetic here; the bytes are appended verbatim.
        Float32 *frame = (Float32 *)audioBuffer.mData;
        [data appendBytes:frame length:audioBuffer.mDataByteSize];
    }

    // The block buffer was retained on our behalf and must be released.
    CFRelease(blockBuffer);
}

If this isn't the right approach, please help me out. Thanks.

Hey, were you able to do this? If so, please post your solution. Thanks, I'm struggling with the same problem. – moenad

@Sojan - Were you able to convert the data back to a CMSampleBufferRef somehow? Or could you point me to any resources/approaches that worked for you? –

Answers

Here is the code I used to convert my audio data (an audio file) into a floating-point representation and save it into an array. First I read the audio data into an AudioBufferList, then take the float values of the samples. Check the code below; I hope it helps:

-(void) PrintFloatDataFromAudioFile {

    NSString *name = @"Filename"; // YOUR FILE NAME
    NSString *source = [[NSBundle mainBundle] pathForResource:name ofType:@"m4a"]; // SPECIFY YOUR FILE FORMAT

    const char *cString = [source cStringUsingEncoding:NSASCIIStringEncoding];

    CFStringRef str = CFStringCreateWithCString(NULL, cString, kCFStringEncodingMacRoman);
    CFURLRef inputFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, str, kCFURLPOSIXPathStyle, false);

    ExtAudioFileRef fileRef;
    ExtAudioFileOpenURL(inputFileURL, &fileRef);

    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = 44100; // GIVE YOUR SAMPLING RATE
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat;
    audioFormat.mBitsPerChannel = sizeof(Float32) * 8;
    audioFormat.mChannelsPerFrame = 1; // Mono
    audioFormat.mBytesPerFrame = audioFormat.mChannelsPerFrame * sizeof(Float32); // == sizeof(Float32)
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mBytesPerPacket = audioFormat.mFramesPerPacket * audioFormat.mBytesPerFrame; // == sizeof(Float32)

    // Apply the client data format to the Extended Audio File, so reads are
    // converted to 32-bit float mono regardless of the file's own format.
    ExtAudioFileSetProperty(fileRef,
                            kExtAudioFileProperty_ClientDataFormat,
                            sizeof(AudioStreamBasicDescription),
                            &audioFormat);

    int numSamples = 1024; // How many samples to read in at a time
    UInt32 sizePerPacket = audioFormat.mBytesPerPacket; // = sizeof(Float32) = 4 bytes
    UInt32 packetsPerBuffer = numSamples;
    UInt32 outputBufferSize = packetsPerBuffer * sizePerPacket;

    // Reserve space for one buffer's worth of converted samples.
    UInt8 *outputBuffer = (UInt8 *)malloc(sizeof(UInt8) * outputBufferSize);

    AudioBufferList convertedData;
    convertedData.mNumberBuffers = 1; // Set this to 1 for mono
    convertedData.mBuffers[0].mNumberChannels = audioFormat.mChannelsPerFrame; // also = 1
    convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
    convertedData.mBuffers[0].mData = outputBuffer;

    UInt32 frameCount = numSamples;
    float *samplesAsCArray;
    int j = 0;
    // SPECIFY YOUR DATA LIMIT; MINE WAS 882000. It should be at least as large
    // as the total sample count. Allocated on the heap, since ~7 MB of doubles
    // would overflow the stack.
    double *floatDataArray = (double *)malloc(882000 * sizeof(double));

    while (frameCount > 0) {
        ExtAudioFileRead(fileRef, &frameCount, &convertedData);
        if (frameCount > 0) {
            AudioBuffer audioBuffer = convertedData.mBuffers[0];
            samplesAsCArray = (float *)audioBuffer.mData; // CAST YOUR mData INTO FLOAT

            // Only frameCount samples are valid on the last, partial read.
            for (int i = 0; i < frameCount; i++) {
                floatDataArray[j] = (double)samplesAsCArray[i]; // PUT YOUR DATA INTO THE FLOAT ARRAY
                printf("\n%f", floatDataArray[j]); // PRINT THE ARRAY'S DATA, RANGING FROM -1 TO +1
                j++;
            }
        }
    }

    free(floatDataArray);
    free(outputBuffer);
    ExtAudioFileDispose(fileRef);
    CFRelease(inputFileURL);
    CFRelease(str);
}
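
A possible variant (a sketch, reusing the fileRef and convertedData set up in the method above): accumulate the converted samples in an NSMutableData instead of a fixed-size array, so the total length does not need to be known in advance.

NSMutableData *floatData = [NSMutableData data];
UInt32 frameCount = 1024; // read up to 1024 frames per pass, as above
while (frameCount > 0) {
    ExtAudioFileRead(fileRef, &frameCount, &convertedData);
    if (frameCount > 0) {
        // Append only the frames actually read on this pass.
        [floatData appendBytes:convertedData.mBuffers[0].mData
                        length:frameCount * sizeof(Float32)];
    }
}
const Float32 *samples = (const Float32 *)floatData.bytes;
NSUInteger sampleCount = floatData.length / sizeof(Float32);
// samples[0..sampleCount-1] now hold the whole file as floats in -1..+1.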
Thanks for your answer –

AudioBuffer to NSData works fine. But my problem is how to convert the NSData back to audio / play it as audio –

I used the following snippet to convert NSData (in my case 800-byte packets, but arguably it could be any size) to an AudioBufferList:

-(AudioBufferList *) getBufferListFromData: (NSData *) data
{
    if (data.length > 0)
    {
        NSUInteger len = [data length];
        // Byte*, void* or Float32* would all work here; the bytes are copied verbatim.
        Byte *byteData = (Byte *)malloc(len);
        if (byteData)
        {
            memcpy(byteData, [data bytes], len);
            AudioBufferList *theDataBuffer = (AudioBufferList *)malloc(sizeof(AudioBufferList));
            theDataBuffer->mNumberBuffers = 1;
            theDataBuffer->mBuffers[0].mDataByteSize = (UInt32)len;
            theDataBuffer->mBuffers[0].mNumberChannels = 1;
            theDataBuffer->mBuffers[0].mData = byteData;
            // The data has been read into an AudioBufferList.
            return theDataBuffer;
        }
    }
    return NULL;
}
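
A hypothetical call site (packetData is just a placeholder name for your received NSData): both the returned list and its mData are malloc'd by the method above, so the caller owns them and must free both.

AudioBufferList *list = [self getBufferListFromData:packetData];
if (list) {
    // ... hand the list to an audio unit, an AudioConverter, etc. ...
    free(list->mBuffers[0].mData); // free the copied audio bytes first,
    free(list);                    // then the list itself
}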
And how do you play the AudioBufferList? Or convert it back into an audio buffer? –

I can play the audio, but the voice isn't clear. How can I deal with that? –

You can create NSData from the CMSampleBufferRef using the code below, and then play it with AVAudioPlayer.

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    AudioBufferList audioBufferList;
    NSMutableData *data = [NSMutableData data];
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

    for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        Float32 *frame = (Float32 *)audioBuffer.mData;
        [data appendBytes:frame length:audioBuffer.mDataByteSize];
    }

    // Release the block buffer retained by the call above.
    CFRelease(blockBuffer);

    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithData:data error:nil];
    [player play];
}
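
One caveat: AVAudioPlayer expects data in a container format it can parse (WAV, CAF, MP3, ...), not raw PCM bytes. That matches the OSStatus 1954115647 reported in the comments below, which is the FourCC 'typ?', i.e. kAudioFileUnsupportedFileTypeError. A minimal sketch of one workaround, assuming the captured stream is 16-bit little-endian linear PCM (check your stream's ASBD), is to prepend a 44-byte WAV header before handing the data to AVAudioPlayer. WAVDataFromPCM is a hypothetical helper, not part of any answer here:

// Sketch: wrap raw 16-bit little-endian PCM in a minimal 44-byte WAV header.
// Field values are written directly, which is correct on little-endian CPUs
// (all iOS devices).
static NSData *WAVDataFromPCM(NSData *pcm, uint32_t sampleRate,
                              uint16_t channels, uint16_t bitsPerSample)
{
    uint32_t dataSize   = (uint32_t)pcm.length;
    uint32_t byteRate   = sampleRate * channels * bitsPerSample / 8;
    uint16_t blockAlign = channels * bitsPerSample / 8;
    uint32_t riffSize   = 36 + dataSize;
    uint32_t fmtSize    = 16;
    uint16_t pcmTag     = 1; // 1 = linear PCM

    NSMutableData *wav = [NSMutableData dataWithCapacity:44 + pcm.length];
    [wav appendBytes:"RIFF" length:4];
    [wav appendBytes:&riffSize length:4];
    [wav appendBytes:"WAVE" length:4];
    [wav appendBytes:"fmt " length:4];
    [wav appendBytes:&fmtSize length:4];
    [wav appendBytes:&pcmTag length:2];
    [wav appendBytes:&channels length:2];
    [wav appendBytes:&sampleRate length:4];
    [wav appendBytes:&byteRate length:4];
    [wav appendBytes:&blockAlign length:2];
    [wav appendBytes:&bitsPerSample length:2];
    [wav appendBytes:"data" length:4];
    [wav appendBytes:&dataSize length:4];
    [wav appendData:pcm];
    return wav;
}

// Usage (sketch): AVAudioPlayer can then parse the header:
// NSData *wavData = WAVDataFromPCM(data, 44100, 1, 16);
// AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithData:wavData error:nil];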
Did anyone get this to work? AVAudioPlayer with the above data returns nil with the following error: 'Error Domain=NSOSStatusErrorDomain Code=1954115647 "The operation couldn't be completed. (OSStatus error 1954115647.)"' Do you have a fix? –

I get the same error: Error Domain=NSOSStatusErrorDomain Code=1954115647 "(null)" when trying to initialize the player. Sometimes it manages to initialize the player, but then I can't hear any sound. Any hints? – Vincenzo

This is how I did it, in case anyone else is stuck on the same issue. You don't need to pull the data out of the AudioBufferList; use it as it is. In order to re-create the AudioBufferList from the NSData later, I also needed the sample count, so I prepended it to the actual data.

Here is how to get the data out of the CMSampleBufferRef:

AudioBufferList audioBufferList;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);
NSUInteger size = sizeof(audioBufferList);
char buffer[size + 4];
((int *)buffer)[0] = (int)numSamples;
memcpy(buffer + 4, &audioBufferList, size);
// This is the audio data. Note that it copies the AudioBufferList struct
// itself, including its mData pointer, so it is only meaningful within this
// process and only while blockBuffer is kept alive (release it when done).
NSData *bufferData = [NSData dataWithBytes:buffer length:size + 4];

And this is how you create the CMSampleBufferRef back out of this data:

const char *buffer = (const char *)[bufferData bytes];

CMSampleBufferRef sampleBuffer = NULL;
OSStatus status = -1;

/* Format description */
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 44100.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = 0xc; // kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked
audioFormat.mBytesPerPacket = 2;
audioFormat.mFramesPerPacket = 1;
audioFormat.mBytesPerFrame = 2;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mReserved = 0;

CMFormatDescriptionRef format = NULL;
status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, nil, 0, nil, nil, &format);
if (status != noErr)
{
    NSLog(@"Error in CMAudioFormatDescriptionCreate");
    return;
}

/* Create the sample buffer */
CMSampleTimingInfo timing = {.duration = CMTimeMake(1, 44100), .presentationTimeStamp = kCMTimeZero, .decodeTimeStamp = kCMTimeInvalid};
CMItemCount framesCount = ((const int *)buffer)[0]; // sample count prepended above

status = CMSampleBufferCreate(kCFAllocatorDefault, nil, NO, nil, nil, format, framesCount, 1, &timing, 0, nil, &sampleBuffer);
if (status != noErr)
{
    NSLog(@"Error in CMSampleBufferCreate");
    return;
}

/* Copy the AudioBufferList into the sample buffer */
AudioBufferList receivedAudioBufferList;
memcpy(&receivedAudioBufferList, buffer + 4, sizeof(receivedAudioBufferList));

status = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer, kCFAllocatorDefault, kCFAllocatorDefault, 0, &receivedAudioBufferList);
if (status != noErr) {
    NSLog(@"Error in CMSampleBufferSetDataBufferFromAudioBufferList");
    return;
}
// Use your sampleBuffer.

Let me know if you run into any issues.
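
One way to actually hear the reconstructed buffer, as a sketch assuming iOS 11+ APIs, is to enqueue it on an AVSampleBufferAudioRenderer driven by a render synchronizer:

// Sketch: play a reconstructed CMSampleBufferRef (iOS 11+).
AVSampleBufferAudioRenderer *renderer = [[AVSampleBufferAudioRenderer alloc] init];
AVSampleBufferRenderSynchronizer *synchronizer = [[AVSampleBufferRenderSynchronizer alloc] init];
[synchronizer addRenderer:renderer];
[renderer enqueueSampleBuffer:sampleBuffer];
[synchronizer setRate:1.0 time:kCMTimeZero];

The local variables are only for brevity; in real code the renderer and synchronizer need to be kept alive (e.g. as strong properties) for playback to continue.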

Why 'size + 4'? Why do you add the 4? –

I believe I was using the first 4 bytes of the buffer to carry an int with some extra information. –

Could you take a look at my related [question](https://stackoverflow.com/questions/46908485/deep-copy-of-audio-cmsamplebuffer), which has a bounty enabled? –