
I need to stream audio in an iOS app using Objective-C. I am using the AVFoundation framework to capture raw data from the microphone and send it to a server, but the raw data that arrives is corrupted. My code is below.

Please suggest where I am going wrong.

// Configure the capture session and the desired PCM record settings.
session = [[AVCaptureSession alloc] init];

NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
           [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
           [NSNumber numberWithFloat:16000.0], AVSampleRateKey,
           [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
           [NSNumber numberWithInt:32], AVLinearPCMBitDepthKey,
           [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
           [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
           [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
           nil];

// Attach the default microphone as the session's audio input.
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];
[session addInput:audioInput];

// Deliver sample buffers to this object on a serial background queue.
AVCaptureAudioDataOutput *audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
dispatch_queue_t audioQueue = dispatch_queue_create("AudioQueue", NULL);
[audioDataOutput setSampleBufferDelegate:self queue:audioQueue];

// Note: this writer input is created with the record settings but is never
// attached to an AVAssetWriter or fed any samples.
AVAssetWriterInput *_assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:recordSettings];
_assetWriterVideoInput.performsMultiPassEncodingIfSupported = YES;

if ([session canAddOutput:audioDataOutput]) {
    [session addOutput:audioDataOutput];
}
[session startRunning];
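
As an aside (not part of the original post): on iOS the capture hardware's format is negotiated through AVAudioSession rather than through a settings dictionary on the data output, so if 16 kHz mono capture is the goal, a session hint along these lines may help. A minimal sketch; these calls are only preferences that the system is free to override:

#import <AVFoundation/AVFoundation.h>

// Sketch: ask the hardware for 16 kHz mono input before starting the session.
// Query AVAudioSession afterwards for the values actually granted.
static void ConfigurePreferredAudioFormat(void) {
    NSError *error = nil;
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    [audioSession setPreferredSampleRate:16000.0 error:&error];
    [audioSession setPreferredInputNumberOfChannels:1 error:&error];
    [audioSession setActive:YES error:&error];
}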

The capture callback:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    AudioBufferList audioBufferList;
    NSMutableData *data = [NSMutableData data];
    CMBlockBufferRef blockBuffer;
    // Copy the sample buffer's audio into an AudioBufferList we can walk.
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

    for (UInt32 y = 0; y < audioBufferList.mNumberBuffers; y++) {
        // Append this buffer's raw bytes and log the accumulated data as base64.
        // The Float32 cast is only pointer convenience; bytes are copied verbatim.
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        Float32 *frame = (Float32 *)audioBuffer.mData;
        [data appendBytes:frame length:audioBuffer.mDataByteSize];

        NSString *base64Encoded = [data base64EncodedStringWithOptions:0];
        NSLog(@"Encoded: %@", base64Encoded);
    }

    CFRelease(blockBuffer);
}
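
One detail worth verifying in a callback like this (an observation, not something stated in the original post): the recordSettings above are only attached to the unused AVAssetWriterInput, so the sample buffers arrive in whatever native PCM layout the capture pipeline chooses, which may not be the 32-bit little-endian integer format the server expects. The buffer's real layout can be read from its format description; the helper below is a hypothetical sketch:

#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>

// Hypothetical helper: log the PCM layout a sample buffer actually carries,
// e.g. called from the top of captureOutput:didOutputSampleBuffer:fromConnection:.
static void LogSampleBufferFormat(CMSampleBufferRef sampleBuffer) {
    CMFormatDescriptionRef format = CMSampleBufferGetFormatDescription(sampleBuffer);
    const AudioStreamBasicDescription *asbd =
        CMAudioFormatDescriptionGetStreamBasicDescription(format);
    if (asbd != NULL) {
        NSLog(@"sampleRate=%.0f channels=%u bitsPerChannel=%u formatFlags=0x%x",
              asbd->mSampleRate,
              (unsigned)asbd->mChannelsPerFrame,
              (unsigned)asbd->mBitsPerChannel,
              (unsigned)asbd->mFormatFlags);
    }
}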

Answer


I've posted a sample of the kind of code you need to make this work. Its approach is much the same as yours, and you should find it easy to read.

The app uses an AudioUnit to record microphone input and play it through the speaker, NSNetService to connect two iOS devices on a network, and NSStream to send the audio stream between the devices.
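
For orientation, here is a minimal sketch of the networking half of that design, assuming one device publishes a Bonjour service and the OS hands back the stream pair when a peer connects. The service type "_myaudio._tcp." and the class name are illustrative assumptions, not names taken from the linked project:

#import <Foundation/Foundation.h>

// Sketch: publish a Bonjour service and accept a connected NSStream pair.
@interface AudioStreamServer : NSObject <NSNetServiceDelegate>
@property (nonatomic, strong) NSNetService *service;
@property (nonatomic, strong) NSInputStream *inputStream;   // incoming audio bytes
@property (nonatomic, strong) NSOutputStream *outputStream; // outgoing audio bytes
@end

@implementation AudioStreamServer

- (void)start {
    // Port 0 lets the system pick a free port; listening is enabled below.
    self.service = [[NSNetService alloc] initWithDomain:@"local."
                                                   type:@"_myaudio._tcp."
                                                   name:@""
                                                   port:0];
    self.service.delegate = self;
    [self.service publishWithOptions:NSNetServiceListenForConnections];
}

// Called when a peer that resolved this service opens a connection to it.
- (void)netService:(NSNetService *)sender
    didAcceptConnectionWithInputStream:(NSInputStream *)inputStream
                          outputStream:(NSOutputStream *)outputStream {
    self.inputStream = inputStream;
    self.outputStream = outputStream;
    [self.inputStream scheduleInRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
    [self.outputStream scheduleInRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
    [self.inputStream open];
    [self.outputStream open];
    // From here, PCM captured by the AudioUnit can be written to
    // self.outputStream and rendered from self.inputStream on the peer.
}

@end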

You can download the source code here:

https://drive.google.com/open?id=1tKgVl0X92SYvgpvbljRzilXNQ6iBcjqM

It requires the latest Xcode 9 beta to compile, and the latest iOS 11 beta to run.

Note: a log entry for every method call and event is displayed in a text field that fills the entire screen; there is no interactive interface (no buttons, etc.). After installing the app on two iOS devices, simply launch it on both; they will automatically connect over your network and start streaming audio.
