
**RemoteIO audio problem - simulator = good - device = bad**

OK, so I'm using Core Audio to pull audio from 10 different sample sources and then mix them together in my callback function.

It works perfectly in the simulator; everything is fine. The trouble starts when I try to run it on a 4.2 iPhone device.

If I mix 2 audio files in the callback, everything works. If I mix 5 or 6 audio files, the audio plays, but after a very short time it degrades and eventually no audio reaches the speakers (the callback keeps running, though).

If I try to mix 10 audio files, the callback runs but no audio comes out at all.

It's almost as if the callback is running out of time, which might explain the 5-or-6-file case, but it wouldn't explain the last case, mixing 10 audio sources, where no audio plays at all.

I'm not sure whether the following has any bearing on it, but this message is always printed to the console while I'm debugging. Could it indicate what the problem is?

mem 0x1000 0x3fffffff cache 
mem 0x40000000 0xffffffff none 
mem 0x00000000 0x0fff none 
run 
Running… 
[Switching to thread 11523] 
[Switching to thread 11523] 
Re-enabling shared library breakpoint 1 
continue 
warning: Unable to read symbols for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1 (8C148)/Symbols/usr/lib/info/dns.so (file not found). 

**Setting up my callback**

#pragma mark -
#pragma mark Callback setup & control

- (void) setupCallback
{
    OSStatus status;

    // Describe the audio component (RemoteIO output unit)
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio unit
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);

    // Enable IO for playback
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output,
                                  kOutputBus,
                                  &flag,
                                  sizeof(flag));

    // Apply the stream format to the input scope of the output bus
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  kOutputBus,
                                  &stereoStreamFormat,
                                  sizeof(stereoStreamFormat));

    // Set up the playback callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = playbackCallback; // !!**** "assignment from incompatible pointer" warning here ****!!
    // Set the reference to "self"; this becomes *inRefCon in the playback callback
    callbackStruct.inputProcRefCon = self;

    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global,
                                  kOutputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));

    // Initialise
    status = AudioUnitInitialize(audioUnit); // every status above should be error-checked
}
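A note on the warning flagged above: it usually means the callback's signature doesn't exactly match Apple's AURenderCallback typedef, and the common culprit is a missing const on the AudioTimeStamp parameter. For reference, the typedef (from AUComponent.h) is:

typedef OSStatus (*AURenderCallback)(void                       *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp       *inTimeStamp,
                                     UInt32                     inBusNumber,
                                     UInt32                     inNumberFrames,
                                     AudioBufferList            *ioData);

Matching this signature exactly (note the const) is what silences the warning.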

**The callback**

static OSStatus playbackCallback (
        void                       *inRefCon,        // A pointer to the object holding the complete audio data
                                                     // to play, as well as state information such as the first
                                                     // sample to play on this invocation of the callback.
        AudioUnitRenderActionFlags *ioActionFlags,   // Unused here. When generating audio, use ioActionFlags to
                                                     // indicate silence between sounds; for silence, also memset
                                                     // the ioData buffers to 0.
        const AudioTimeStamp       *inTimeStamp,     // Unused here. (Must be const to match AURenderCallback.)
        UInt32                      inBusNumber,     // The bus that is requesting some new frames of audio to play.
        UInt32                      inNumberFrames,  // The number of frames of audio to provide to the buffer(s)
                                                     // pointed to by the ioData parameter.
        AudioBufferList            *ioData           // On output, the audio data to play. The callback's primary
                                                     // responsibility is to fill the buffer(s) in the AudioBufferList.
        )
{
    Engine *remoteIOplayer = (Engine *)inRefCon;
    AudioUnitSampleType *outSamplesChannelLeft  = (AudioUnitSampleType *) ioData->mBuffers[0].mData;
    AudioUnitSampleType *outSamplesChannelRight = (AudioUnitSampleType *) ioData->mBuffers[1].mData;

    int thetime = remoteIOplayer.sampletime;

    for (UInt32 frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber)
    {
        // Per-frame mix accumulators.
        AudioUnitSampleType suml = 0;
        AudioUnitSampleType sumr = 0;

        for (int j = 0; j < 10; j++)
        {
            // Two Objective-C message sends per source, per frame.
            AudioUnitSampleType valuetoaddl = [remoteIOplayer getNonInterleavedSample:j currenttime:thetime channel:0];
            AudioUnitSampleType valuetoaddr = [remoteIOplayer getNonInterleavedSample:j currenttime:thetime channel:1];

            // Scale each source down by the number of sources so the sum stays in range.
            suml += valuetoaddl / 10;
            sumr += valuetoaddr / 10;
        }

        outSamplesChannelLeft[frameNumber]  = suml;
        outSamplesChannelRight[frameNumber] = sumr;

        remoteIOplayer.sampletime += 1;
    }

    return noErr;
}
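A detail of the sample format that matters for the divide-by-10 above: on iOS, AudioUnitSampleType is (as far as I know) the canonical 8.24 fixed-point SInt32, so an integer divide per source is a cheap way to build in headroom. A minimal illustration, using the kAudioUnitSampleFractionBits constant from the CoreAudio headers:

// On iOS, AudioUnitSampleType is an SInt32 holding 8.24 fixed point:
// 8 integer bits on top, 24 fractional bits below.
AudioUnitSampleType fullScale = 1 << kAudioUnitSampleFractionBits; // +1.0
AudioUnitSampleType perSource = fullScale / 10;                    // 0.1 per source
// Ten sources at 0.1 each sum back to at most +1.0, so the mix cannot
// exceed full scale.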

**My sound fetching function**

-(AudioUnitSampleType) getNonInterleavedSample:(int)index currenttime:(int)time channel:(int)ch
{
    AudioUnitSampleType returnvalue = 0;

    soundStruct snd = soundStructArray[index];
    UInt64 sn = snd.frameCount;
    // Note: the "time" parameter is not actually used; the instance variable
    // "sampletime" (incremented once per frame by the callback) is read instead.
    UInt64 st = sampletime;
    UInt64 read = (UInt64)(st % sn); // wrap the play position at the end of the source

    if (ch == 0)
    {
        if (snd.sendvalue == 1) {
            returnvalue = snd.audioDataLeft[read];
        } else {
            returnvalue = 0;
        }
    }
    else if (ch == 1)
    {
        if (snd.sendvalue == 1) {
            returnvalue = snd.audioDataRight[read];
        } else {
            returnvalue = 0;
        }

        // Bookkeeping happens only on the right-channel call.
        soundStructArray[index].sampleNumber = read;
    }

    // Defensive reset; given the modulo above, sampleNumber should never
    // actually exceed frameCount.
    if (soundStructArray[index].sampleNumber > soundStructArray[index].frameCount)
    {
        soundStructArray[index].sampleNumber = 0;
    }

    return returnvalue;
}

**Edit 1**

In response to @andre I changed my callback to the following, but it still didn't help.

static OSStatus playbackCallback (
        void                       *inRefCon,        // A pointer to the object holding the complete audio data
                                                     // to play, as well as state information such as the first
                                                     // sample to play on this invocation of the callback.
        AudioUnitRenderActionFlags *ioActionFlags,   // Unused here.
        const AudioTimeStamp       *inTimeStamp,     // Unused here. (Must be const to match AURenderCallback.)
        UInt32                      inBusNumber,     // The bus that is requesting some new frames of audio to play.
        UInt32                      inNumberFrames,  // The number of frames of audio to provide to the buffer(s)
                                                     // pointed to by the ioData parameter.
        AudioBufferList            *ioData           // On output, the audio data to play.
        )
{
    Engine *remoteIOplayer = (Engine *)inRefCon;
    AudioUnitSampleType *outSamplesChannelLeft  = (AudioUnitSampleType *) ioData->mBuffers[0].mData;
    AudioUnitSampleType *outSamplesChannelRight = (AudioUnitSampleType *) ioData->mBuffers[1].mData;

    for (UInt32 frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber)
    {
        AudioUnitSampleType suml = 0;
        AudioUnitSampleType sumr = 0;

        for (int j = 0; j < 16; j++)
        {
            // Reach the sound data through the C struct instead of an
            // Objective-C accessor. (This copies the struct each time;
            // taking a pointer would avoid the copy.)
            soundStruct snd = remoteIOplayer->soundStructArray[j];
            UInt64 sn = snd.frameCount;
            // Note: the .sampletime property read is still an Objective-C
            // message send, once per source, per frame.
            UInt64 st = remoteIOplayer.sampletime;
            UInt64 read = (UInt64)(st % sn);

            // No per-source attenuation here: summing this many full-scale
            // sources can push the mix past full scale and clip.
            suml += snd.audioDataLeft[read];
            sumr += snd.audioDataRight[read];
        }

        outSamplesChannelLeft[frameNumber]  = suml;
        outSamplesChannelRight[frameNumber] = sumr;

        remoteIOplayer.sampletime += 1;
    }

    return noErr;
}

**Answers**

  1. Like Andre said, it's best not to have any Objective-C method calls inside the callback. You should also change the inputProcRefCon to a C struct rather than an Objective-C object (see the sketch after this list).

  2. Also, it looks like you may be copying data into the buffers "manually", frame by frame. Instead, use memcpy to copy a big block of data at once (one way to do this is shown in the ring-buffer sketch under the next answer).

  3. Also, I'm fairly sure you aren't doing disk I/O in the callback, but if you are, you shouldn't be.
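For point 1, a minimal sketch of what a C-struct refCon could look like; the MixerState name and the wiring are hypothetical, not from the question:

// Hypothetical plain-C state block to hand to the render callback.
// Because it is not an Objective-C object, the render thread never
// has to go through objc_msgSend to reach the audio data.
typedef struct {
    soundStruct *sources;     // e.g. points at soundStructArray
    int          sourceCount; // e.g. 10
    UInt64       sampletime;  // play position, owned by the callback
} MixerState;

// In setupCallback (replacing callbackStruct.inputProcRefCon = self):
//     static MixerState mixerState;
//     mixerState.sources     = soundStructArray;
//     mixerState.sourceCount = 10;
//     mixerState.sampletime  = 0;
//     callbackStruct.inputProcRefCon = &mixerState;

// In the callback (replacing the Engine cast):
//     MixerState *state = (MixerState *)inRefCon;
//     soundStruct *snd  = &state->sources[j];
//     UInt64 read = state->sampletime % snd->frameCount;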


I assume you're CPU-bound; the simulator has far more processing power than the various devices.

The callback may simply not be keeping up with the rate at which it is being called.

Edit: could you "pre-compute" the mix (ahead of time, or on another thread) so that it's already mixed by the time the callback fires, and the callback has less work to do?
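A minimal sketch of that pre-computed approach, with all names hypothetical: a worker thread keeps a ring buffer of already-mixed frames, and the render callback only memcpys out of it (which is also the kind of bulk copy point 2 of the first answer is talking about):

#include <string.h>               // memcpy
#include <AudioUnit/AudioUnit.h>  // AudioUnitSampleType, AudioBufferList

// Hypothetical pre-mixed ring buffer. A worker thread mixes the sources
// into ringL/ringR ahead of time; the callback only copies.
#define kRingFrames 8192 // power of two, so wrapping is a cheap mask

static AudioUnitSampleType ringL[kRingFrames];
static AudioUnitSampleType ringR[kRingFrames];
static volatile UInt32 readIndex = 0; // frames consumed by the callback

static OSStatus premixedCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    AudioUnitSampleType *outL = (AudioUnitSampleType *)ioData->mBuffers[0].mData;
    AudioUnitSampleType *outR = (AudioUnitSampleType *)ioData->mBuffers[1].mData;

    UInt32 r     = readIndex & (kRingFrames - 1);
    UInt32 first = inNumberFrames;
    if (first > kRingFrames - r) first = kRingFrames - r; // frames before wrap

    // Bulk copies instead of a per-frame loop.
    memcpy(outL, &ringL[r], first * sizeof(AudioUnitSampleType));
    memcpy(outR, &ringR[r], first * sizeof(AudioUnitSampleType));
    // Wrap-around remainder (zero-length copies if there is no wrap).
    memcpy(outL + first, ringL, (inNumberFrames - first) * sizeof(AudioUnitSampleType));
    memcpy(outR + first, ringR, (inNumberFrames - first) * sizeof(AudioUnitSampleType));

    readIndex += inNumberFrames;
    return noErr;
}

// A real version also needs a writeIndex advanced by the mixer thread, and
// should output silence when the mixer falls behind (underrun) instead of
// re-reading stale frames.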


I had an inkling it might be something like this, but for the life of me I can't figure out how to change the number of frames per callback. – dubbeat 2010-12-01 15:00:07


I don't believe you can control the number of frames the callback is asked to fill in (inNumberFrames). In theory you have to be prepared for that value to vary (in practice, I've always seen it be 512). – 2010-12-01 16:03:14
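For what it's worth, the iOS 4-era Audio Session C API does accept a buffer-duration preference; it's only a hint, so the callback must still handle whatever inNumberFrames arrives. A minimal sketch, assuming the audio session has already been initialised:

// Request ~23 ms of hardware IO buffer (about 1024 frames at 44.1 kHz).
// The system may round or ignore the preference, so this only influences,
// not fixes, the inNumberFrames the render callback sees.
Float32 preferredDuration = 0.023f;
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                        sizeof(preferredDuration),
                        &preferredDuration);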


In my experience, try not to have any Objective-C method calls inside the RemoteIO callback; they will slow it down. Try moving the "getNonInterleavedSample" work into the callback and using a C struct to access the audio data.