
iOS - Generating and playing an indefinite, simple audio (sine wave)

I'm looking to build an incredibly simple application for iOS with a button that starts and stops an audio signal. The signal is just going to be a sine wave, and it will check my model (an instance variable for the volume) throughout playback and change its volume accordingly.

My difficulty has to do with the indefinite nature of the task. I understand how to build tables, fill them with data, respond to buttons, and so on; however, when it comes to having something simply continue indefinitely (in this case, a sound), I'm a little stuck! Any pointers would be terrific!

Thanks for reading.


It may be that AVAudioPlayer is what I need to get started... – Rogare


AVAudioPlayer would be the way to go if you just want to play a pre-created sine wave sound file (you'd be able to control the volume, but nothing else, e.g. the frequency). – admsyn

Answer


Here's a bare-bones application which will play a generated frequency on demand. You haven't specified whether to do iOS or OS X, so I've gone for OS X since it's slightly simpler (no messing with Audio Session categories). If you need iOS, you'll be able to find the missing bits by looking into the Audio Session category basics and swapping the Default Output audio unit for the RemoteIO audio unit.
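If you do need the iOS variant, a rough sketch of those two changes follows (this is not part of the original answer; the AVAudioSession calls are my assumption of a minimal session setup):

// iOS sketch: set up a playback audio session, then ask for the RemoteIO unit
// instead of the Default Output unit. Everything else (AudioComponentFindNext,
// stream format, render callback, AudioOutputUnitStart) stays the same as in
// the OS X example below.
#import <AVFoundation/AVFoundation.h>

NSError *sessionError = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:&sessionError];
[[AVAudioSession sharedInstance] setActive:YES error:&sessionError];

AudioComponentDescription outputUnitDescription = {
    .componentType         = kAudioUnitType_Output,
    .componentSubType      = kAudioUnitSubType_RemoteIO,   // RemoteIO rather than DefaultOutput
    .componentManufacturer = kAudioUnitManufacturer_Apple
};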

Note that the intention of this is purely to demonstrate some Core Audio / Audio Unit basics. If you want to get more complex than this you should probably look into the AUGraph API. (Also, to keep the example clean I'm not doing any error checking. Always do error checking when dealing with Core Audio.)
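If you do add error checking, one common pattern (my own sketch, not from the original answer) is a small helper that decodes the OSStatus as a four-character code and bails out:

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

// Prints a readable version of a Core Audio OSStatus (many of them are
// four-character codes) and exits. Wrap every Core Audio call in it.
static void CheckError(OSStatus error, const char *operation)
{
    if (error == noErr) return;

    char errorString[20];
    *(UInt32 *)(errorString + 1) = CFSwapInt32HostToBig((UInt32)error);
    if (isprint(errorString[1]) && isprint(errorString[2]) &&
        isprint(errorString[3]) && isprint(errorString[4])) {
        errorString[0] = errorString[5] = '\'';
        errorString[6] = '\0';
    } else {
        sprintf(errorString, "%d", (int)error);
    }
    fprintf(stderr, "Error: %s (%s)\n", operation, errorString);
    exit(1);
}

// Usage:
//     CheckError(AudioUnitInitialize(outputUnit), "AudioUnitInitialize failed");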

You'll need to add the AudioToolbox and AudioUnit frameworks to your project to use this code.

#import <Cocoa/Cocoa.h>        // for NSApplicationDelegate
#import <AudioToolbox/AudioToolbox.h>

@interface SWAppDelegate : NSObject <NSApplicationDelegate> 
{ 
    AudioUnit outputUnit; 
    double renderPhase; 
} 
@end 

@implementation SWAppDelegate 

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification 
{ 
// First, we need to establish which Audio Unit we want. 

// We start with its description, which is: 
    AudioComponentDescription outputUnitDescription = { 
     .componentType   = kAudioUnitType_Output, 
     .componentSubType  = kAudioUnitSubType_DefaultOutput, 
     .componentManufacturer = kAudioUnitManufacturer_Apple 
    }; 

// Next, we get the first (and only) component corresponding to that description 
    AudioComponent outputComponent = AudioComponentFindNext(NULL, &outputUnitDescription); 

// Now we can create an instance of that component, which will create an 
// instance of the Audio Unit we're looking for (the default output) 
    AudioComponentInstanceNew(outputComponent, &outputUnit); 
    AudioUnitInitialize(outputUnit); 

// Next we'll tell the output unit what format our generated audio will 
// be in. Generally speaking, you'll want to stick to sane formats, since 
// the output unit won't accept every single possible stream format. 
// Here, we're specifying floating point samples with a sample rate of 
// 44100 Hz in mono (i.e. 1 channel) 
    AudioStreamBasicDescription ASBD = { 
     .mSampleRate  = 44100, 
     .mFormatID   = kAudioFormatLinearPCM, 
     .mFormatFlags  = kAudioFormatFlagsNativeFloatPacked, 
     .mChannelsPerFrame = 1, 
     .mFramesPerPacket = 1, 
     .mBitsPerChannel = sizeof(Float32) * 8, 
     .mBytesPerPacket = sizeof(Float32), 
     .mBytesPerFrame = sizeof(Float32) 
    }; 

    AudioUnitSetProperty(outputUnit, 
         kAudioUnitProperty_StreamFormat, 
         kAudioUnitScope_Input, 
         0, 
         &ASBD, 
         sizeof(ASBD)); 

// Next step is to tell our output unit which function we'd like it 
// to call to get audio samples. We'll also pass in a context pointer, 
// which can be a pointer to anything you need to maintain state between 
// render callbacks. We only need to point to a double which represents 
// the current phase of the sine wave we're creating. 
    AURenderCallbackStruct callbackInfo = { 
     .inputProc  = SineWaveRenderCallback, 
     .inputProcRefCon = &renderPhase 
    }; 

    AudioUnitSetProperty(outputUnit, 
         kAudioUnitProperty_SetRenderCallback, 
         kAudioUnitScope_Global, 
         0, 
         &callbackInfo, 
         sizeof(callbackInfo)); 

// Here we're telling the output unit to start requesting audio samples 
// from our render callback. This is the line of code that starts actually 
// sending audio to your speakers. 
    AudioOutputUnitStart(outputUnit); 
} 

// This is our render callback. It will be called very frequently for short 
// buffers of audio (512 samples per call on my machine). 
OSStatus SineWaveRenderCallback(void * inRefCon, 
           AudioUnitRenderActionFlags * ioActionFlags, 
           const AudioTimeStamp * inTimeStamp, 
           UInt32 inBusNumber, 
           UInt32 inNumberFrames, 
           AudioBufferList * ioData) 
{ 
    // inRefCon is the context pointer we passed in earlier when setting the render callback 
    double currentPhase = *((double *)inRefCon); 
    // ioData is where we're supposed to put the audio samples we've created 
    Float32 * outputBuffer = (Float32 *)ioData->mBuffers[0].mData; 
    const double frequency = 440.; 
    const double phaseStep = (frequency/44100.) * (M_PI * 2.); 

    for(int i = 0; i < inNumberFrames; i++) { 
     outputBuffer[i] = sin(currentPhase); 
     currentPhase += phaseStep; 
    } 

    // If we were doing stereo (or more), this would copy our sine wave samples 
    // to all of the remaining channels 
    for(int i = 1; i < ioData->mNumberBuffers; i++) { 
     memcpy(ioData->mBuffers[i].mData, outputBuffer, ioData->mBuffers[i].mDataByteSize); 
    } 

    // writing the current phase back to inRefCon so we can use it on the next call 
    *((double *)inRefCon) = currentPhase; 
    return noErr; 
} 

- (void)applicationWillTerminate:(NSNotification *)notification 
{ 
    AudioOutputUnitStop(outputUnit); 
    AudioUnitUninitialize(outputUnit); 
    AudioComponentInstanceDispose(outputUnit); 
} 

@end 

You can call AudioOutputUnitStart() and AudioOutputUnitStop() at will to start/stop producing audio. If you want to dynamically alter the frequency, you can pass in a pointer to a struct containing both the renderPhase double and another double representing the frequency you want.
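A hypothetical sketch of that struct approach (the names here are mine, not from the original answer) could look like this:

// A context struct holding both the phase and a frequency the main thread can
// change while audio is playing. It needs to outlive the callback, e.g. as an
// instance variable in the app delegate.
typedef struct {
    double phase;
    double frequency;   // written by the main thread, read by the render callback
} SineWaveState;

// When registering the callback, pass the struct instead of &renderPhase
// (assuming SineWaveState sineState; is an instance variable):
//     sineState = (SineWaveState){ .phase = 0., .frequency = 440. };
//     AURenderCallbackStruct callbackInfo = {
//         .inputProc       = SineWaveRenderCallback,
//         .inputProcRefCon = &sineState
//     };
//
// And in the callback, derive the phase step from the current frequency:
//     SineWaveState *state = (SineWaveState *)inRefCon;
//     const double phaseStep = (state->frequency / 44100.) * (M_PI * 2.);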

Take care in the render callback. It's called from a realtime thread (not the same thread as your main run loop). Render callbacks are subject to some fairly strict timing requirements, which means there are many things you shouldn't do in your callback, such as the following (a callback-safe way to feed your model's volume in is sketched after this list):

  • Allocate memory
  • Wait on a mutex
  • Read from a file on disk
  • Objective-C messaging (yes, seriously.)
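
Since your original goal was to have playback follow a volume value in your model, here is one callback-safe way to do that (my own sketch, not part of the original answer): keep the volume as a plain scalar in the callback context, written by the main thread and read once per buffer, instead of messaging your Objective-C model from the realtime thread. Use C11 atomics (stdatomic.h) if you want stricter guarantees.

// Extends the SineWaveState sketch above with a volume field; "self.model" is
// a stand-in for however your app exposes its volume.
typedef struct {
    double phase;
    double frequency;
    volatile double volume;   // 0.0 ... 1.0, updated from the main thread
} SineWaveState;

// Main thread, whenever your model's volume changes:
//     sineState.volume = self.model.volume;

// Render callback (reads it once per buffer, no locks, no Objective-C messaging):
//     SineWaveState *state = (SineWaveState *)inRefCon;
//     const double volume = state->volume;
//     for (UInt32 i = 0; i < inNumberFrames; i++) {
//         outputBuffer[i] = volume * sin(state->phase);
//         state->phase += phaseStep;
//     }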

Note that this is not the only way to do it. I've only demonstrated it this way since you've tagged this question core-audio. If you don't need to change the frequency you can just use AVAudioPlayer with a pre-made sound file containing your sine wave.
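For completeness, a minimal AVAudioPlayer sketch (the file name "sine.caf" is a placeholder of mine; any bundled sound file AVAudioPlayer understands will do):

#import <AVFoundation/AVFoundation.h>

// Loop a bundled sine wave file forever and drive its volume from your model.
// Keep a strong reference to the player (e.g. an instance variable) so it
// isn't deallocated mid-playback under ARC.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"sine" withExtension:@"caf"];
NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
player.numberOfLoops = -1;   // loop indefinitely
player.volume = 0.5;         // 0.0 ... 1.0, update this from your model
[player play];
// ... later, when the stop button is tapped:
// [player stop];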

There's also Novocaine, which hides a lot of this verbosity from you. You could also look into the Audio Queue API, which works fairly similarly to the Core Audio sample I wrote but decouples you from the hardware a little more (i.e. it's less strict about how you behave in your render callback).
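If you go the Audio Queue route instead, the overall shape looks roughly like this (my own sketch reusing the same 44.1 kHz mono float format; not part of the original answer):

static double gPhase = 0.;

// The queue calls this whenever it needs another buffer of audio; we fill it
// with sine samples and hand it straight back.
static void FillSineBuffer(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    Float32 *samples = (Float32 *)inBuffer->mAudioData;
    UInt32 numFrames = inBuffer->mAudioDataBytesCapacity / sizeof(Float32);
    const double phaseStep = (440. / 44100.) * (M_PI * 2.);

    for (UInt32 i = 0; i < numFrames; i++) {
        samples[i] = sin(gPhase);
        gPhase += phaseStep;
    }

    inBuffer->mAudioDataByteSize = numFrames * sizeof(Float32);
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

// Setup, using the same ASBD as the Audio Unit example:
//     AudioQueueRef queue;
//     AudioQueueNewOutput(&ASBD, FillSineBuffer, NULL, NULL, NULL, 0, &queue);
//     for (int i = 0; i < 3; i++) {                 // prime a few buffers
//         AudioQueueBufferRef buffer;
//         AudioQueueAllocateBuffer(queue, 4096, &buffer);
//         FillSineBuffer(NULL, queue, buffer);
//     }
//     AudioQueueStart(queue, NULL);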


Thanks so much! I had put "iOS" in the question title, but sorry, I should have added a tag and/or a note in the question body as well. I'll work through this now. – Rogare


@Rogare Good point, I missed that! My goal was really just to demonstrate some of the concepts in Core Audio to get you started. That said: if you start digging in, you'll almost certainly have more questions :p. Good luck! – admsyn


How would you generate the on signal or the off signal? –