2015-10-15

Background: I found an Apple WWDC session called "AVAudioEngine in Practice" and am trying to make something along the lines of the last demo shown at 43:35 (https://youtu.be/FlMaxen2eyw?t=2614). I'm using SpriteKit instead of SceneKit, but the principle is the same: I want to generate spheres, throw them around, and when they collide, play a sound that is unique to each sphere.

Problems:

  • I want a unique AudioPlayerNode attached to each SpriteKitNode so that I can play a different sound for each sphere. Right now, if I create two spheres and set a different pitch for each of their AudioPlayerNodes, only the most recently created AudioPlayerNode seems to play, even when the original sphere collides. During the demo, he mentions "I'm tying a player, a dedicated player to every ball". How would I go about doing this?

  • There are audio clicks/artifacts every time a new collision happens. I'm assuming this has to do with the AVAudioPlayerNodeBufferOptions and/or the fact that I'm trying to create, schedule and consume buffers every time contact occurs, which is not the most efficient method. What would be a good workaround for this?

Code: As mentioned in the video, "...and for every ball that is born into this world, a new player node is also created". I have a separate class for the spheres, with a method that returns a SpriteKitNode and creates an AudioPlayerNode every time it is called:

class Sphere {

    var sphere: SKSpriteNode = SKSpriteNode(color: UIColor(), size: CGSize())
    var sphereScale: CGFloat = CGFloat(0.01)
    var spherePlayer = AVAudioPlayerNode()
    let audio = Audio()
    let sphereCollision: UInt32 = 0x1 << 0

    func createSphere(position: CGPoint, pitch: Float) -> SKSpriteNode {

        let texture = SKTexture(imageNamed: "Slice")
        let collisionTexture = SKTexture(imageNamed: "Collision")

        // Define the node

        sphere = SKSpriteNode(texture: texture, size: texture.size())

        sphere.position = position
        sphere.name = "sphere"
        sphere.physicsBody = SKPhysicsBody(texture: collisionTexture, size: sphere.size)
        sphere.physicsBody?.dynamic = true
        sphere.physicsBody?.mass = 0
        sphere.physicsBody?.restitution = 0.5
        sphere.physicsBody?.usesPreciseCollisionDetection = true
        sphere.physicsBody?.categoryBitMask = sphereCollision
        sphere.physicsBody?.contactTestBitMask = sphereCollision
        sphere.zPosition = 1

        // Create AudioPlayerNode

        spherePlayer = audio.createPlayer(pitch)

        return sphere
    }
}
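One way to bind a dedicated player to each ball (problem 1) is to stash the player on the node itself, so the contact handler can later recover the player that belongs to the body that actually collided. A minimal sketch in the same Swift 2 style as the question; the `"player"` key is my own convention, not a SpriteKit one:

```swift
// Inside createSphere(_:pitch:), after creating the player:
spherePlayer = audio.createPlayer(pitch)

// Stash the player on the node itself via userData so that
// didBeginContact can look it up later. The "player" key is
// an arbitrary name chosen for this sketch.
sphere.userData = NSMutableDictionary()
sphere.userData?["player"] = spherePlayer
```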

Here is my Audio class, which creates the AVAudioPCMBuffers and AVAudioPlayerNodes:

class Audio {

    let engine: AVAudioEngine = AVAudioEngine()

    func createBuffer(name: String, type: String) -> AVAudioPCMBuffer {

        let audioFilePath = NSBundle.mainBundle().URLForResource(name, withExtension: type)!
        let audioFile = try! AVAudioFile(forReading: audioFilePath)
        let buffer = AVAudioPCMBuffer(PCMFormat: audioFile.processingFormat, frameCapacity: UInt32(audioFile.length))
        try! audioFile.readIntoBuffer(buffer)

        return buffer
    }

    func createPlayer(pitch: Float) -> AVAudioPlayerNode {

        let player = AVAudioPlayerNode()
        let buffer = self.createBuffer("PianoC1", type: "wav")
        let pitcher = AVAudioUnitTimePitch()
        let delay = AVAudioUnitDelay()
        pitcher.pitch = pitch
        delay.delayTime = 0.2
        delay.feedback = 90
        delay.wetDryMix = 0

        engine.attachNode(pitcher)
        engine.attachNode(player)
        engine.attachNode(delay)

        engine.connect(player, to: pitcher, format: buffer.format)
        engine.connect(pitcher, to: delay, format: buffer.format)
        engine.connect(delay, to: engine.mainMixerNode, format: buffer.format)

        engine.prepare()
        try! engine.start()

        return player
    }
}
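The clicks on every collision (problem 2) are likely related to reading a fresh AVAudioPCMBuffer from disk on each contact. A common workaround is to load the buffer once and reuse the same instance for every `scheduleBuffer` call. A sketch against the Audio class above; the `cachedBuffer` property is my own addition, not part of the question's code:

```swift
class Audio {

    let engine: AVAudioEngine = AVAudioEngine()

    // Hypothetical addition: read the file from disk exactly once and
    // hand the same AVAudioPCMBuffer to every scheduleBuffer call,
    // instead of recreating a buffer on each collision.
    lazy var cachedBuffer: AVAudioPCMBuffer = self.createBuffer("PianoC1", type: "wav")

    func createBuffer(name: String, type: String) -> AVAudioPCMBuffer {
        let audioFilePath = NSBundle.mainBundle().URLForResource(name, withExtension: type)!
        let audioFile = try! AVAudioFile(forReading: audioFilePath)
        let buffer = AVAudioPCMBuffer(PCMFormat: audioFile.processingFormat, frameCapacity: UInt32(audioFile.length))
        try! audioFile.readIntoBuffer(buffer)
        return buffer
    }
}
```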

In my GameScene class I then test for collisions, schedule a buffer and play the AudioPlayerNode if contact has occurred:

func didBeginContact(contact: SKPhysicsContact) {

    let firstBody: SKPhysicsBody = contact.bodyA

    if (firstBody.categoryBitMask & sphere.sphereCollision != 0) {

        let buffer1 = audio.createBuffer("PianoC1", type: "wav")
        sphere.spherePlayer.scheduleBuffer(buffer1, atTime: nil, options: AVAudioPlayerNodeBufferOptions.Interrupts, completionHandler: nil)
        sphere.spherePlayer.play()
    }
}
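Both problems meet in this handler: it always plays the single stored `spherePlayer` and re-reads the file per hit. A sketch of a revised handler, assuming each sphere node carries its dedicated player in `userData` under a `"player"` key (my own convention) and that `audio` exposes a preloaded `cachedBuffer` (also an assumption, not in the original code):

```swift
func didBeginContact(contact: SKPhysicsContact) {

    // Look up the player belonging to the body that actually collided,
    // rather than the most recently created one.
    if let node = contact.bodyA.node,
       player = node.userData?["player"] as? AVAudioPlayerNode {

        // Reuse one preloaded buffer instead of re-reading the file per hit.
        player.scheduleBuffer(audio.cachedBuffer, atTime: nil,
            options: AVAudioPlayerNodeBufferOptions.Interrupts, completionHandler: nil)
        player.play()
    }
}
```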

I'm new to Swift and only have basic programming knowledge, so any suggestions/criticism is welcome.

Answer


I've been working with AVAudioEngine in SceneKit myself, trying to do something else, but this should be what you are looking for:

https://developer.apple.com/library/mac/samplecode/AVAEGamingExample/Listings/AVAEGamingExample_AudioEngine_m.html

It explains the process of:

1. Instantiating your own AVAudioEngine subclass
2. Writing methods to load PCMBuffers for each AVAudioPlayer
3. Changing your environment node's parameters to accommodate the large number of pinball objects

EDIT to accommodate the reverb: refactored, tested, and added a few features:

1. You create a subclass of AVAudioEngine, name it AudioLayerEngine. This is to access the AVAudioUnit effects such as distortion, delay, pitch and many of the other audio unit effects.
2. On initialisation, set up some configuration for the audio engine, such as the rendering algorithm, and expose the AVAudioEnvironmentNode so you can play with the 3D positions of your SCNNode or SKNode objects (if you are in 2D but want 3D effects).
3. Create some helper methods to load presets for each AudioUnit effect you want.
4. Create a helper method to create an audio player and add it to whatever node you want, as many times as you want, since the SCNNode accepts an .audioPlayers method that returns [AVAudioPlayer] or [SCNAudioPlayer].
5. Start playing.

I've pasted the entire class for reference so that you are free to structure it as you wish, but keep in mind that if you are coupling this with SceneKit or SpriteKit, you should use this audioEngine to manage all your sounds instead of SceneKit's internal AVAudioEngine. This means that you instantiate this in your gameView during the AwakeFromNib method.

import Foundation
import SceneKit
import AVFoundation

class AudioLayerEngine: AVAudioEngine {
    var engine: AVAudioEngine!
    var environment: AVAudioEnvironmentNode!
    var outputBuffer: AVAudioPCMBuffer!
    var voicePlayer: AVAudioPlayerNode!
    var multiChannelEnabled: Bool!
    // audio effects
    let delay = AVAudioUnitDelay()
    let distortion = AVAudioUnitDistortion()
    let reverb = AVAudioUnitReverb()

    override init() {
        super.init()
        engine = AVAudioEngine()
        environment = AVAudioEnvironmentNode()

        engine.attachNode(self.environment)
        voicePlayer = AVAudioPlayerNode()
        engine.attachNode(voicePlayer)
        voicePlayer.volume = 1.0
        outputBuffer = loadVoice()
        wireEngine()
        startEngine()
        voicePlayer.scheduleBuffer(self.outputBuffer, completionHandler: nil)
        voicePlayer.play()
    }

    func startEngine() {
        do {
            try engine.start()
        } catch {
            print("error loading engine")
        }
    }

    func loadVoice() -> AVAudioPCMBuffer {
        let URL = NSURL(fileURLWithPath: NSBundle.mainBundle().pathForResource("art.scnassets/sounds/interface/test", ofType: "aiff")!)
        do {
            let soundFile = try AVAudioFile(forReading: URL, commonFormat: AVAudioCommonFormat.PCMFormatFloat32, interleaved: false)
            outputBuffer = AVAudioPCMBuffer(PCMFormat: soundFile.processingFormat, frameCapacity: AVAudioFrameCount(soundFile.length))
            do {
                try soundFile.readIntoBuffer(outputBuffer)
            } catch {
                print("something went wrong with reading the sound file into the buffer")
            }
            print("returning buffer")
            return outputBuffer
        } catch {
        }
        return outputBuffer
    }

    func wireEngine() {
        loadDistortionPreset(AVAudioUnitDistortionPreset.MultiCellphoneConcert)
        engine.attachNode(distortion)
        engine.attachNode(delay)
        engine.connect(voicePlayer, to: distortion, format: self.outputBuffer.format)
        engine.connect(distortion, to: delay, format: self.outputBuffer.format)
        engine.connect(delay, to: environment, format: self.outputBuffer.format)
        engine.connect(environment, to: engine.outputNode, format: constructOutputFormatForEnvironment())
    }

    func constructOutputFormatForEnvironment() -> AVAudioFormat {
        let outputChannelCount = self.engine.outputNode.outputFormatForBus(1).channelCount
        let hardwareSampleRate = self.engine.outputNode.outputFormatForBus(1).sampleRate
        let environmentOutputConnectionFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareSampleRate, channels: outputChannelCount)
        multiChannelEnabled = false
        return environmentOutputConnectionFormat
    }

    func loadDistortionPreset(preset: AVAudioUnitDistortionPreset) {
        distortion.loadFactoryPreset(preset)
    }

    func createPlayer(node: SCNNode) {
        let player = AVAudioPlayerNode()
        distortion.loadFactoryPreset(AVAudioUnitDistortionPreset.SpeechCosmicInterference)
        engine.attachNode(player)
        engine.attachNode(distortion)
        engine.connect(player, to: distortion, format: outputBuffer.format)
        engine.connect(distortion, to: environment, format: constructOutputFormatForEnvironment())
        player.renderingAlgorithm = AVAudio3DMixingRenderingAlgorithm.HRTF
        player.reverbBlend = 0.3
    }
}
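A usage sketch for step 5: instantiate the engine once (for example in awakeFromNib, as the answer suggests) and attach a positional player to any node that should emit sound. The class and method names follow the pasted class, but the surrounding view controller is my own assumption:

```swift
import UIKit
import SceneKit

// Hypothetical wiring: AudioLayerEngine and createPlayer come from the
// pasted class above; the view controller itself is an assumption.
class GameViewController: UIViewController {

    var audioEngine: AudioLayerEngine!

    override func awakeFromNib() {
        super.awakeFromNib()
        // One engine manages every sound; skip SceneKit's internal engine.
        audioEngine = AudioLayerEngine()

        // Attach a positional player to any node that should emit sound.
        let ballNode = SCNNode()
        audioEngine.createPlayer(ballNode)
    }
}
```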
While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes. - [From Review](/review/low-quality-posts/11350414) –

@BeauNouvelle I've edited in the full tested code and one extra function – triple7