Issue rendering offline to AVAudioPCMBuffer
I am trying to use AudioKit's AudioEngine.render() function to synthesize some audio and write it to an AVAudioPCMBuffer. I believe I am using the render function incorrectly, and I haven't been able to find any examples of this. Below is my code. What is the proper usage here? I assume my process is wrong because I am setting this up the same way I would a realtime rendering solution.
import AudioKit
import SoundpipeAudioKit
import AVFAudio

class RenderIssue {
    var engine = AudioEngine()
    var whtns = WhiteNoise()
    var filter: BandPassFilter

    init() {
        whtns.amplitude = 1
        filter = BandPassFilter(whtns)
        filter.bandwidth = 200
        filter.centerFrequency = 1000
        engine.output = filter
        do {
            try engine.start()
        } catch let err {
            print(err)
        }
        whtns.start()
        filter.start()
    }

    // Called from button click
    func startRender() {
        // buffer creation
        var buffer = AVAudioPCMBuffer(
            pcmFormat: engine.avEngine.outputNode.outputFormat(forBus: 0),
            frameCapacity: 48000
        )

        // Try 1
        buffer = engine.render(duration: 1)

        // Try 2
        do {
            try engine.avEngine.renderOffline(1, to: buffer!)
        } catch let err {
            print("Render Error: " + String(describing: err))
        }
    }
}
On both attempts I get the following error:
AVAEInternal.h:76 required condition is false: [AVAudioBuffer.mm:286:-[AVAudioPCMBuffer initWithPCMFormat:frameCapacity:]: (isPCMFormat(fmt))]
2022-02-03 11:42:08.248120-0800 SynthClap[54704:1387670] *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: isPCMFormat(fmt)'
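For context, `AVAudioEngine.renderOffline(_:to:)` only works after the engine has been put into manual rendering mode; starting the engine in realtime mode and then asking it to render offline is what leads to this kind of crash. AudioKit 5 wraps that setup in a pair of test helpers: `startTest(totalDuration:)` enables manual rendering mode on the underlying `AVAudioEngine` and returns an empty buffer sized for the whole duration, and `render(duration:)` then pulls rendered frames into a buffer you append to it. Below is a minimal sketch of that pattern, reusing the same node graph as the question; it assumes AudioKit 5 with SoundpipeAudioKit, and the exact helper names should be checked against the AudioKit version in use.

```swift
import AudioKit
import SoundpipeAudioKit
import AVFAudio

// Same graph as in the question: white noise through a band-pass filter.
let engine = AudioEngine()
let noise = WhiteNoise()
noise.amplitude = 1
let filter = BandPassFilter(noise)
filter.bandwidth = 200
filter.centerFrequency = 1000
engine.output = filter

noise.start()
filter.start()

// Note: do NOT call engine.start() first. startTest(totalDuration:)
// configures manual (offline) rendering mode itself and starts the
// engine, which is the mode renderOffline requires.
let audio = engine.startTest(totalDuration: 1.0)

// Render one second of audio offline and append it to the buffer.
audio.append(engine.render(duration: 1.0))

// `audio` is now an AVAudioPCMBuffer holding one second of filtered noise.
```

If you need to drive `avEngine.renderOffline` yourself instead, the equivalent manual setup is to call `enableManualRenderingMode(.offline, format:maximumFrameCount:)` on the `AVAudioEngine` before `start()`, and to allocate the destination buffer with `manualRenderingFormat` rather than the output node's realtime format.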
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
