Convert AVAudioPCMBuffer into MLMultiArray and get a prediction from a Core ML model
I am trying to feed an AVAudioPCMBuffer into a Core ML model and read the output. The model's input is a MultiArray (Float32, 0 × 64 × 0) and its output is a MultiArray (Float32, 0 × 0 × 36).
```swift
inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
    // Pointer to the first channel's float samples.
    let buff = buffer.floatChannelData?.pointee

    let config = MLModelConfiguration()
    let hafiz = try! asr_hafiz(configuration: config)

    if let buff = buff {
        // Shape and strides copied from the model's declared (0 × 64 × 0) input.
        let mlData = try! MLMultiArray(dataPointer: buff,
                                       shape: [0, 64, 0],
                                       dataType: .float32,
                                       strides: [0, 64, 0])
        let input = asr_hafizInput(audio_signal: mlData)
        let options = MLPredictionOptions()
        let output = try! hafiz.model.prediction(from: input, options: options)
    } else {
        print("buff is nil")
    }
}
```
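Before calling prediction, it can help to check which concrete shapes the compiled model will actually accept, since dimensions that Xcode displays as 0 are typically flexible. A minimal diagnostic sketch, assuming the input is named `audio_signal` as in the generated `asr_hafizInput` class used above:

```swift
import CoreML

// Diagnostic sketch. Assumptions: `asr_hafiz` is the generated model class and
// "audio_signal" is its input name, as in the question's code.
func printAudioSignalConstraint() {
    guard let model = try? asr_hafiz(configuration: MLModelConfiguration()).model else { return }
    if let inputDescription = model.modelDescription.inputDescriptionsByName["audio_signal"],
       let constraint = inputDescription.multiArrayConstraint {
        // The declared shape plus the shape constraint show which concrete
        // shapes (fixed, enumerated, or ranged) the model will accept.
        print("declared shape:", constraint.shape)
        print("shape constraint:", constraint.shapeConstraint)
    }
}
```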
The model accepts the MLMultiArray built from the buffer, but when the prediction method is called I get the following error:
```
2022-05-14 12:18:02.842630+0300 VoiceRecognitionTest[7329:5846863] [espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Invalid state": Null output blobs [Exception from Layer: 1: Conv_0]
2022-05-14 12:18:02.842686+0300 VoiceRecognitionTest[7329:5846863] [coreml] Error computing NN outputs -1
2022-05-14 12:18:02.842722+0300 VoiceRecognitionTest[7329:5846863] [coreml] Failure in -executePlan:error:.
VoiceRecognitionTest/ViewController.swift:252: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.CoreML Code=0 "Error computing NN outputs." UserInfo={NSLocalizedDescription=Error computing NN outputs.}
```
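For comparison, here is a sketch of building the input with concrete, non-zero dimensions, since MLMultiArray needs real extents (and, with the dataPointer initializer, strides describing the actual memory layout). The [1, 64, frameCount] layout, and the assumption that the 64-wide dimension holds per-frame features that would still have to be computed from the PCM samples, are guesses based on the declared (Float32 0 × 64 × 0) shape, not something confirmed by the model:

```swift
import AVFoundation
import CoreML

// Sketch only: allocate an MLMultiArray with concrete (non-zero) dimensions.
// The [1, 64, frameCount] layout is an assumption about how the flexible
// dimensions (shown as 0 in Xcode) should be filled.
func makeInputArray(from buffer: AVAudioPCMBuffer) throws -> MLMultiArray? {
    guard let samples = buffer.floatChannelData?.pointee else { return nil }
    let frameCount = Int(buffer.frameLength)

    let shape: [NSNumber] = [1, 64, NSNumber(value: frameCount)]
    let array = try MLMultiArray(shape: shape, dataType: .float32)

    // Placeholder fill so every element is initialized; a real pipeline would
    // write computed per-frame features here. Element-wise subscripting is
    // illustrative, not efficient.
    for band in 0..<64 {
        for frame in 0..<frameCount {
            let index: [NSNumber] = [0, NSNumber(value: band), NSNumber(value: frame)]
            array[index] = NSNumber(value: samples[frame])
        }
    }
    return array
}
```

Whether prediction then succeeds depends on whether that concrete shape satisfies the model's shape constraint, which is another reason to print the constraint first as in the earlier sketch.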
Source: Stack Overflow (CC BY-SA 3.0)