Speed up ndarray processing of an RNN model in Google Colab
I'm working on an RNN model to recognize speech keywords. The code below takes at least 5 minutes to run in Google Colab. Is there a way to speed it up? The dataset contains more than 8000 files, and Google Colab keeps warning that I'm running out of RAM, so I'd like this step to take less time and memory.
```python
import glob
import numpy as np
from tensorflow.keras.utils import to_categorical

max_data = 1000
files = {}
# testParts is a sample spectrogram computed earlier; its shape sets the input size
X_audio = np.zeros((max_data * 8,
                    testParts.shape[0], testParts.shape[1], testParts.shape[2]))
Y_word = np.zeros((max_data * 8, 8))
wordToId, idToWord = {}, {}
for i, word in enumerate(words):
    wordToId[word], idToWord[i] = i, word
    files[word] = glob.glob('data/mini_speech_commands/' + word + '/*.wav')
for nb in range(0, max_data):
    for i, word in enumerate(words):
        audio = audioToTensor(files[word][nb])
        X_audio[len(words) * nb + i] = audio
        Y_word[len(words) * nb + i] = \
            np.array(to_categorical([i], num_classes=len(words))[0])
```
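Since no answer is included, here is a rough sketch of two common fixes: preallocate the arrays as `float32` (the `np.zeros` default is `float64`, which doubles RAM use for data that is `float32` anyway) and build all one-hot labels from a single `np.eye` instead of calling `to_categorical` per sample. The loader `audio_to_tensor`, the word list, and `spec_shape` below are assumptions standing in for the question's `audioToTensor`, `words`, and `testParts.shape`:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

words = ['down', 'go', 'left', 'no', 'right', 'stop', 'up', 'yes']
spec_shape = (124, 129, 1)   # assumed spectrogram shape (testParts.shape)
max_data = 4                 # kept tiny here; 1000 in the question

def audio_to_tensor(path):
    # Hypothetical stand-in for the question's audioToTensor():
    # in practice this would decode a .wav file into a spectrogram array.
    return np.zeros(spec_shape, dtype=np.float32)

# float32 halves memory versus np.zeros' float64 default
X_audio = np.zeros((max_data * len(words), *spec_shape), dtype=np.float32)

# One np.eye lookup replaces per-sample to_categorical calls; rows cycle
# through the words in the same order as the question's nested loops.
Y_word = np.tile(np.eye(len(words), dtype=np.float32), (max_data, 1))

paths = [f'data/mini_speech_commands/{w}/{nb}.wav'
         for nb in range(max_data) for w in words]

# Threads overlap the file I/O, which dominates when reading 8000 .wav files
with ThreadPoolExecutor(max_workers=8) as pool:
    for idx, tensor in enumerate(pool.map(audio_to_tensor, paths)):
        X_audio[idx] = tensor
```

If RAM is still tight, writing `X_audio` to disk with `np.lib.format.open_memmap` instead of holding it in memory is another option.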
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow