Keras vs bag of words
Here is a snippet that tokenizes text with Keras (the original used the old `nb_words` argument and the removed `base_filter()` helper; `num_words` and the Tokenizer's default `filters` are the current equivalents):

```python
from tensorflow.keras.preprocessing.text import Tokenizer

# Keep only the 500 most frequent words; the default filters already
# strip punctuation, and lower=True lowercases the text
tk = Tokenizer(num_words=500, lower=True, split=" ")
tk.fit_on_texts(x)            # build the word -> integer-id vocabulary
x = tk.texts_to_sequences(x)  # map each text to a list of word ids, preserving order
```
What is the use of a bag of words when we can assign a unique id to each word using the code above? Please help me understand the importance of a bag-of-words representation versus the process above.
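For context, the same `Tokenizer` can emit both representations, which makes the contrast easy to see side by side. Below is a minimal sketch (TensorFlow 2.x assumed; the toy corpus and the small `num_words` value are illustrative, not from the question): `texts_to_sequences` returns ordered id lists, while `texts_to_matrix` returns fixed-length bag-of-words count vectors that discard word order.

```python
from tensorflow.keras.preprocessing.text import Tokenizer

# Illustrative toy corpus, not from the original question
texts = ["the cat sat on the mat", "the dog sat"]

tk = Tokenizer(num_words=10, lower=True, split=" ")
tk.fit_on_texts(texts)

# Sequences of unique ids: variable length, order and repetition preserved
print(tk.texts_to_sequences(texts))
# e.g. [[1, 3, 2, 4, 1, 5], [1, 6, 2]]  (ids ranked by word frequency)

# Bag of words: one fixed-length count vector per document, order discarded
print(tk.texts_to_matrix(texts, mode="count"))
# shape (2, 10); column i holds the count of word id i in each document
```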
Sources

Source: Stack Overflow, licensed under CC BY-SA 3.0.
