How to use word_tokenize on a data frame

I have recently started using the nltk module for text analysis. I am stuck at one point: I want to use word_tokenize on a dataframe, so as to obtain all the words used in a particular row of the dataframe.

data example:
       text
1.   This is a very good site. I will recommend it to others.
2.   Can you please give me a call at 9983938428. have issues with the listings.
3.   good work! keep it up
4.   not a very helpful site in finding home decor. 

expected output:

1.   'This','is','a','very','good','site','.','I','will','recommend','it','to','others','.'
2.   'Can','you','please','give','me','a','call','at','9983938428','.','have','issues','with','the','listings'
3.   'good','work','!','keep','it','up'
4.   'not','a','very','helpful','site','in','finding','home','decor'

Basically, I want to separate all the words and find the length of each text in the dataframe.

I know word_tokenize can do it for a string, but how do I apply it to the entire dataframe?

Please help!

Thanks in advance...



Solution 1:[1]

pandas.Series.apply is faster than pandas.DataFrame.apply

import time

import nltk
import pandas as pd

df = pd.read_csv("/path/to/file.csv")

# tokenize with Series.apply
start = time.time()
df["unigrams"] = df["verbatim"].apply(nltk.word_tokenize)
print("series.apply", time.time() - start)

# tokenize with DataFrame.apply, row by row
start = time.time()
df["unigrams2"] = df.apply(lambda row: nltk.word_tokenize(row["verbatim"]), axis=1)
print("dataframe.apply", time.time() - start)

On a sample 125 MB CSV file:

series.apply: 144.428858995 seconds

dataframe.apply: 201.884778976 seconds

Edit: You might think that the DataFrame df is larger in size after series.apply(nltk.word_tokenize), which could affect the runtime of the next operation, dataframe.apply(nltk.word_tokenize).

Pandas optimizes under the hood for such a scenario; I got a similar runtime of about 200 seconds when performing dataframe.apply(nltk.word_tokenize) on its own.

Solution 2:[2]

I will show you an example. Suppose you have a data frame named twitter_df in which you have stored sentiment and text. First, extract the text column into a Series as follows:

 tweetText = twitter_df['text']

then to tokenize

 from nltk.tokenize import word_tokenize

 tweetText = tweetText.apply(word_tokenize)
 tweetText.head()

I think this will help you.
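Since the question also asks for the length of each text, a minimal follow-up sketch (assuming the tokenized tweetText Series from above; tweetLengths is just an illustrative name) is to apply len to the token lists:

 # each row of tweetText now holds a list of tokens, so len gives the word count per tweet
 tweetLengths = tweetText.apply(len)
 tweetLengths.head()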

Solution 3:[3]

You may need to add str() to convert pandas' object-dtype values to strings.

Keep in mind a faster way to count words is often to count spaces.

Interestingly, the tokenizer counts periods as tokens. You may want to remove those first, and perhaps numbers as well. With the cleanup line below included, both counting methods give equal counts, at least in this case.

import nltk
import pandas as pd

sentences = pd.Series([ 
    'This is a very good site. I will recommend it to others.',
    'Can you please give me a call at 9983938428. have issues with the listings.',
    'good work! keep it up',
    'not a very helpful site in finding home decor. '
])

# remove anything but letters and spaces, then collapse repeated spaces
sentences = sentences.str.replace('[^A-Za-z ]', '', regex=True).str.replace(' +', ' ', regex=True).str.strip()

splitwords = [ nltk.word_tokenize( str(sentence) ) for sentence in sentences ]
print(splitwords)
    # output: [['This', 'is', 'a', 'very', 'good', 'site', 'I', 'will', 'recommend', 'it', 'to', 'others'], ['Can', 'you', 'please', 'give', 'me', 'a', 'call', 'at', 'have', 'issues', 'with', 'the', 'listings'], ['good', 'work', 'keep', 'it', 'up'], ['not', 'a', 'very', 'helpful', 'site', 'in', 'finding', 'home', 'decor']]

wordcounts = [ len(words) for words in splitwords ]
print(wordcounts)
    # output: [12, 13, 5, 9]

wordcounts2 = [ sentence.count(' ') + 1 for sentence in sentences ]
print(wordcounts2)
    # output: [12, 13, 5, 9]

If you aren't using pandas, you might not need str().
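For completeness, here is a minimal sketch of the same idea without pandas, using plain Python strings (so str() isn't needed); the sentence list simply reuses two of the example texts:

import nltk

plain_sentences = [
    'good work! keep it up',
    'not a very helpful site in finding home decor.',
]

# tokenize each plain string and count the resulting tokens
plain_counts = [len(nltk.word_tokenize(s)) for s in plain_sentences]
print(plain_counts)  # punctuation such as '!' and '.' is counted as a token here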

Solution 4:[4]

Make it faster using pandarallel

  1. Using spaCy

    import spacy
    from pandarallel import pandarallel
    
    pandarallel.initialize(progress_bar=True)    
    nlp = spacy.load("en_core_web_sm")
    
    df['new_col'] = df['text'].parallel_apply(lambda x: nlp(x))
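    # note: nlp(x) returns a spaCy Doc, not a list of strings (see the sketch after this list)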
    
  2. Using NLTK

    import nltk
    from pandarallel import pandarallel
    
    pandarallel.initialize(progress_bar=True)
    
    df['new_col'] = df['text'].parallel_apply(lambda x: nltk.word_tokenize(x))
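
If you want the spaCy variant to produce output comparable to nltk.word_tokenize (a plain list of strings rather than a Doc), one possible sketch, assuming the same df and text column as above, is to keep only each token's text:

    import spacy
    from pandarallel import pandarallel

    pandarallel.initialize(progress_bar=True)
    nlp = spacy.load("en_core_web_sm")

    # keep only the token strings from each spaCy Doc
    df['new_col'] = df['text'].parallel_apply(lambda x: [token.text for token in nlp(x)])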
    

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: (no author listed)
Solution 2: Yasuni Chamodya
Solution 3: (no author listed)
Solution 4: Ramkrishan Sahu