Alter specific characters in pandas
Background
I have the following df, which contains tokenized Text and P_Name columns. It is a modification of the df from "including word boundary in string modification to be more specific":
P_Name = [['Steven', 'I', 'Jobs'],
          ['A', 'Ma', 'Mary'],
          ['U', 'Kar', 'Jacob']]
Text = [['Steven', 'I', 'Jobs', 'likes', 'apples', 'I', 'too'],
        ['A', 'i', 'like', 'A', 'lot', 'of', 'sports', 'cares', 'A', 'Ma', 'Mary'],
        ['the', 'U', 'Kar', 'Jacob', 'what', 'about', 'U', 'huh', '?']]
import pandas as pd
df = pd.DataFrame({'Text': Text,
                   'P_ID': [1, 2, 3],
                   'P_Name': P_Name})
df
df
P_ID P_Name Text
0 1 [Steven, I, Jobs] [Steven, I, Jobs, likes, apples, I, too]
1 2 [A, Ma, Mary] [A, i, like, A, lot, of, sports, cares, A, Ma, Mary]
2 3 [U, Kar, Jacob] [the, U, Kar, Jacob, what, about, U, huh, ?]
Goal
1) Use the names in P_Name to block the corresponding tokens in the Text column, replacing each with **block**
2) Produce a new column New_Text
Tried
From including word boundary in string modification to be more specific
I have modified the code and tried the following
df['New_Text'] = [pd.Series(x).replace(dict.fromkeys(y, '**block**')).str.cat(sep=' ')
                  for x, y in zip(df['Text'], df['P_Name'])]
This gives close to what I want, but not quite: some standalone tokens are inappropriately labeled **block**, e.g. the lone 'I' in row 0.
   New_Text
0  [**block**, **block**, **block**, likes, apples, **block**, too]
1  [**block**, i, like, **block**, lot, of, sports, cares, **block**, **block**, **block**]
2  [the, **block**, **block**, **block**, what, about, **block**, huh, ?]
Desired Output
   New_Text
0  [**block**, **block**, **block**, likes, apples, I, too]
1  [A, i, like, A, lot, of, sports, cares, **block**, **block**, **block**]
2  [the, **block**, **block**, **block**, what, about, U, huh, ?]
Question
How do I further modify
df['New_Text'] = [pd.Series(x).replace(dict.fromkeys(y, '**block**')).str.cat(sep=' ')
                  for x, y in zip(df['Text'], df['P_Name'])]
or use new code to achieve my desired output?
Solution 1:
You want to block each ordered occurrence of the full P_Name sequence in the Text tokens. This can be achieved by sliding over the Text tokens & checking each window for equality with the entire P_Name list:
df["New_Text"] = df["Text"].apply(lambda tokens: tokens.copy())  # copy original tokens
for tokens, name in zip(df["New_Text"], df["P_Name"]):
    # compare each window of len(name) tokens against the full name sequence
    for i in range(len(tokens) - len(name) + 1):
        if tokens[i:i + len(name)] == name:
            tokens[i:i + len(name)] = ["**block**"] * len(name)
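For reference, here is the loop run end-to-end on the sample data from the question as a self-contained sketch; it reproduces the desired output, leaving the standalone 'I', 'A', and 'U' tokens untouched:

```python
import pandas as pd

P_Name = [['Steven', 'I', 'Jobs'], ['A', 'Ma', 'Mary'], ['U', 'Kar', 'Jacob']]
Text = [['Steven', 'I', 'Jobs', 'likes', 'apples', 'I', 'too'],
        ['A', 'i', 'like', 'A', 'lot', 'of', 'sports', 'cares', 'A', 'Ma', 'Mary'],
        ['the', 'U', 'Kar', 'Jacob', 'what', 'about', 'U', 'huh', '?']]
df = pd.DataFrame({'Text': Text, 'P_ID': [1, 2, 3], 'P_Name': P_Name})

df["New_Text"] = df["Text"].apply(list)  # copy each token list so Text is untouched
for tokens, name in zip(df["New_Text"], df["P_Name"]):
    # slide a window of len(name) tokens and block only exact sequence matches
    for i in range(len(tokens) - len(name) + 1):
        if tokens[i:i + len(name)] == name:
            tokens[i:i + len(name)] = ["**block**"] * len(name)

print(df["New_Text"][0])
# ['**block**', '**block**', '**block**', 'likes', 'apples', 'I', 'too']
```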
Depending on your use case you might have the untokenised Text (& P_Name) available. If so, substring matching can be done instead, & the tokenisation performed afterwards.
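That substring variant could be sketched with a word-boundary regex, in the spirit of the question this one was modified from. This is a hypothetical example on hand-made strings, assuming whitespace-delimited tokens:

```python
import re

# Assumed untokenised inputs (not from the question's df, which holds token lists)
text = "A i like A lot of sports cares A Ma Mary"
name = "A Ma Mary"

# \b anchors keep the standalone leading "A" from matching; only the full
# multi-word name is replaced, one **block** per word in the name
blocked = re.sub(r"\b" + re.escape(name) + r"\b",
                 " ".join(["**block**"] * len(name.split())),
                 text)
tokens = blocked.split()  # tokenise after the substring replacement
# tokens -> ['A', 'i', 'like', 'A', 'lot', 'of', 'sports', 'cares',
#            '**block**', '**block**', '**block**']
```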
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
