I'm trying to improve the performance of my code. I want to tokenize two columns of a dataframe, which I'm currently doing like this:
submission_df['question1'] = submission_df.apply(lambda row: nltk.word_tokenize(row['question1']), axis=1)
submission_df['question2'] = submission_df.apply(lambda row: nltk.word_tokenize(row['question2']), axis=1)
I thought I could maybe combine them into a single call so that I only iterate over all the rows (2 million) once, so I came up with something like this:
submission_df['question1'], submission_df['question2'] = submission_df.apply(
    lambda row: (nltk.word_tokenize(row['question1']),
                 nltk.word_tokenize(row['question2'])),
    axis=1)
but it didn't work. Maybe there is also some other way to improve this that doesn't use the apply method at all.
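One likely reason the combined version fails is that apply with axis=1 returns a single Series of tuples, which cannot be unpacked into two columns that way. A minimal sketch of expanding the tuples into the two columns, assuming pandas 0.23+ where the result_type='expand' argument is available:

import nltk
import pandas as pd

# Sketch only: let apply build a two-column DataFrame instead of a Series of tuples
# (result_type='expand' requires pandas >= 0.23)
tokenized = submission_df.apply(
    lambda row: (nltk.word_tokenize(row['question1']),
                 nltk.word_tokenize(row['question2'])),
    axis=1, result_type='expand')
submission_df['question1'] = tokenized[0]
submission_df['question2'] = tokenized[1]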
Answer (score: 1)
You can simply use apply on the selected columns after casting them with astype(str) (the cast keeps non-string cells, like the numbers in the example below, from breaking word_tokenize), i.e.
submission_df[['question1','question2']] = submission_df[['question1','question2']].astype(str).apply(
    lambda row: [nltk.word_tokenize(row['question1']), nltk.word_tokenize(row['question2'])],
    axis=1)
Example:
import pandas as pd
import nltk

df = pd.DataFrame({"A": ["Nice to meet you ", "Nice to meet you ", "Nice to meet you ", 8, 9, 10],
                   "B": [7, 6, 7, "Nice to meet you ", "Nice to meet you ", "Nice to meet you "]})
df[['A','B']] = df[['A','B']].astype(str).apply(lambda row: [nltk.word_tokenize(row['A']), nltk.word_tokenize(row['B'])], axis=1)
Output:
                       A                      B
0  [Nice, to, meet, you]                    [7]
1  [Nice, to, meet, you]                    [6]
2  [Nice, to, meet, you]                    [7]
3                    [8]  [Nice, to, meet, you]
4                    [9]  [Nice, to, meet, you]
5                   [10]  [Nice, to, meet, you]
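If you want to avoid the row-wise apply entirely, one commonly faster pattern is to map the tokenizer over each column as a Series, since that skips building a row object for every one of the 2 million rows. A minimal self-contained sketch under that assumption (the small frame here is just a stand-in for submission_df):

import pandas as pd
import nltk

# nltk.download('punkt')  # tokenizer models, if not already installed

# Stand-in frame for submission_df
df = pd.DataFrame({
    "question1": ["Nice to meet you", "How are you", 8],
    "question2": [7, "See you later", "Good morning"],
})

# Column-wise map: one pass per column, no per-row lambda over the whole frame
for col in ("question1", "question2"):
    df[col] = df[col].astype(str).map(nltk.word_tokenize)

print(df)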