I want to take the list of comments from a dataframe and parse it first into a list of sentences, and then, in a second pass, into words. I need this as input for a word2vec (gensim) model.
I have already used sent_tokenize from nltk for the first pass, but I run into a problem if I then try to use word_tokenize, because the column no longer contains strings, and word_tokenize expects a string or bytes-like object.
import nltk
print(df)
ID Comment
0 Today is a good day.
1 Today I went by the river. The river also flow...
2 The water by the river is blue, it also feels ...
3 Today is the last day of spring; what to do to...
df['sentences']=df['Comment'].dropna().apply(nltk.sent_tokenize)
df['word']=df['sentences'].dropna().apply(nltk.word_tokenize)
After trying to tokenize the sentences into words I get: TypeError: expected string or bytes-like object
Answer 0 (score: 0)
I guess the problem is the extra dropna(); since you have no null values, you can try:
df['word']=df['sentences'].apply(nltk.word_tokenize)
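If the error persists because the 'sentences' column holds lists of sentence strings rather than single strings, here is a minimal sketch of one way to handle that (assuming gensim 4.x for the Word2Vec step and that the nltk punkt data is available); the dataframe below is a hypothetical stand-in for the one in the question:

import nltk
import pandas as pd
from gensim.models import Word2Vec  # gensim 4.x uses vector_size rather than size

nltk.download('punkt')  # tokenizer data needed by sent_tokenize / word_tokenize

# stand-in for the dataframe shown above
df = pd.DataFrame({'Comment': ['Today is a good day.',
                               'Today I went by the river. The river also flows.']})

# first pass: each comment -> list of sentence strings
df['sentences'] = df['Comment'].dropna().apply(nltk.sent_tokenize)

# second pass: tokenize each sentence string individually, so word_tokenize
# always receives a string instead of the whole list
df['word'] = df['sentences'].apply(lambda sents: [nltk.word_tokenize(s) for s in sents])

# Word2Vec expects an iterable of token lists, one list per sentence,
# so flatten one level across all comments
training_sentences = [tokens for comment in df['word'] for tokens in comment]
model = Word2Vec(sentences=training_sentences, vector_size=100, window=5, min_count=1)

The key point is that word_tokenize is applied to each sentence string inside the list, and Word2Vec then receives one token list per sentence.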