I want to remove all special characters (including single lowercase letters) from the input file, except for words that contain an underscore (_) or a hyphen (-) (examples added), working on an unflattened list (a list of lists), with or without regex. Is there any way to do this?
import re
from nltk import word_tokenize
from nltk.corpus import stopwords

# Multi-word phrases to protect: map each phrase to its underscored form.
data = {'fresh air', 'entertainment system', 'ice cream', 'blood pressure', 'body temperature', 'car', 'ac', 'air quality'}
data = {i: i.replace(" ", "_") for i in data}
pattern = re.compile(r"\b(" + "|".join(data) + r")\b")

text_file = ['https://stackoverflow.com/questions', 'www.google.com/the pda', 'Z\'s is (vitamin-d) in (milk) 5 ml "enough", carrying? 321 active automatic body hi+al blood pressure.', 'body temperature [try] to improve air quality level by automatic intake of fresh air.', 'blood pressure monitor', 'I buy more ice cream', 'proper method to add frozen wild blueberries in ice cream']

sw = stopwords.words('english') + ["i", "hi"]

# Replace each protected phrase with its underscored form.
result = [pattern.sub(lambda m: data[m.group()], line) for line in text_file]
# Tokenize and lowercase each line, then drop stop words.
tokens = [[tok.lower() for tok in word_tokenize(line)] for line in result]
filtered_tokens = [[tok for tok in sentence if tok not in sw] for sentence in tokens]
print(filtered_tokens)
The input is the text_file shown in the code. I want the output to contain only words (keeping certain underscored terms such as body_temperature, blood_pressure, air_quality, etc.).
I have already tried this successfully with regexp_tokenize, using a regex pattern and gaps=True. I want to achieve the same with nltk word_tokenize. Any help is appreciated.
Expected output:
[['pda'], ['vitamin-d', 'milk', 'ml', 'enough', 'carrying', 'active', 'automatic', 'body', 'blood_pressure'], ['body_temperature', 'try', 'improve', 'air_quality', 'level', 'automatic', 'intake', 'fresh_air'], ['blood_pressure', 'monitor'], ['buy', 'ice-cream'], ['proper', 'method', 'add', 'frozen', 'wild', 'blueberries', 'ice_cream']]
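One possible token-level filter that produces this kind of output is sketched below. It is not the nltk-based answer the question asks for, just a plain-re illustration of the filtering rule: keep underscore/hyphen compounds and plain words of two or more letters, then drop stop words. The STOPWORDS set is a small assumed stand-in for nltk's English list plus ["i", "hi"]:

```python
import re

# Assumed stand-in for stopwords.words('english') + ["i", "hi"].
STOPWORDS = {"i", "hi", "is", "in", "to", "of", "by", "the", "a", "more"}

def keep_token(tok):
    """Keep underscore/hyphen compounds and plain words of 2+ letters."""
    if re.fullmatch(r"[a-z0-9]+(?:[_-][a-z0-9]+)+", tok):
        return True  # e.g. blood_pressure, vitamin-d
    # Drops digits, punctuation runs and single letters like 'z' or 's'.
    return bool(re.fullmatch(r"[a-z]{2,}", tok))

def clean(sentence):
    # Grab runs of word characters and hyphens, lowercased, then filter.
    raw = re.findall(r"[\w-]+", sentence.lower())
    return [t for t in raw if keep_token(t) and t not in STOPWORDS]

print(clean('body_temperature [try] to improve air_quality level.'))
# ['body_temperature', 'try', 'improve', 'air_quality', 'level']
```

The same keep_token predicate could be applied to the output of word_tokenize instead of re.findall, once the multi-word phrases have been joined with underscores.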