I have some code that is supposed to preprocess a list of text documents: given a list of text documents, it returns a list in which each document has been preprocessed. But for some reason it fails to strip out punctuation.
import string

import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

nltk.download("stopwords")
nltk.download("punkt")
nltk.download("wordnet")
def preprocess(docs):
    """
    Given a list of documents, return each document as a string of tokens,
    stripping out punctuation.
    """
    clean_docs = [clean_text(i) for i in docs]
    tokenized_docs = [tokenize(i) for i in clean_docs]
    return tokenized_docs
def tokenize(text):
    """
    Tokenizes text -- returning the tokens as a string.
    """
    stop_words = stopwords.words("english")
    nltk_tokenizer = nltk.WordPunctTokenizer().tokenize
    tokens = nltk_tokenizer(text)
    result = " ".join([i for i in tokens if i not in stop_words])
    return result
def clean_text(text):
    """
    Cleans text by lowercasing it
    and stripping out punctuation.
    """
    new_text = make_lowercase(text)
    new_text = remove_punct(new_text)
    return new_text
def make_lowercase(text):
    new_text = text.lower()
    return new_text
def remove_punct(text):
    words = text.split()
    new_text = " ".join(word for word in words if word not in string.punctuation)
    return new_text
# Get a list of titles
s1 = "[UPDATE] I am tired"
s2 = "I am cold."
clean_docs = preprocess([s1, s2])
print(clean_docs)
This prints:

['[ update ] tired', 'cold .']

In other words, it is not removing punctuation: "[", "]", and "." all show up in the final output.
Answer 0 (score: 0)
You are searching for whole words inside the punctuation string. Obviously [UPDATE]
is not a punctuation character.

Instead, search for (and replace) the punctuation characters within the text:
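A quick check illustrates the point: Python's `in` operator on a string tests for substring membership, so the original code compares each whole word against `string.punctuation` rather than checking the word's individual characters. This small sketch demonstrates the difference:

```python
import string

# The whole word is not a substring of string.punctuation, so the
# original filter keeps it -- brackets and all.
print('[UPDATE]' in string.punctuation)  # False -> word survives

# The individual bracket character, by contrast, is punctuation.
print('[' in string.punctuation)         # True
```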
import string

def remove_punctuation(text: str) -> str:
    for p in string.punctuation:
        text = text.replace(p, '')
    return text

if __name__ == '__main__':
    text = '[UPDATE] I am tired'
    print(remove_punctuation(text))
    # output:
    # UPDATE I am tired
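As a side note, the same character-level removal can be done in a single pass with `str.translate`, which avoids looping over every punctuation character. This is only a sketch of how the asker's `remove_punct` could be rewritten, not the original code:

```python
import string

# Build a translation table once: every punctuation character maps to None,
# meaning it is deleted from the string.
_PUNCT_TABLE = str.maketrans('', '', string.punctuation)

def remove_punct(text):
    # Strips punctuation character by character, so a bracketed word
    # like "[UPDATE]" loses its brackets instead of surviving whole.
    return text.translate(_PUNCT_TABLE)

print(remove_punct('[UPDATE] I am tired.'))
# output: UPDATE I am tired
```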