How to fix "TypeError: expected string or bytes-like object"

Date: 2019-01-25 22:58:41

Tags: python error-handling preprocessor tfidfvectorizer

Hi everyone, I have a list of text documents (text_data) that I want to vectorize, but it raises TypeError: expected string or bytes-like object. When I only call preprocess(text_data), without the tfidfconverter, it works. I can't find the problem, can someone help me?

def preprocess(x):
    documents = []
    for sen in range(0, len(x)):

        # Remove all the special characters
        document = re.sub(r'\W', ' ', str(x[sen]))

        # Remove all numbers
        document = re.sub(r'[0-9]', ' ', document)

        # Remove all underscores
        document = re.sub(r'_', ' ', document)

        # remove all single characters
        document = re.sub(r'\s+[a-zA-Z]\s+', ' ', document)

        # Remove single characters from the start
        document = re.sub(r'\^[a-zA-Z]\s+', ' ', document)

        # Substituting multiple spaces with single space
        document = re.sub(r'\s+', ' ', document, flags=re.I)

        # Converting to Lowercase
        document = document.lower()

        # Lemmatization
        document = document.split()

        document = ' '.join([stemmer.stem(word) for word in document])
        documents.append(document)

    x = documents

tfidfconverter = TfidfVectorizer(min_df=10, max_df=0.97, stop_words=text.ENGLISH_STOP_WORDS, preprocessor=preprocess)

Traceback:

Traceback (most recent call last):
  File "C:/Users/Konrad/PycharmProjects/treffen/treffen.py", line 54, in <module>
    tfidf_table = tfidfconverter.fit_transform(text_data).toarray()
  File "C:\Users\Konrad\PycharmProjects\treffen\venv\lib\site-packages\sklearn\feature_extraction\text.py", line 1603, in fit_transform
    X = super(TfidfVectorizer, self).fit_transform(raw_documents)
  File "C:\Users\Konrad\PycharmProjects\treffen\venv\lib\site-packages\sklearn\feature_extraction\text.py", line 1032, in fit_transform
    self.fixed_vocabulary_)
  File "C:\Users\Konrad\PycharmProjects\treffen\venv\lib\site-packages\sklearn\feature_extraction\text.py", line 942, in _count_vocab
    for feature in analyze(doc):
  File "C:\Users\Konrad\PycharmProjects\treffen\venv\lib\site-packages\sklearn\feature_extraction\text.py", line 328, in <lambda>
    tokenize(preprocess(self.decode(doc))), stop_words)
  File "C:\Users\Konrad\PycharmProjects\treffen\venv\lib\site-packages\sklearn\feature_extraction\text.py", line 265, in <lambda>
    return lambda doc: token_pattern.findall(doc)
TypeError: expected string or bytes-like object

Process finished with exit code 1

1 Answer:

Answer 0 (score: 0)

The first problem I see is that the preprocessor is expected to return a string, but your preprocess returns nothing: it only rebinds the local variable x, so the vectorizer gets None back and token_pattern.findall(None) raises the TypeError shown in the traceback. Second, you don't need to rebuild the documents list yourself, because the preprocessor function is called on each individual string in the list of training documents. You could try something like this:

def preprocess(x):
    # x is a single document here: TfidfVectorizer calls the
    # preprocessor once for every string in the corpus.

    # Remove all the special characters
    document = re.sub(r'\W', ' ', str(x))

    # Remove all numbers
    document = re.sub(r'[0-9]', ' ', document)

    # Remove all underscores
    document = re.sub(r'_', ' ', document)

    # remove all single characters
    document = re.sub(r'\s+[a-zA-Z]\s+', ' ', document)

    # Remove single characters from the start
    document = re.sub(r'\^[a-zA-Z]\s+', ' ', document)

    # Substituting multiple spaces with single space
    document = re.sub(r'\s+', ' ', document, flags=re.I)

    # Converting to Lowercase
    document = document.lower()

    # Stemming each word
    document = document.split()
    document = ' '.join([stemmer.stem(word) for word in document]) 

    return document


tfidfconverter = TfidfVectorizer(min_df=10, max_df=0.97, stop_words=text.ENGLISH_STOP_WORDS, preprocessor=preprocess)
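
For a quick end-to-end check, here is a minimal sketch that reuses the preprocess function above. The two sample sentences, the NLTK PorterStemmer, and the relaxed min_df/max_df values are all assumptions made only so the snippet runs on a two-document toy corpus; with min_df=10 such a small corpus would leave an empty vocabulary.

import re

from nltk.stem import PorterStemmer
from sklearn.feature_extraction import text
from sklearn.feature_extraction.text import TfidfVectorizer

# Assumption: the `stemmer` used in the question is something like NLTK's
# PorterStemmer; any object with a .stem(word) method would work here.
stemmer = PorterStemmer()

# Hypothetical two-document corpus, just to exercise the pipeline.
text_data = [
    "The 2 cats were running, running fast!",
    "Running dogs chased 3 cats across the yard.",
]

# min_df/max_df relaxed only because the toy corpus has two documents;
# keep the original min_df=10, max_df=0.97 for a real corpus.
tfidfconverter = TfidfVectorizer(min_df=1, max_df=1.0,
                                 stop_words=text.ENGLISH_STOP_WORDS,
                                 preprocessor=preprocess)
tfidf_table = tfidfconverter.fit_transform(text_data).toarray()
print(tfidf_table.shape)  # (2, number_of_terms_kept)

Because the preprocessor now returns a string for every input document, token_pattern.findall receives real text instead of None and the TypeError goes away.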