ValueError: empty vocabulary

Asked: 2018-11-25 22:05:52

Tags: python python-3.x scikit-learn

I'm new to Python and am trying to build a text classification program as part of my schoolwork.

Using the code below, along with various (unedited) libraries including NumPy, scikit-learn and others, I keep running into the same error:

Traceback (most recent call last):
  File "C:/Users/esg1/Python/Learning Python/stackabuse.com example/MediaBiasDetectionClassification.py", line 49, in <module>
    X = vectorizer.fit_transform(documents).toarray()
  File "C:\Users\esg1\Python\lib\site-packages\sklearn\feature_extraction\text.py", line 1010, in fit_transform
    vocabulary, X = self._count_vocab(raw_documents, self.fixed_vocabulary_)
  File "C:\Users\esg1\Python\lib\site-packages\sklearn\feature_extraction\text.py", line 941, in _count_vocab
    raise ValueError("empty vocabulary; perhaps the documents only contain stop words")
ValueError: empty vocabulary; perhaps the documents only contain stop words
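
For context, scikit-learn raises this particular error only when no terms at all survive tokenization and stop-word removal, i.e. the vocabulary is empty before min_df/max_df are even applied. A minimal sketch (illustrative, not from the original post) that reproduces it:

from sklearn.feature_extraction.text import CountVectorizer
# every surviving token below is an English stop word, so no vocabulary is built
vectorizer = CountVectorizer(stop_words='english')
vectorizer.fit_transform(["the and of", "a an the"])
# -> ValueError: empty vocabulary; perhaps the documents only contain stop words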

The code I'm using is:

#importing libraries
import numpy as np  
import re  
import nltk  
from sklearn.datasets import load_files    
import pickle  
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

#importing the dataset
mediaBias_data = load_files(r"C:\Users\esg1\Desktop\Course\Year 3\Individual Project\Data Gathering")  
X, y = mediaBias_data.data, mediaBias_data.target  


#text preprocessing
documents = []

# WordNetLemmatizer instance used for lemmatization in the loop below
stemmer = WordNetLemmatizer()

for sen in range(0, len(X)):  
    # Remove all the special characters
    document = re.sub(r'\W', ' ', str(X[sen]))

    # remove all single characters
    document = re.sub(r'\s+[a-zA-Z]\s+', ' ', document)

    # Remove single characters from the start
    document = re.sub(r'^[a-zA-Z]\s+', ' ', document)

    # Substituting multiple spaces with single space
    document = re.sub(r'\s+', ' ', document, flags=re.I)

    # Removing prefixed 'b'
    document = re.sub(r'^b\s+', '', document)

    # Converting to Lowercase
    document = document.lower()

    # Lemmatization
    document = document.split()

    document = [stemmer.lemmatize(word) for word in document]
    document = ' '.join(document)

    documents.append(document)
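
#sanity check (suggestion, not part of the original script): confirm the
#preprocessing left real tokens behind before vectorizing, e.g.
#print(len(documents), documents[:3])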

#converting text to numbers

#Bag of words
from sklearn.feature_extraction.text import CountVectorizer  
vectorizer = CountVectorizer(max_features=1500, min_df=5, max_df=0.7, stop_words=stopwords.words('english'))  
X = vectorizer.fit_transform(documents).toarray()  
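#note (added for clarity, not in the original post): the "empty vocabulary"
#error is raised when no tokens at all survive tokenization and stop-word
#removal; if min_df=5 / max_df=0.7 pruned every term instead, scikit-learn
#would raise a different error ("After pruning, no terms remain")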


#finding Term Frequency Inverse Document Frequency (TFIDF)
    #TF
#TermFrequency = (Number of Occurrences of a word)/(Total words in the document)  
    #IDF
#IDF(word) = Log((Total number of documents)/(Number of documents containing the word))  
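
#worked example (illustrative, not from the original post): a word occurring
#3 times in a 100-word document has TF = 3/100 = 0.03; if 10 of 1000 documents
#contain it, IDF = Log(1000/10) = Log(100), so TFIDF = 0.03 * Log(100)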

    #TFIDF
from sklearn.feature_extraction.text import TfidfTransformer  
tfidfconverter = TfidfTransformer()  
X = tfidfconverter.fit_transform(X).toarray()  

#training and testing sets
from sklearn.model_selection import train_test_split  
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

#training the classification model and predicting
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators=1000, random_state=0)  
classifier.fit(X_train, y_train)  

    #predicting
y_pred = classifier.predict(X_test) 

#evaluating the model
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score

print(confusion_matrix(y_test,y_pred))  
print(classification_report(y_test,y_pred))  
print(accuracy_score(y_test, y_pred)) 

#saving and loading the model
    #save
with open('text_classifier', 'wb') as picklefile:  
    pickle.dump(classifier,picklefile)
    #load
with open('text_classifier', 'rb') as training_model:  
    model = pickle.load(training_model)

#Load the trained model back in and predict on the test set with it,
#expecting the same results as before.
y_pred2 = model.predict(X_test)

print(confusion_matrix(y_test, y_pred2))  
print(classification_report(y_test, y_pred2))  
print(accuracy_score(y_test, y_pred2))  

Any advice on how to get past this error would be greatly appreciated!

0 Answers