What does this error mean in a Naive Bayes classifier?

Date: 2015-06-05 05:56:40

Tags: python

I am new to Python. I have training data consisting of text: a collection of articles, one article per line, 2000 lines in total. The labels for the training data are in a separate file, where label i corresponds to article i in the training data. I read the whole file, then stemmed the training data and removed the stopwords. I am using Naive Bayes as the classifier, but an error occurred that I don't know how to solve. I would appreciate your help.

My code is:

    import nltk
    from nltk import stem
    from nltk.corpus import stopwords
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.feature_extraction.text import TfidfTransformer
    from sklearn.feature_extraction.text import CountVectorizer

    stop = stopwords.words('english')
    stemmer = stem.PorterStemmer()
    count_vect = CountVectorizer()
    tfidf_transformer = TfidfTransformer()

    list_of_articles = []
    list_of_articles_test = []

    Label_file = open('D:\\2nd semester\\NLP1\\exercise\\data\\labels_train.txt', 'r')
    data_labels = [line.split(',') for line in Label_file.readlines()]
    with open('D:\\2nd semester\\NLP1\\exercise\\data\\data_train.txt', 'r') as traindata:
        for line in traindata:
            words_in_article = " ".join([stemmer.stem(w) for w in line.split() if not w in stop])
            list_of_articles.append(words_in_article)

    X_train_counts = count_vect.fit_transform(list_of_articles)
    X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
    print(X_train_tfidf)
    print(X_train_tfidf.shape)

    Label_file_test = open('D:\\2nd semester\\NLP1\\exercise\\data\\labels_valid.txt', 'r')
    data_labels_test = [line.split(',') for line in Label_file_test.readlines()]
    with open('D:\\2nd semester\\NLP1\\exercise\\data\\data_valid.txt', 'r') as testdata:
        for test in testdata:
            words_in_article_test = " ".join([stemmer.stem(w) for w in test.split() if not w in stop])
            list_of_articles_test.append(words_in_article_test)

    X_test_counts = count_vect.fit_transform(list_of_articles_test)
    X_test_tfidf = tfidf_transformer.fit_transform(X_test_counts)
    print(X_test_tfidf)
    print(X_test_tfidf.shape)

    clf = MultinomialNB().fit(X_train_tfidf, data_labels)
    predicted = clf.predict(X_test_tfidf)
    for doc, category in zip(list_of_articles_test, predicted):
        print('%r => %s' % (doc, data_labels))

The error:

    Warning (from warnings module):
      File "C:\Python34\lib\site-packages\sklearn\utils\validation.py", line 449
        y = column_or_1d(y, warn=True)
    DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().

    Traceback (most recent call last):
      File "C:/Users/Maryam/Desktop/exercise/tfidf2.py", line 53, in <module>
        predicted = clf.predict(X_test_tfidf)
      File "C:\Python34\lib\site-packages\sklearn\naive_bayes.py", line 64, in predict
        jll = self._joint_log_likelihood(X)
      File "C:\Python34\lib\site-packages\sklearn\naive_bayes.py", line 615, in _joint_log_likelihood
        return (safe_sparse_dot(X, self.feature_log_prob_.T)
      File "C:\Python34\lib\site-packages\sklearn\utils\extmath.py", line 178, in safe_sparse_dot
        ret = a * b
      File "C:\Python34\lib\site-packages\scipy\sparse\base.py", line 345, in __mul__
        raise ValueError('dimension mismatch')
    ValueError: dimension mismatch

1 Answer:

Answer 0 (score: 0)

X_train_counts is a sparse count matrix that gives you the frequency of every word (the columns of the matrix) across your documents (the rows of the matrix). In other words, an entry like (20, 5252) 1 tells you that word 5252 (the word with that index in the vocabulary) occurs once in document 20.
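As a quick illustration, here is a minimal sketch with a toy two-document corpus (my own example, not data from the question) showing how CountVectorizer produces entries in exactly this (row, column) count format:

from sklearn.feature_extraction.text import CountVectorizer

# Two toy "articles"; each element of the list is one document.
docs = ["the cat sat on the mat", "the dog sat"]

vect = CountVectorizer()
counts = vect.fit_transform(docs)  # sparse matrix of shape (n_docs, n_vocab)

print(counts.shape)      # (2, 6): 2 documents, 6 vocabulary words
print(counts)            # entries like "(0, 5)  2": word 5 occurs twice in document 0
print(vect.vocabulary_)  # maps each word to its column index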

However, although you only have 2000 articles, your count matrix has many more rows. This is because you are passing the data incorrectly. Your word_list is just a flat list of words, whereas it should be a list of articles, each article being a single string of (stemmed) words. You should do something like:

list_of_articles = []
for line in traindata:
    words_in_article = " ".join([stemmer.stem(w) for w in line.split() if not w in stop])
    list_of_articles.append(words_in_article) 

X_train_counts = count_vect.fit_transform(list_of_articles)

As you can see, I also adjusted your check against the stop list. Since your stop list presumably contains words rather than stems, you should check whether the words themselves are in the stop list, not their stems.
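Beyond that, a minimal sketch of the full pipeline may help with the remaining traceback messages. Note that the two fixes below are my assumptions, not part of the answer above: reusing the fitted vectorizer with transform() on the validation data (re-fitting it yields a different vocabulary size, which is what makes predict() raise "ValueError: dimension mismatch"), and flattening the labels to a 1-d array (passing a list of lists is what triggers the DataConversionWarning):

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB

# Toy stand-ins for the question's files (hypothetical placeholder data).
train_articles = ["stemmed words of article one", "stemmed words of article two"]
test_articles = ["stemmed words of anoth articl"]
train_labels = [["sports\n"], ["politics\n"]]  # list of lists, as line.split(',') yields

count_vect = CountVectorizer()
tfidf_transformer = TfidfTransformer()

# Fit the vocabulary and idf weights on the training data only.
X_train = tfidf_transformer.fit_transform(count_vect.fit_transform(train_articles))

# Assumption: call transform(), not fit_transform(), on the validation data so the
# test matrix gets the same number of columns as the training matrix.
X_test = tfidf_transformer.transform(count_vect.transform(test_articles))

# Assumption: flatten the labels to a 1-d array of strings; a column-like list of
# lists is what raises the DataConversionWarning.
y_train = np.array([label[0].strip() for label in train_labels])

clf = MultinomialNB().fit(X_train, y_train)
print(clf.predict(X_test))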