ValueError: too many values to unpack - Python 2.7

Time: 2018-01-27 17:16:38

Tags: python nltk

from textblob.classifiers import NaiveBayesClassifier
from textblob import TextBlob
train = []
infile = open('bt2.txt','r')
for line in infile:
   train.append(line.strip().split(','))
infile.close()
cl = NaiveBayesClassifier(train)
blob = TextBlob('Explain the advantages', classifier=cl)
print(blob.classify())

This is my source code. bt2.txt contains nearly 200 lines of comma-separated strings and labels. I get the following error:

Traceback (most recent call last):
  File "<ipython-input-21-72fecccf89d9>", line 1, in <module>
    runfile('C:/Users/xxx/bt.py', wdir='C:/Users/xxx')
  File "C:\Users\xxx\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
    execfile(filename, namespace)
  File "C:\Users\xxx\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 87, in execfile
    exec(compile(scripttext, filename, 'exec'), glob, loc)
  File "C:/Users/xxx/bt.py", line 12, in <module>
    cl = NaiveBayesClassifier(train)
  File "C:\Users\xxx\Anaconda2\lib\site-packages\textblob\classifiers.py", line 205, in __init__
    super(NLTKClassifier, self).__init__(train_set, feature_extractor, format, **kwargs)
  File "C:\Users\xxx\Anaconda2\lib\site-packages\textblob\classifiers.py", line 139, in __init__
    self._word_set = _get_words_from_dataset(self.train_set)  # Keep a hidden set of unique words.
  File "C:\Users\xxx\Anaconda2\lib\site-packages\textblob\classifiers.py", line 63, in _get_words_from_dataset
    return set(all_words)
  File "C:\Users\xxx\Anaconda2\lib\site-packages\textblob\classifiers.py", line 62, in <genexpr>
    all_words = chain.from_iterable(tokenize(words) for words, _ in dataset)
ValueError: too many values to unpack

How can I fix this?

1 Answer:

Answer 0 (score: 1)

The NaiveBayesClassifier class takes a list of two-element tuples; it looks like some of your rows have more than two elements. Since you split each line on every comma, any sentence that itself contains a comma produces a list with three or more fields.

The failing line:

all_words = chain.from_iterable(tokenize(words) for words, _ in dataset)

unpacks each item of the dataset into exactly two elements (words and _).
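The mechanism is easy to reproduce on its own. A minimal sketch (standalone, not using TextBlob) of the same generator pattern, showing that a row with a third field raises exactly this ValueError:

```python
from itertools import chain

def all_words(dataset):
    # Mirrors the failing generator: "for words, _ in dataset"
    # unpacks each row into exactly two names.
    return list(chain.from_iterable(words.split() for words, _ in dataset))

good = [['I love this sandwich.', 'pos']]        # two fields per row: OK
bad = [['I love this', ' really', 'pos']]        # three fields: fails

print(all_words(good))
try:
    all_words(bad)
except ValueError as e:
    print(e)  # too many values to unpack
```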

Check that train[0] has exactly two elements.

From the DOCS:

train = [
     ('I love this sandwich.', 'pos'),
     ('this is an amazing place!', 'pos')]
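One way to build such pairs from the file is to split each line only once, from the right. This is a sketch, assuming (as in the question) that the label is the last comma-separated field of each line; `rsplit(',', 1)` guarantees exactly two elements even when the text itself contains commas:

```python
def load_train(lines):
    """Turn 'text,label' lines into two-element (text, label) tuples."""
    train = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        # Split once, from the right: commas inside the text are preserved.
        text, label = line.rsplit(',', 1)
        train.append((text.strip(), label.strip()))
    return train

# Hypothetical sample rows; in the question these would come from bt2.txt.
sample = ["Explain the advantages, benefits of X,pos",
          "this is an amazing place!,pos"]
print(load_train(sample))
# [('Explain the advantages, benefits of X', 'pos'),
#  ('this is an amazing place!', 'pos')]
```

With this, `train = load_train(open('bt2.txt'))` yields two-element tuples that NaiveBayesClassifier can unpack.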