Python feature extraction: AttributeError: 'list' object has no attribute 'lower'

Date: 2018-12-25 07:24:03

Tags: python scikit-learn nltk feature-extraction

When I run this code:

bow_vect = CountVectorizer(max_df=0.90, min_df=2, max_features=1000, stop_words='english')
bow = bow_vect.fit_transform(combi['tidy_tweet'])

I get this error:

AttributeError                            Traceback (most recent call last)
<ipython-input-65-745529b5930e> in <module>
      1 bow_vect = CountVectorizer(max_df=0.90, min_df=2, max_features=1000, stop_words='english')
----> 2 bow = bow_vect.fit_transform(combi['tidy_tweet'])

c:\users\avinash\appdata\local\programs\python\python37\lib\site-packages\sklearn\feature_extraction\text.py in fit_transform(self, raw_documents, y)
   1010 
   1011         vocabulary, X = self._count_vocab(raw_documents,
-> 1012                                           self.fixed_vocabulary_)
   1013 
   1014         if self.binary:

c:\users\avinash\appdata\local\programs\python\python37\lib\site-packages\sklearn\feature_extraction\text.py in _count_vocab(self, raw_documents, fixed_vocab)
    920         for doc in raw_documents:
    921             feature_counter = {}
--> 922             for feature in analyze(doc):
    923                 try:
    924                     feature_idx = vocabulary[feature]

c:\users\avinash\appdata\local\programs\python\python37\lib\site-packages\sklearn\feature_extraction\text.py in <lambda>(doc)
    306                                                tokenize)
    307             return lambda doc: self._word_ngrams(
--> 308                 tokenize(preprocess(self.decode(doc))), stop_words)
    309 
    310         else:

c:\users\avinash\appdata\local\programs\python\python37\lib\site-packages\sklearn\feature_extraction\text.py in <lambda>(x)
    254 
    255         if self.lowercase:
--> 256             return lambda x: strip_accents(x.lower())
    257         else:
    258             return strip_accents

AttributeError: 'list' object has no attribute 'lower'

1 Answer:

Answer 0 (score: 1):

It's hard to say for certain without knowing the actual type of combi['tidy_tweet'], but fit_transform expects an iterable of strings, and you are passing it a Series.

For fit_transform to work, combi['tidy_tweet'] should effectively be a list of strings. At the moment it appears to be a Series whose elements are lists of strings (token lists), which is why the vectorizer's lowercasing step calls .lower() on a list and fails.

So your best option is to join the tokens in each row (each list) back into a single string, collect those strings, and call fit_transform on the result.
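A minimal sketch of that fix. The DataFrame and its contents here are made-up stand-ins (the column name `tidy_tweet` comes from the question); the key step is joining each token list into one string before vectorizing:

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-in for combi: a Series of token lists, as in the question
combi = pd.DataFrame({'tidy_tweet': [['great', 'day', 'outside'],
                                     ['love', 'this', 'day'],
                                     ['great', 'weather', 'today']]})

# Join each row's token list back into a single string
combi['tidy_tweet'] = combi['tidy_tweet'].apply(lambda tokens: ' '.join(tokens))

# Now fit_transform receives an iterable of strings and works as expected
bow_vect = CountVectorizer(max_df=0.90, min_df=2, max_features=1000,
                           stop_words='english')
bow = bow_vect.fit_transform(combi['tidy_tweet'])
print(bow.shape)
```

With this toy data, only 'day' and 'great' appear in at least two documents (min_df=2), so the resulting matrix has two feature columns.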