I am trying to classify emails as spam/ham using NLTK.
These are the steps I followed:
Extract all the tokens
Gather all the features
Extract features from the corpus of all unique words and map them to True/False
from nltk.classify.util import apply_features
from nltk import NaiveBayesClassifier
import pandas as pd
import collections
from sklearn.model_selection import train_test_split
from collections import Counter
data = pd.read_csv('https://raw.githubusercontent.com/venkat1017/Data/master/emails.csv')
"""fetch array of tuples where each tuple is defined by (tokenized_text, label)
"""
processed_tokens=data['text'].apply(lambda x:([x for x in x.split() if x.isalpha()]))
processed_tokens=processed_tokens.apply(lambda x:([x for x in x if len(x)>3]))
processed_tokens = [(i,j) for i,j in zip(processed_tokens,data['spam'])]
"""
dictword return a Set of unique words in complete corpus.
"""
list = zip(*processed_tokens)
dictionary = Counter(word for i, j in processed_tokens for word in i)
dictword = [word for word, count in dictionary.items() if count == 1]
"""maps each input text into feature vector"""
y_dict = ( [ (word, True) for word in dictword] )
feature_vec=dict(y_dict)
"""Training"""
training_set, testing_set = train_test_split(y_dict, train_size=0.7)
classifier = NaiveBayesClassifier.train(training_set)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\nltk\classify\naivebayes.py in train(cls, labeled_featuresets, estimator)
197 for featureset, label in labeled_featuresets:
198 label_freqdist[label] += 1
--> 199 for fname, fval in featureset.items():
200 # Increment freq(fval|label, fname)
201 feature_freqdist[label, fname][fval] += 1
AttributeError: 'str' object has no attribute 'items'
I am facing the above error while trying to train on this corpus of unique words.
Answer 0 (score: 1)
First, I hope you realize that y_dict is just a list of pairs mapping each word (a string) that occurs exactly once in the corpus to the value True. You are passing it to the classifier as the training set, whereas you should be passing tuples of (feature dictionary for each text, corresponding label). Your classifier expects [({'feat1': 'value1', ... }, label_value), ...] as input, but you are passing [ ('word1', True), ... ]. A str has no items attribute, only a dict does. Hence the error.
Second, your data is modeled incorrectly. Your training set should consist of feature dictionaries built from data['text'], each mapped to the corresponding data['spam'] value (since that is your label). Please read section 1.3 here to learn how to perform document classification with nltk's classifiers.