I am trying to create count columns for each non-numeric attribute in the UCI Adult data set, and then drop those non-numeric attributes. I am using CountVectorizer from sklearn.feature_extraction.text, but my program gets stuck with the error "np.nan is an invalid document, expected byte or unicode string."
I just want to understand why I am getting this error. Can anyone help? Thanks.
Here is my code:
import pandas as pd
from sklearn.cross_validation import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
def check(ex):
    try:
        int(ex)
        return False
    except ValueError:
        return True
feature_cols = ['age', 'workclass', 'fnlwgt', 'education', 'education-num', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', 'Target']
data = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data', header=None, names = feature_cols)
feature_cols.remove('Target')
X = data[feature_cols]
y = data['Target']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 1)
columns = X.columns
vect = CountVectorizer()
for each in columns:
    if check(X[each][1]):
        temp = X[each]
        X_dtm = pd.DataFrame(vect.fit_transform(temp).toarray(), columns = vect.get_feature_names())
        X = pd.merge(X, X_dtm, how='outer')
        X = X.drop(each, 1)
print X.columns
The error looks like this:
Traceback (most recent call last):
  File "/home/amey/prog/pd.py", line 41, in <module>
    X_dtm = pd.DataFrame(vect.fit_transform(temp).toarray(), columns = vect.get_feature_names())
  File "/usr/lib/python2.7/dist-packages/sklearn/feature_extraction/text.py", line 817, in fit_transform
    self.fixed_vocabulary_)
  File "/usr/lib/python2.7/dist-packages/sklearn/feature_extraction/text.py", line 752, in _count_vocab
    for feature in analyze(doc):
  File "/usr/lib/python2.7/dist-packages/sklearn/feature_extraction/text.py", line 238, in <lambda>
    tokenize(preprocess(self.decode(doc))), stop_words)
  File "/usr/lib/python2.7/dist-packages/sklearn/feature_extraction/text.py", line 118, in decode
    raise ValueError("np.nan is an invalid document, expected byte or "
ValueError: np.nan is an invalid document, expected byte or unicode string.
[Finished in 3.3s with exit code 1]
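For what it's worth, the error itself is easy to reproduce in isolation: CountVectorizer raises this exact ValueError as soon as one of the documents it is handed is NaN. A minimal sketch (the sample strings here are made up; only the NaN matters):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

vect = CountVectorizer()

# A NaN among the documents triggers the same ValueError as in the traceback.
try:
    vect.fit_transform(["state-gov", np.nan, "private"])
except ValueError as e:
    print(e)
```

So my guess is that the outer merge introduces NaN values into X before a later pass of the loop, and something like `temp = X[each].fillna('')` would avoid the crash, but I still don't understand why the NaN appears in the first place.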