My classification model's accuracy is very low. Even with a K-Nearest Neighbors model with n_neighbors=1, the model still makes many mistakes. The logreg model has the highest accuracy, but it simply predicts 0 for every sample. I'm new to ML and trying to figure out what I'm doing wrong. How can I improve the models?
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

# load the CSV files as numpy arrays
dataset = np.loadtxt(raw_data, delimiter=",")
target = np.loadtxt(target_data, delimiter=",")

# separate the data from the target attributes
X = dataset[:, 0:6]
y = target[:]
print(X.shape)
print(y.shape)

knn = KNeighborsClassifier(n_neighbors=1)
print(knn)
knn.fit(X, y)
result = knn.predict(X)
print(metrics.accuracy_score(y, result))

knn = KNeighborsClassifier(n_neighbors=5)
print(knn)
knn.fit(X, y)
result = knn.predict(X)
print(metrics.accuracy_score(y, result))

logreg = LogisticRegression()
print(logreg)
logreg.fit(X, y)
result = logreg.predict(X)
# every prediction is 0
print(metrics.accuracy_score(y, result))
tshelley@tshelley-Ubuntu:~/Dev/Enterprise-Project$ python loadcsv.py
(700, 6)
(700,)
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
metric_params=None, n_jobs=1, n_neighbors=1, p=2,
weights='uniform')
0.674285714286
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
metric_params=None, n_jobs=1, n_neighbors=5, p=2,
weights='uniform')
0.675714285714
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
verbose=0, warm_start=False)
0.72
Answer 0 (score: 0)
One big problem I can see right away is that you are mixing up regression and classification. What are you actually trying to build, a classification model or a regression model? Without sample data it is hard to tell.
Since you are using scikit-learn, try looking at their cheat-sheet to get closer to what you are looking for.
FYI, 67% accuracy for classification without any preprocessing is not bad either.
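Two concrete things worth trying, beyond the points above: score the model on held-out data instead of the training set (predicting on the same data you fit on overstates accuracy, especially for KNN with n_neighbors=1, which can memorize the training set), and scale the features, since KNN is distance-based. Here is a minimal sketch; it uses synthetic data because the original CSVs are not available, so the variable names and the data itself are placeholders:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier

# synthetic stand-in for the real data: 700 samples, 6 features,
# binary target loosely tied to the first feature
rng = np.random.RandomState(0)
X = rng.normal(size=(700, 6))
y = (X[:, 0] + 0.5 * rng.normal(size=700) > 0).astype(int)

# hold out a test set so the accuracy estimate is honest
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# StandardScaler puts all features on a comparable range,
# which matters for distance-based models like KNN
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on unseen data
```

If the held-out accuracy is much lower than the training accuracy, the model is overfitting; if both are low, the features may simply not carry enough signal.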