I wanted to write a simple classifier on the IRIS dataset to get recall and precision scores, following a YouTube video, but when I test the accuracy it gives me 100%. I have some guesses about what the error is, but I don't know what to do about it. Could you help me extend the code to make it better? And how do I write a recall function for this version of the classifier?
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_graphviz
import graphviz

iris = load_iris()
x = iris.data    # features
y = iris.target  # labels

# fit the model
tree_clf = DecisionTreeClassifier()
model = tree_clf.fit(x, y)

# visualize the learnt tree
dot_data = export_graphviz(tree_clf, out_file=None,
                           feature_names=iris.feature_names,
                           class_names=iris.target_names,
                           filled=True, rounded=True,
                           special_characters=True)
graph = graphviz.Source(dot_data)
graph.render("iris")

accuracy = tree_clf.score(x, y)
print(accuracy)
Answer 0 (score: 1)
To check the results, you can use sklearn.metrics:
from sklearn.metrics import classification_report
print(classification_report(y, model.predict(x)))
precision recall f1-score support
0 1.00 1.00 1.00 50
1 1.00 1.00 1.00 50
2 1.00 1.00 1.00 50
accuracy 1.00 150
macro avg 1.00 1.00 1.00 150
weighted avg 1.00 1.00 1.00 150
If the results look suspicious, inspect the predictions visually:
print(model.predict(x))
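The original question also asked how to write a recall function by hand. Recall for a class is the fraction of that class's true samples the model actually found: TP / (TP + FN). A minimal sketch (`recall_per_class` is a name chosen here for illustration), evaluated on a held-out split so the numbers are meaningful:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def recall_per_class(y_true, y_pred, label):
    # recall = true positives / (true positives + false negatives)
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == label) & (y_pred == label))
    fn = np.sum((y_true == label) & (y_pred != label))
    return tp / (tp + fn)

x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(x_train, y_train)
y_pred = clf.predict(x_test)

for label in (0, 1, 2):
    print('class', label, 'recall:', recall_per_class(y_test, y_pred, label))
```

The per-class values should match the `recall` column of `classification_report` for the same predictions.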
Answer 1 (score: 1)
You have made a fundamental machine-learning mistake: evaluating the model on the same data it was trained on. Instead, you need to split your data into two sets, training and test. Train the model on the training data and evaluate it on the test data. See https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
Try something like this:
from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(x, y)
model = tree_clf.fit(x_train, y_train)
accuracy = tree_clf.score(x_test, y_test)
To see why this is a problem, consider the extreme case of a "cheating" model that simply memorizes the input data and outputs what it memorized. Without having learned anything, it would get 100% accuracy with your code.
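That "cheating" model is easy to demonstrate: a 1-nearest-neighbour classifier is essentially a lookup table of its training data, so scoring it on the data it memorized is nearly always perfect, while the held-out score tells the honest story. A quick sketch:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

# 1-NN memorizes the training set: each training point's nearest
# neighbour is itself, so the training score is meaningless.
memorizer = KNeighborsClassifier(n_neighbors=1).fit(x_train, y_train)
print('train score:', memorizer.score(x_train, y_train))  # (near-)perfect
print('test score: ', memorizer.score(x_test, y_test))    # the honest estimate
```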
Answer 2 (score: 0)
So I implemented the suggested changes, plus some code of my own for my 150 data points (120 training and 30 test). My question is: is my use of classification_report correct? Thanks.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

def accuracy(y_true, y_predict):
    count = 0
    for i in range(len(y_true)):
        if y_true[i] == y_predict[i]:
            count = count + 1
    return count * 100.0 / len(y_true)

# reading training data
train_data = pd.read_csv("iris_train_data.csv", header=0)
x_train = train_data.values[:, 0:4]
y_train = train_data.values[:, 4]

# training the classifier
clf = DecisionTreeClassifier(criterion='entropy')
clf.fit(x_train, y_train)
print('Depth of learnt tree is', clf.tree_.max_depth)
print('Number of leaf nodes in learnt tree is', clf.get_n_leaves(), '\n')

# reading test data
test_data = pd.read_csv("iris_test_data.csv", header=0)
x_test = test_data.values[:, 0:4]
y_test = test_data.values[:, 4]

# training accuracy and test accuracy without pruning
print('Training accuracy of classifier is', accuracy(y_train, clf.predict(x_train)))
print('Test accuracy using classifier is', accuracy(y_test, clf.predict(x_test)), '\n')
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

def accuracy(y_true, y_predict):
    count = 0
    for i in range(len(y_true)):
        if y_true[i] == y_predict[i]:
            count = count + 1
    return count * 100.0 / len(y_true)

def pruning_by_max_leaf_nodes(t):
    for i in range(1, t - 1):
        clfnxt1 = DecisionTreeClassifier(criterion='entropy', max_leaf_nodes=t - i)
        clfnxt1.fit(x_train, y_train)
        print('Max_leaf_nodes =', t - i,
              'Test Accuracy =', accuracy(y_test, clfnxt1.predict(x_test)))

def pruning_by_max_depth(t):
    for i in range(1, t):
        clfnxt2 = DecisionTreeClassifier(criterion='entropy', max_depth=t - i)
        clfnxt2.fit(x_train, y_train)
        print('Max_depth =', clfnxt2.tree_.max_depth,
              'Test Accuracy =', accuracy(y_test, clfnxt2.predict(x_test)))

# reading training data
train_data = pd.read_csv("iris_train_data.csv", header=0)
x_train = train_data.values[:, 0:4]
y_train = train_data.values[:, 4]

# training the classifier
clf = DecisionTreeClassifier(criterion='entropy')
clf.fit(x_train, y_train)
print('Depth of learnt tree is', clf.tree_.max_depth)
print('Number of leaf nodes in learnt tree is', clf.get_n_leaves(), '\n')

# reading test data
test_data = pd.read_csv("iris_test_data.csv", header=0)
x_test = test_data.values[:, 0:4]
y_test = test_data.values[:, 4]

# pruning by reducing max_depth
print('Pruning case 1: by reducing the max_depth of the tree')
pruning_by_max_depth(clf.tree_.max_depth)
print('')

# pruning by reducing max_leaf_nodes
print('Pruning case 2: by reducing the max_leaf_nodes of the tree')
pruning_by_max_leaf_nodes(clf.get_n_leaves())

print(classification_report(y_test, clf.predict(x_test)))
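One quick way to sanity-check the hand-rolled accuracy function (and, by extension, the classification_report usage) is to compare it against sklearn.metrics.accuracy_score on the same predictions. A sketch using load_iris in place of the CSV files, which are not available here (split sizes chosen to mirror the 120/30 setup):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def accuracy(y_true, y_predict):
    count = 0
    for i in range(len(y_true)):
        if y_true[i] == y_predict[i]:
            count = count + 1
    return count * 100.0 / len(y_true)

x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=30,
                                                    random_state=0)
clf = DecisionTreeClassifier(criterion='entropy', random_state=0)
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)

# both numbers should agree (the custom function reports a percentage)
print(accuracy(y_test, y_pred))
print(accuracy_score(y_test, y_pred) * 100)
```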