I am practising decision trees with sklearn, using the play-tennis data set.
play_ is the target column.
Based on my pen-and-paper entropy and information-gain calculations, the root node should be the outlook_ column, because it has the highest information gain.
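For reference, here is a minimal sketch of that hand calculation in code (an assumption: playTennis.csv is the classic 14-row data set with columns outlook, temp, humidity, windy and play; on that data outlook comes out with the highest gain, roughly 0.247):

import numpy as np
import pandas as pd

def entropy(labels):
    # Shannon entropy of a label column
    p = labels.value_counts(normalize=True)
    return float(-(p * np.log2(p)).sum())

def info_gain(data, feature, target='play'):
    # Information gain of splitting `data` on `feature`
    weighted = sum(
        (len(subset) / len(data)) * entropy(subset[target])
        for _, subset in data.groupby(feature)
    )
    return entropy(data[target]) - weighted

df = pd.read_csv('playTennis.csv')
for col in ['outlook', 'temp', 'humidity', 'windy']:
    print(col, round(info_gain(df, col), 3))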
My current code in Python:
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn import tree
from sklearn.preprocessing import LabelEncoder
import pandas as pd
import numpy as np

df = pd.read_csv('playTennis.csv')

# Encode each categorical column as integers
lb = LabelEncoder()
df['outlook_'] = lb.fit_transform(df['outlook'])
df['temp_'] = lb.fit_transform(df['temp'])
df['humidity_'] = lb.fit_transform(df['humidity'])
df['windy_'] = lb.fit_transform(df['windy'])
df['play_'] = lb.fit_transform(df['play'])

# Encoded feature columns (outlook_, temp_, humidity_, windy_) and target (play_)
X = df.iloc[:, 5:9]
Y = df.iloc[:, 9]

X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=100)

clf_entropy = DecisionTreeClassifier(criterion='entropy')
clf_entropy.fit(X_train.astype(int), y_train.astype(int))

y_pred_en = clf_entropy.predict(X_test)
print("Accuracy is: {0}".format(accuracy_score(y_test.astype(int), y_pred_en) * 100))
Answer (score: 0):
My guess is that, because of the way the train/test split fell, splitting on humidity ends up with a better information gain than outlook on the training subset. Did you do your pen-and-paper calculation on the training set or on the whole data set?
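One way to check this guess, reusing the entropy/info_gain helpers sketched in the question and the same random_state=100 split, is to recompute the gains on the training rows only (an assumption: with the classic 14-row data and test_size=0.3, the tree is fit on about 9 rows):

train_rows = df.loc[X_train.index]  # only the rows the tree was actually fit on
for col in ['outlook', 'temp', 'humidity', 'windy']:
    print(col, round(info_gain(train_rows, col), 3))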