Can't build a decision tree in R with the 'party' package. How do I do it?

Asked: 2014-04-22 20:36:01

标签: r tree data-mining decision-tree

I am trying to build a decision tree in R using the "party" package.

I am following the approach described at http://www.rdatamining.com/examples/decision-tree

That page shows a decision tree built with the "party" package.

My dataset is similar to the iris dataset used in that example. Here is a link to a copy of my dataset: https://drive.google.com/file/d/0B6cqWmwsEk20TXQyMnVlbGppcTQ/edit?usp=sharing

Here is the code I tried. I load the data with the read.csv command and assign it to the variable dat3.
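(The loading step itself isn't shown in the transcript below; a minimal sketch, assuming the CSV was downloaded locally under the hypothetical name accident_data.csv:)

# load the downloaded CSV; stringsAsFactors = FALSE keeps Accident_Severity
# as character, matching the str() output below
dat3 <- read.csv("accident_data.csv", stringsAsFactors = FALSE)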

library(party)
> str(dat3)
'data.frame':   1000 obs. of  4 variables:
 $ Road_Type              : num  2 3 3 1 1 1 3 3 1 3 ...
 $ Light_Conditions       : num  2 3 3 3 3 3 3 3 3 3 ...
 $ Road_Surface_Conditions: num  1 2 2 2 2 2 2 2 2 2 ...
 $ Accident_Severity      : chr  "three" "three" "three" "three" ...
> dat3$Accident_Severity<-as.factor(dat3$Accident_Severity)
> str(dat3)
'data.frame':   1000 obs. of  4 variables:
 $ Road_Type              : num  2 3 3 1 1 1 3 3 1 3 ...
 $ Light_Conditions       : num  2 3 3 3 3 3 3 3 3 3 ...
 $ Road_Surface_Conditions: num  1 2 2 2 2 2 2 2 2 2 ...
 $ Accident_Severity      : Factor w/ 3 levels "one","three",..: 2 2 2 2 3 2 2 2 3 2 ...
> mytree<- ctree(Accident_Severity ~ Road_Type + Light_Conditions + Road_Surface_Conditions, data=dat3)
> print(mytree)

     Conditional inference tree with 1 terminal nodes

Response:  Accident_Severity 
Inputs:  Road_Type, Light_Conditions, Road_Surface_Conditions 
Number of observations:  1000 

1)*  weights = 1000 
> 

As you can see, the tree that gets built has no splits (just a single terminal node), and plotting it produces nothing useful either, because no tree structure was constructed. I'm not sure what I'm doing wrong here.

1 Answer:

Answer 0 (score: 1)

I don't think there is enough information in the data to do anything at the 0.95 significance criterion (the ctree default). Look at the tabular breakdown:

> with( dat3, table(Accident_Severity, Light_Conditions, Road_Type))
, , Road_Type = 1

                 Light_Conditions
Accident_Severity   1   2   3
            one     0   2   4
            three   2 157 158
            two     0  14  35

, , Road_Type = 2

                 Light_Conditions
Accident_Severity   1   2   3
            one     0   0   0
            three   1  17  11
            two     0   0   0

, , Road_Type = 3

                 Light_Conditions
Accident_Severity   1   2   3
            one     0   2   2
            three   3 269 251
            two     0  38  34

So I don't see any obvious splits there, and the function doesn't find any that meet the criterion either. If you lower the minimum criterion, you do get splits:

 mytree<- ctree(Accident_Severity ~ Road_Type + Light_Conditions + Road_Surface_Conditions, 
                  data=dat3, control=ctree_control(  mincriterion =0.50) )
 print(mytree)
#----------------------
     Conditional inference tree with 4 terminal nodes

Response:  Accident_Severity 
Inputs:  Road_Type, Light_Conditions, Road_Surface_Conditions 
Number of observations:  1000 

1) Light_Conditions <= 2; criterion = 0.653, statistic = 4.043
  2) Road_Surface_Conditions <= 1; criterion = 0.9, statistic = 6.742
    3)*  weights = 193 
  2) Road_Surface_Conditions > 1
    4)*  weights = 312 
1) Light_Conditions > 2
  5) Road_Type <= 1; criterion = 0.792, statistic = 5.187
    6)*  weights = 197 
  5) Road_Type > 1
    7)*  weights = 298 

plot(mytree)

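If the extended node panels are hard to read, the plot method in party also accepts a type argument; a compact rendering is an optional tweak, not part of the original answer:

# simple rendering: only split variables and node sizes, no terminal panels
plot(mytree, type = "simple")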

If you wrap factor() around the variable names, they are handled as non-ordinal (nominal) variables:

 mytree2 <- ctree(Accident_Severity ~ factor(Road_Type) + factor(Light_Conditions) + factor(Road_Surface_Conditions), 
                   data=dat3, control=ctree_control(  mincriterion =0.50) )
  print(mytree2)
#------------------------
     Conditional inference tree with 2 terminal nodes

Response:  Accident_Severity 
Inputs:  factor(Road_Type), factor(Light_Conditions), factor(Road_Surface_Conditions) 
Number of observations:  1000 

1) factor(Road_Type) == {1, 3}; criterion = 0.635, statistic = 6.913
  2)*  weights = 971 
1) factor(Road_Type) == {2}
  3)*  weights = 29 

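An equivalent way to get the same non-ordinal treatment, sketched here as an alternative (the object name mytree3 is hypothetical, not from the answer), is to convert the predictor columns to factors in the data frame before fitting, which keeps the formula short:

# convert predictors to unordered factors up front (alternative sketch)
dat3$Road_Type <- factor(dat3$Road_Type)
dat3$Light_Conditions <- factor(dat3$Light_Conditions)
dat3$Road_Surface_Conditions <- factor(dat3$Road_Surface_Conditions)
mytree3 <- ctree(Accident_Severity ~ ., data = dat3,
                 control = ctree_control(mincriterion = 0.50))
plot(mytree3)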