How to use the SOM algorithm for classification prediction

Date: 2017-07-14 08:58:52

Tags: r classification prediction som

I want to see whether the SOM algorithm can be used for classification prediction. I tried the code below, but the classification results are far from correct. For example, on the test dataset I get many more than the 3 values present in the training target variable. How can I create a prediction model that is consistent with the training target variable?

library(kohonen)
library(HDclassif)
data(wine)
set.seed(7)

training <- sample(nrow(wine), 120)
Xtraining <- scale(wine[training, ])
Xtest <- scale(wine[-training, ],
               center = attr(Xtraining, "scaled:center"),
               scale = attr(Xtraining, "scaled:scale"))

som.wine <- som(Xtraining, grid = somgrid(5, 5, "hexagonal"))

som.prediction$pred <- predict(som.wine, newdata = Xtest,
                               trainX = Xtraining,
                               trainY = factor(Xtraining$class))

Result:

$unit.classif

 [1]  7  7  1  7  1 11  6  2  2  7  7 12 11 11 12  2  7  7  7  1  2  7  2 16 20 24 25 16 13 17 23 22
[33] 24 18  8 22 17 16 22 18 22 22 18 23 22 18 18 13 10 14 15  4  4 14 14 15 15  4

1 Answer:

Answer (score: 1)

This may help:

  • SOM is an unsupervised classification algorithm, so you should not expect it to be trained on a dataset that contains the classifier labels (if you do, it will need that information to work and will be useless on unlabelled datasets)
  • The idea is that it will "translate" an input numeric vector into a network unit number (try running your code again with a 1-by-3 grid and you will get the output you expect)
  • You then need to "translate back" these network unit numbers into the categories you are looking for (this is the key part missing from your code; see the sketch right after this list)
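
As a quick illustration of that "translate back" idea, here is a minimal sketch (names such as som_sketch and unit_to_class are illustrative, not from the original answer): each map unit is assigned the majority class of the training samples mapped to it. The full reproducible example follows below.

#Minimal sketch: map each SOM unit to the majority class of its training samples
library(kohonen)
library(HDclassif)
data(wine)
set.seed(7)
training <- sample(nrow(wine), 120)
Xtraining <- scale(wine[training, -1])
som_sketch <- som(Xtraining, grid = somgrid(1, 3, "hexagonal"))

#Named vector: names are unit numbers, values are the dominant wine class
unit_to_class <- sapply(split(wine[training, 1], som_sketch$unit.classif),
                        function(cl) names(which.max(table(cl))))
unit_to_class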

The reproducible example below computes a standard classification accuracy score. It includes one possible implementation of the "translate back" part that is missing from the original post.

That said, the model overfits pretty quickly on this particular dataset: 3 units give the best result.

#Load the required packages (kohonen for the SOM, HDclassif for the wine data)
library(kohonen)
library(HDclassif)

#Set and scale a training set (-1 to drop the classes)
data(wine)
set.seed(7)
training <- sample(nrow(wine), 120)
Xtraining <- scale(wine[training, -1])

#Scale a test set (-1 to drop the classes)
Xtest <- scale(wine[-training, -1],
               center = attr(Xtraining, "scaled:center"),
               scale = attr(Xtraining, "scaled:scale"))

#Set 2D grid resolution
#WARNING: it overfits pretty quickly
#Accuracy is 36% for 1 unit, 63% for 2, 93% for 3, 89% for 4
som_grid <- somgrid(xdim = 1, ydim=3, topo="hexagonal")

#Create a trained model
som_model <- som(Xtraining, som_grid)

#Make a prediction on test data
som.prediction <- predict(som_model, newdata = Xtest)

#Put together original classes and SOM classifications
error.df <- data.frame(real = wine[-training, 1],
                       predicted = som.prediction$unit.classif)

#For each unit, return the category number with the strongest association in
#the training data passed via df (0 stands for ambiguous)
switch <- sapply(unique(som_model$unit.classif), function(x, df){
  cat <- as.numeric(names(which.max(table(
    df[df$predicted == x, 1]))))
  if(length(cat)<1){
    cat <- 0
  }
  return(c(x, cat))
}, df = data.frame(real = wine[training, 1], predicted = som_model$unit.classif))

#Translate unit numbers back into classes
error.df$corrected <- apply(error.df, MARGIN = 1, function(x, switch){
  cat <- switch[2, which(switch[1,] == x["predicted"])]
  if(length(cat)<1){
    cat <- 0
  }
  return(cat)
}, switch = switch)

#Compute the classification accuracy (share of correctly classified test samples)
sum(error.df$corrected == error.df$real)/length(error.df$real)
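
To see where the remapped predictions disagree with the true labels, the same error.df can also be cross-tabulated into a confusion matrix (a small follow-up sketch; table() is base R):

#Confusion matrix: true classes vs. remapped predictions (0 = ambiguous unit)
table(real = error.df$real, predicted = error.df$corrected)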