Using a Decision Tree classifier in Spark on a DataFrame with string fields

Asked: 2017-02-22 16:38:04

Tags: scala apache-spark dataframe spark-dataframe decision-tree

I managed to get a Decision Tree classifier working with the RDD-based API, but now I am trying to switch to the DataFrame-based API in Spark.

I have a dataset like this (but with many more fields):

country, destination, duration, label

Belgium, France, 10, 0
Bosnia, USA, 120, 1
Germany, Spain, 30, 0

First I load my csv file into a DataFrame:

val data = session.read
  .format("org.apache.spark.csv")
  .option("header", "true")
  .csv("/home/Datasets/data/dataset.csv")

Then I convert my string columns into numeric columns:

val stringColumns = Array("country", "destination")

val index_transformers = stringColumns.map(
  cname => new StringIndexer()
    .setInputCol(cname)
    .setOutputCol(s"${cname}_index")
)

Then I use a VectorAssembler to combine all my features into a single vector, like this:

val assembler = new VectorAssembler()
   .setInputCols(Array("country_index", "destination_index", "duration_index"))
   .setOutputCol("features")

I split my data into a training set and a test set:

val Array(trainingData, testData) = data.randomSplit(Array(0.7, 0.3))

Then I create my DecisionTree classifier:

val dt = new DecisionTreeClassifier()
  .setLabelCol("label")
  .setFeaturesCol("features")

Then I use a Pipeline to chain all the transformations:

val pipeline = new Pipeline()
  .setStages(Array(index_transformers, assembler, dt))

I train my model and use it for predictions:

val model = pipeline.fit(trainingData)

val predictions = model.transform(testData)

But I get some errors that I don't understand. When I run my code, I get this one:

[error]  found   : Array[org.apache.spark.ml.feature.StringIndexer]
[error]  required: org.apache.spark.ml.PipelineStage
[error]           .setStages(Array(index_transformers, assembler,dt))
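(For context: `Pipeline.setStages` expects a flat `Array[PipelineStage]`, so passing `index_transformers`, itself an `Array[StringIndexer]`, as a single element cannot type-check. A minimal sketch of flattening the array, assuming the `index_transformers`, `assembler`, and `dt` values defined above:)

```scala
import org.apache.spark.ml.{Pipeline, PipelineStage}

// Concatenate the indexers with the remaining stages so the whole array
// has the common supertype PipelineStage.
val stages: Array[PipelineStage] = index_transformers ++ Array(assembler, dt)

val pipeline = new Pipeline().setStages(stages)
```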

So what I did was add a separate pipeline right after the index_transformers val, just before the val assembler:

val index_pipeline = new Pipeline().setStages(index_transformers)
val index_model = index_pipeline.fit(data)
val df_indexed = index_model.transform(data)

I then used my new df_indexed DataFrame as training and test set, and removed index_transformers from the pipeline, keeping only the assembler and dt:

val Array(trainingData, testData) = df_indexed.randomSplit(Array(0.7, 0.3))

val pipeline = new Pipeline()
  .setStages(Array(assembler,dt))

And I got this error:

Exception in thread "main" java.lang.IllegalArgumentException: Data type StringType is not supported.

It basically says that I am applying the VectorAssembler to a String, even though I told it to use df_indexed, which now has numeric *_index columns; it just doesn't seem to pick them up in the VectorAssembler, and I don't understand why.
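(One likely cause, an assumption since the schema is never printed: spark-csv reads every column as StringType unless schema inference is enabled, so the `duration` column is still a string when it reaches the `VectorAssembler`. A sketch of two ways to make it numeric:)

```scala
import org.apache.spark.sql.types.DoubleType

// Option 1: let Spark infer numeric types while reading the CSV.
val data = session.read
  .format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/home/Datasets/data/dataset.csv")

// Option 2: cast the offending column explicitly after reading.
val dataCast = data.withColumn("duration", data("duration").cast(DoubleType))
```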

Thanks

EDIT

Now I have almost managed to get it working:

val data = session.read
  .format("org.apache.spark.csv")
  .option("header", "true")
  .csv("/home/hvfd8529/Datasets/dataOINIS/dataset.csv")

val stringColumns = Array("country", "destination", "duration")

val stringColumns_index = stringColumns.map(c => s"${c}_index")

val index_transformers = stringColumns.map(
  cname => new StringIndexer()
    .setInputCol(cname)
    .setOutputCol(s"${cname}_index")
)

val assembler  = new VectorAssembler()
    .setInputCols(stringColumns_index)
    .setOutputCol("features")

val labelIndexer = new StringIndexer()
  .setInputCol("label")
  .setOutputCol("indexedLabel")

val Array(trainingData, testData) = data.randomSplit(Array(0.7, 0.3))

// Train a DecisionTree model.
val dt = new DecisionTreeClassifier()
  .setLabelCol("indexedLabel")
  .setFeaturesCol("features")
  .setImpurity("entropy")
  .setMaxBins(1000)
  .setMaxDepth(15)

// Convert indexed labels back to original labels.
val labelConverter = new IndexToString()
  .setInputCol("prediction")
  .setOutputCol("predictedLabel")
  .setLabels(labelIndexer.labels())

val stages = index_transformers :+ assembler :+ labelIndexer :+ dt :+ labelConverter

val pipeline = new Pipeline()
  .setStages(stages)


// Train model. This also runs the indexers.
val model = pipeline.fit(trainingData)

// Make predictions.
val predictions = model.transform(testData)

// Select example rows to display.
predictions.select("predictedLabel", "label", "features").show(5)

// Select (prediction, true label) and compute test error.
val evaluator = new MulticlassClassificationEvaluator()
  .setLabelCol("indexedLabel")
  .setPredictionCol("prediction")
  .setMetricName("accuracy")
val accuracy = evaluator.evaluate(predictions)
println("accuracy = " + accuracy)

val treeModel = model.stages(2).asInstanceOf[DecisionTreeClassificationModel]
println("Learned classification tree model:\n" + treeModel.toDebugString)

Except that now I get an error saying:

value labels is not a member of org.apache.spark.ml.feature.StringIndexer

I don't understand it, since I am following the example from the Spark docs :/
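(The distinction that matters here: `labels` lives on the fitted `StringIndexerModel`, not on the unfitted `StringIndexer` estimator, and it is a parameterless `val`, so it is written without parentheses. A sketch, assuming a `data` DataFrame with a `label` column:)

```scala
import org.apache.spark.ml.feature.{IndexToString, StringIndexer, StringIndexerModel}

// fit() turns the StringIndexer estimator into a StringIndexerModel,
// which exposes the learned array of label strings.
val labelIndexer: StringIndexerModel = new StringIndexer()
  .setInputCol("label")
  .setOutputCol("indexedLabel")
  .fit(data)

val labelConverter = new IndexToString()
  .setInputCol("prediction")
  .setOutputCol("predictedLabel")
  .setLabels(labelIndexer.labels) // no parentheses: labels is a val
```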

2 Answers:

Answer 0 (score: 0)

It should be:

val pipeline = new Pipeline()
  .setStages(index_transformers ++ Array(assembler, dt): Array[PipelineStage])

Answer 1 (score: 0)

What I did for my first problem:

val stages = index_transformers :+ assembler :+ labelIndexer :+ rf :+ labelConverter

val pipeline = new Pipeline()
  .setStages(stages)

And for my second problem with labels, I needed to use .fit(data), like this:

val labelIndexer = new StringIndexer()
  .setInputCol("label_fraude")
  .setOutputCol("indexedLabel")
  .fit(data)