How can I apply multiple indexers and encoders without creating countless intermediate DataFrames?

Asked: 2017-07-27 15:33:51

Tags: scala apache-spark apache-spark-mllib

Here is my code:

import org.apache.spark.ml.feature.{OneHotEncoder, StringIndexer}

val workindexer = new StringIndexer().setInputCol("workclass").setOutputCol("workclassIndex")
val workencoder = new OneHotEncoder().setInputCol("workclassIndex").setOutputCol("workclassVec")

val educationindexer = new StringIndexer().setInputCol("education").setOutputCol("educationIndex")
val educationencoder = new OneHotEncoder().setInputCol("educationIndex").setOutputCol("educationVec")

val maritalindexer = new StringIndexer().setInputCol("marital_status").setOutputCol("maritalIndex")
val maritalencoder = new OneHotEncoder().setInputCol("maritalIndex").setOutputCol("maritalVec")

val occupationindexer = new StringIndexer().setInputCol("occupation").setOutputCol("occupationIndex")
val occupationencoder = new OneHotEncoder().setInputCol("occupationIndex").setOutputCol("occupationVec")

val relationindexer = new StringIndexer().setInputCol("relationship").setOutputCol("relationshipIndex")
val relationencoder = new OneHotEncoder().setInputCol("relationshipIndex").setOutputCol("relationshipVec")

val raceindexer = new StringIndexer().setInputCol("race").setOutputCol("raceIndex")
val raceencoder = new OneHotEncoder().setInputCol("raceIndex").setOutputCol("raceVec")

val sexindexer = new StringIndexer().setInputCol("sex").setOutputCol("sexIndex")
val sexencoder = new OneHotEncoder().setInputCol("sexIndex").setOutputCol("sexVec")

val nativeindexer = new StringIndexer().setInputCol("native_country").setOutputCol("native_countryIndex")
val nativeencoder = new OneHotEncoder().setInputCol("native_countryIndex").setOutputCol("native_countryVec")

val labelindexer = new StringIndexer().setInputCol("label").setOutputCol("labelIndex")

Is there a way to apply all of these encoders and indexers without creating countless intermediate DataFrames?

2 Answers:

Answer 0 (score: 1)

I would use RFormula:

import org.apache.spark.ml.feature.RFormula

val features = Seq("workclass", "education",
   "marital_status", "occupation", "relationship",
   "race", "sex", "native_country")

val formula = new RFormula().setFormula(s"label ~ ${features.mkString(" + ")}")

It applies the same transformations as the indexers and encoders in your example and assembles the features into a single Vector column.
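
As a minimal sketch (assuming the raw data is in a DataFrame called df, a hypothetical name), fitting and applying the formula could look like this:

// `formula` is the RFormula defined above. RFormula indexes each string column,
// one-hot encodes it, and assembles the result into a single `features` vector
// column alongside an indexed `label` column.
val encoded = formula.fit(df).transform(df)
encoded.select("features", "label").show(5)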

Answer 1 (score: 1)

Use the Spark MLlib feature called ML Pipelines:

"ML Pipelines provide a uniform set of high-level APIs built on top of DataFrames that help users create and tune practical machine learning pipelines."

With ML Pipelines, you can "chain" (or "pipeline") the encoders and indexers together "without creating countless intermediate DataFrames":

import org.apache.spark.ml._
val pipeline = new Pipeline().setStages(Array(workindexer, workencoder...))
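
As a follow-up sketch (assuming the raw data is in a DataFrame called df, a hypothetical name, and adding a VectorAssembler that the original snippets do not include), the stages can also be generated programmatically so none of them has to be written out by hand:

import org.apache.spark.ml.{Pipeline, PipelineStage}
import org.apache.spark.ml.feature.{OneHotEncoder, StringIndexer, VectorAssembler}

// One indexer/encoder pair per categorical column, built in a loop.
val categoricalCols = Array("workclass", "education", "marital_status",
  "occupation", "relationship", "race", "sex", "native_country")

val indexAndEncode: Array[PipelineStage] = categoricalCols.flatMap { col =>
  Array[PipelineStage](
    new StringIndexer().setInputCol(col).setOutputCol(s"${col}Index"),
    new OneHotEncoder().setInputCol(s"${col}Index").setOutputCol(s"${col}Vec"))
}

val labelIndexer = new StringIndexer().setInputCol("label").setOutputCol("labelIndex")

// Combine all one-hot vectors into a single `features` column.
val assembler = new VectorAssembler()
  .setInputCols(categoricalCols.map(c => s"${c}Vec"))
  .setOutputCol("features")

val pipeline = new Pipeline()
  .setStages(indexAndEncode ++ Array[PipelineStage](labelIndexer, assembler))

// A single fit/transform runs every stage; no intermediate DataFrames are created by hand.
val encoded = pipeline.fit(df).transform(df)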