Transforming many features in a DataFrame with Spark ML

Time: 2017-07-11 07:40:32

Tags: scala machine-learning apache-spark-mllib

I am following this tutorial https://mapr.com/blog/churn-prediction-sparkml/ and I realized that the CSV schema has to be written out by hand, like this:

import org.apache.spark.sql.types._

val schema = StructType(Array(
    StructField("state", StringType, true),
    StructField("len", IntegerType, true),
    StructField("acode", StringType, true),
    StructField("intlplan", StringType, true),
    StructField("vplan", StringType, true),
    StructField("numvmail", DoubleType, true),
    StructField("tdmins", DoubleType, true),
    StructField("tdcalls", DoubleType, true),
    StructField("tdcharge", DoubleType, true),
    StructField("temins", DoubleType, true),
    StructField("tecalls", DoubleType, true),
    StructField("techarge", DoubleType, true),
    StructField("tnmins", DoubleType, true),
    StructField("tncalls", DoubleType, true),
    StructField("tncharge", DoubleType, true),
    StructField("timins", DoubleType, true),
    StructField("ticalls", DoubleType, true),
    StructField("ticharge", DoubleType, true),
    StructField("numcs", DoubleType, true),
    StructField("churn", StringType, true)
))
But I have a dataset with 335 features, so I don't want to write them all out by hand... Is there a simple way to retrieve them and define the schema accordingly?
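For reference, Spark's built-in CSV reader can infer the schema automatically by sampling the data, which avoids hand-writing hundreds of `StructField`s. A minimal sketch (assuming Spark 2.x, a CSV with a header row, and a hypothetical path `data/churn.csv`):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("ChurnSchemaInference")
  .master("local[*]")
  .getOrCreate()

// "header" takes column names from the first line;
// "inferSchema" triggers an extra pass over the data to guess column types.
val df = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("data/churn.csv")  // hypothetical path

// The inferred schema is available as a StructType for inspection or reuse.
df.printSchema()
```

Note that `inferSchema` reads the file an extra time, so on very large datasets it can be worth inferring once, capturing `df.schema`, and passing that `StructType` explicitly on subsequent loads.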

1 answer:

Answer 0 (score: 0)

I found the solution here: https://dzone.com/articles/using-apache-spark-dataframes-for-processing-of-ta It was easier than I thought.
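If the inferred types are not what you want (e.g. everything should default to `DoubleType` except a few known string columns), the schema can also be built programmatically from the header line instead of written by hand. A sketch under those assumptions, with a hypothetical path and column-name list:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession.builder()
  .appName("ProgrammaticSchema")
  .master("local[*]")
  .getOrCreate()

// Hypothetical: names of the columns that should stay strings.
val stringCols = Set("state", "acode", "intlplan", "vplan", "churn")

// Read only the header line to get the 335 column names.
val header = spark.sparkContext.textFile("data/churn.csv").first()  // hypothetical path
val colNames = header.split(",")

// Map each name to a StructField, choosing the type by a simple rule.
val schema = StructType(colNames.map { name =>
  val dataType = if (stringCols.contains(name)) StringType else DoubleType
  StructField(name, dataType, nullable = true)
})

// Load the full file with the generated schema, skipping the header row.
val df = spark.read
  .option("header", "true")
  .schema(schema)
  .csv("data/churn.csv")
```

The mapping rule here (strings vs. doubles) is only an illustration; the point is that `StructType` accepts any `Seq[StructField]`, so the schema can be generated from the data rather than typed out.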