I'm trying to build a Spark ML pipeline with a random forest classifier to perform classification (not regression), but I get an error saying the prediction label in my training set should be a double rather than an integer. I followed the instructions on these pages:
" Classification and regression - spark.ml" (apache.org)
" How to create correct data frame for classification in Spark ML" (stack overflow.com)
" Spark MLLib - Predict Store Sales with ML Pipelines" (sparktutorials.net)
I have a Spark dataframe with the following columns:
scala> df.show(5)
+-------+----------+----------+---------+-----+
| userId|duration60|duration30|duration1|label|
+-------+----------+----------+---------+-----+
|user000|        11|        21|       35|    3|
|user001|        28|        41|       28|    4|
|user002|        17|         6|        8|    2|
|user003|        39|        29|        0|    1|
|user004|        26|        23|       25|    3|
+-------+----------+----------+---------+-----+
scala> df.printSchema()
root
|-- userId: string (nullable = true)
|-- duration60: integer (nullable = true)
|-- duration30: integer (nullable = true)
|-- duration1: integer (nullable = true)
|-- label: integer (nullable = true)
I'm using the feature columns duration60, duration30, and duration1 to predict the categorical column label.
I then set up my Spark script like this:
import org.apache.log4j.Logger
import org.apache.log4j.Level
import org.apache.spark.sql.SQLContext
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.classification.{RandomForestClassificationModel, RandomForestClassifier}
import org.apache.spark.ml.{Pipeline, PipelineModel}
Logger.getLogger("org").setLevel(Level.ERROR)
Logger.getLogger("akka").setLevel(Level.ERROR)
val sqlContext = new SQLContext(sc)
val df = sqlContext.read.
format("com.databricks.spark.csv").
option("header", "true"). // Use first line of all files as header
option("inferSchema", "true"). // Automatically infer data types
load("/tmp/features.csv").
withColumnRenamed("satisfaction", "label").
select("userId", "duration60", "duration30", "duration1", "label")
val assembler = new VectorAssembler().
setInputCols(Array("duration60", "duration30", "duration1")).
setOutputCol("features")
val randomForest = new RandomForestClassifier().
setLabelCol("label").
setFeaturesCol("features").
setNumTrees(10)
val pipeline = new Pipeline().setStages(Array(assembler, randomForest))
val model = pipeline.fit(df)
The transformed dataframe looks like this:
scala> assembler.transform(df).show(5)
+-------+----------+----------+---------+-----+----------------+
| userId|duration60|duration30|duration1|label| features|
+-------+----------+----------+---------+-----+----------------+
|user000|        11|        21|       35|    3|[11.0,21.0,35.0]|
|user001|        28|        41|       28|    4|[28.0,41.0,28.0]|
|user002|        17|         6|        8|    2|  [17.0,6.0,8.0]|
|user003|        39|        29|        0|    1| [39.0,29.0,0.0]|
|user004|        26|        23|       25|    3|[26.0,23.0,25.0]|
+-------+----------+----------+---------+-----+----------------+
However, the last line throws an exception:
java.lang.IllegalArgumentException: requirement failed: Column label must be of type DoubleType but was actually IntegerType.
What does this mean, and how do I fix it?
Why must the label column be a double? I'm doing prediction, not regression, so I thought a string or an integer would be correct; a double-valued prediction column usually implies regression.
Answer 0 (score: 5)
Cast the label to DoubleType, since that is the type the algorithm expects:
import org.apache.spark.sql.types._
df.withColumn("label", 'label cast DoubleType)
So, right where you define val df in your application, do the cast as the last line of the chain:
import org.apache.spark.sql.types._
val df = sqlContext.read.
format("com.databricks.spark.csv").
option("header", "true"). // Use first line of all files as header
option("inferSchema", "true"). // Automatically infer data types
load("/tmp/features.csv").
withColumnRenamed("satisfaction", "label").
select("userId", "duration60", "duration30", "duration1", "label")
.withColumn("label", 'label cast DoubleType) // <-- HERE
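With the label cast to double, the pipeline fits without the exception. As an illustrative continuation (a sketch only; the train/test split and seed are my additions, reusing the pipeline defined in the question), you could hold out some data and inspect the predictions:
val Array(train, test) = df.randomSplit(Array(0.8, 0.2), seed = 42)
val model = pipeline.fit(train)
model.transform(test).select("userId", "label", "prediction").show(5)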
Note that I used the 'label symbol (a single quote ' followed by the column name) to reference the label column (I could also have used $"label", col("label"), or df("label")).
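For illustration, a side-by-side sketch of those equivalent forms (in the spark-shell the implicits needed for the symbol and $ syntaxes are already in scope; otherwise import sqlContext.implicits._):
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types._
// Each of these references the same column and returns a new DataFrame:
df.withColumn("label", 'label cast DoubleType)        // Symbol syntax
df.withColumn("label", $"label" cast DoubleType)      // string interpolator
df.withColumn("label", col("label") cast DoubleType)  // functions.col
df.withColumn("label", df("label") cast DoubleType)   // DataFrame.apply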
Answer 1 (score: 1)
In pyspark:
from pyspark.sql.types import DoubleType
df = df.withColumn("label", df.label.cast(DoubleType()))
Answer 2 (score: 0)
If you are using pyspark and facing the same problem:
from pyspark.ml.feature import StringIndexer
stringIndexer = StringIndexer(inputCol="label", outputCol="newlabel")
model = stringIndexer.fit(df)
df = model.transform(df)
df.printSchema()
This is one way to get a double-typed label column. Note that StringIndexer writes the re-encoded labels (as doubles, indexed by frequency) to the new newlabel column, so you would then point the classifier at it with setLabelCol("newlabel").