PySpark to PMML - "Field label does not exist" error

Date: 2017-06-27 11:20:55

Tags: pyspark apache-spark-ml pmml

I am new to PySpark, so this may be a basic question. I am trying to export PySpark code to PMML using the JPMML-SparkML library. When I run the example from the JPMML-SparkML website:

from pyspark.ml import Pipeline
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.feature import RFormula

df = spark.read.csv("Iris.csv", header = True, inferSchema = True)
formula = RFormula(formula = "Species ~ .")
classifier = DecisionTreeClassifier()
pipeline = Pipeline(stages = [formula, classifier])
pipelineModel = pipeline.fit(df)

I get the error Field "label" does not exist. The same error appears when running the Scala code from the same page. Does anyone know what this "label" field refers to? It seems to be hidden somewhere in the Spark code executed behind the scenes. I doubt that this label field is supposed to be part of the Iris dataset itself.

Full error message:

Traceback (most recent call last):
  File "/usr/lib/spark/spark-2.1.1-bin-hadoop2.7/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/usr/lib/spark/spark-2.1.1-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o48.fit.
: java.lang.IllegalArgumentException: Field "label" does not exist.
    at org.apache.spark.sql.types.StructType$$anonfun$apply$1.apply(StructType.scala:264)
    at org.apache.spark.sql.types.StructType$$anonfun$apply$1.apply(StructType.scala:264)
    at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
    at scala.collection.AbstractMap.getOrElse(Map.scala:59)
    at org.apache.spark.sql.types.StructType.apply(StructType.scala:263)
    at org.apache.spark.ml.util.SchemaUtils$.checkNumericType(SchemaUtils.scala:71)
    at org.apache.spark.ml.PredictorParams$class.validateAndTransformSchema(Predictor.scala:53)
    at org.apache.spark.ml.classification.Classifier.org$apache$spark$ml$classification$ClassifierParams$$super$validateAndTransformSchema(Classifier.scala:58)
    at org.apache.spark.ml.classification.ClassifierParams$class.validateAndTransformSchema(Classifier.scala:42)
    at org.apache.spark.ml.classification.ProbabilisticClassifier.org$apache$spark$ml$classification$ProbabilisticClassifierParams$$super$validateAndTransformSchema(ProbabilisticClassifier.scala:53)
    at org.apache.spark.ml.classification.ProbabilisticClassifierParams$class.validateAndTransformSchema(ProbabilisticClassifier.scala:37)
    at org.apache.spark.ml.classification.ProbabilisticClassifier.validateAndTransformSchema(ProbabilisticClassifier.scala:53)
    at org.apache.spark.ml.Predictor.transformSchema(Predictor.scala:122)
    at org.apache.spark.ml.PipelineStage.transformSchema(Pipeline.scala:74)
    at org.apache.spark.ml.Predictor.fit(Predictor.scala:90)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)

Thanks, Michal

1 answer:

Answer 0 (score: 1)

You need to tell the classifier which column to predict as the label. You can either alias the target column in your DataFrame as "label" and use the classifier with its defaults, or pass the column name to the classifier via the labelCol parameter.

classifier = DecisionTreeClassifier(labelCol='some prediction field')