Pyspark, Decision Tree (Spark 2.0.0)

Date: 2016-10-30 13:04:31

Tags: apache-spark dataframe pyspark apache-spark-sql decision-tree

I am new to Spark (using pyspark). I tried to run the decision tree tutorial from here (link). I execute the code:

from pyspark.ml import Pipeline
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.feature import StringIndexer, VectorIndexer
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.mllib.util import MLUtils

# Load and parse the data file, converting it to a DataFrame.
data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt").toDF()
labelIndexer = StringIndexer(inputCol="label", outputCol="indexedLabel").fit(data)

# Now this line fails
featureIndexer =\
    VectorIndexer(inputCol="features", outputCol="indexedFeatures", maxCategories=4).fit(data)

I get the error message: IllegalArgumentException: u'requirement failed: Column features must be of type org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7 but was actually org.apache.spark.mllib.linalg.VectorUDT@f71b0bce.'

When searching for this error, I found an answer that says:

use from pyspark.ml.linalg import Vectors, VectorUDT 
instead of 
from pyspark.mllib.linalg import Vectors, VectorUDT

This is strange, since I am not using that import anywhere. Also, adding it to my code solved nothing, and I still get the same error.

I am not quite sure how to debug this situation. When looking at the raw data, I see:

data.show()
+--------------------+-----+
|            features|label|
+--------------------+-----+
|(692,[127,128,129...|  0.0|
|(692,[158,159,160...|  1.0|
|(692,[124,125,126...|  1.0|
|(692,[152,153,154...|  1.0|

This looks like a list, starting with '('.

I have no idea how to fix this, or even how to debug it... Any suggestions about what I am doing wrong?

Thanks

1 Answer:

Answer 0: (score: 5)

The source of the problem seems to be running the Spark 1.5.2 example on Spark 2.0.0 (see the reference to the Spark 2.0 example below).

The difference between spark.ml and spark.mllib

As of Spark 2.0, the RDD-based APIs in the spark.mllib package have entered maintenance mode. The primary Machine Learning API for Spark is now the DataFrame-based API in the spark.ml package.

More details can be found here: http://spark.apache.org/docs/latest/ml-guide.html
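For reference, the error in the question comes from mixing the two packages: MLUtils.loadLibSVMFile returns the old mllib vectors, while VectorIndexer in spark.ml expects ml.linalg vectors. If you want to keep the DataFrame-based pipeline from the tutorial, the usual approach is to load the file through the DataFrame reader instead of MLUtils. A minimal sketch (assuming the Spark 2.0 pyspark shell, where spark is the SparkSession, and the same sample data file):

from pyspark.ml.feature import StringIndexer, VectorIndexer

# The DataFrame reader produces a "features" column of ml.linalg vectors,
# which is the type that the spark.ml transformers expect.
data = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

labelIndexer = StringIndexer(inputCol="label", outputCol="indexedLabel").fit(data)

# This no longer fails, because the column type now matches spark.ml's VectorUDT.
featureIndexer = \
    VectorIndexer(inputCol="features", outputCol="indexedFeatures", maxCategories=4).fit(data)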

Using Spark 2.0, please try the Spark 2.0.0 example: https://spark.apache.org/docs/2.0.0/mllib-decision-tree.html

from pyspark.mllib.tree import DecisionTree, DecisionTreeModel
from pyspark.mllib.util import MLUtils

# Load and parse the data file into an RDD of LabeledPoint.
data = MLUtils.loadLibSVMFile(sc, 'data/mllib/sample_libsvm_data.txt')
# Split the data into training and test sets (30% held out for testing)
(trainingData, testData) = data.randomSplit([0.7, 0.3])

# Train a DecisionTree model.
#  Empty categoricalFeaturesInfo indicates all features are continuous.
model = DecisionTree.trainClassifier(trainingData, numClasses=2, categoricalFeaturesInfo={},
                                     impurity='gini', maxDepth=5, maxBins=32)

# Evaluate model on test instances and compute test error
predictions = model.predict(testData.map(lambda x: x.features))
labelsAndPredictions = testData.map(lambda lp: lp.label).zip(predictions)
# Indexing into the pair avoids Python 2-only tuple unpacking in the lambda
testErr = labelsAndPredictions.filter(lambda vp: vp[0] != vp[1]).count() / float(testData.count())
print('Test Error = ' + str(testErr))
print('Learned classification tree model:')
print(model.toDebugString())

# Save and load model
model.save(sc, "target/tmp/myDecisionTreeClassificationModel")
sameModel = DecisionTreeModel.load(sc, "target/tmp/myDecisionTreeClassificationModel")

Find the full example code at "examples/src/main/python/mllib/decision_tree_classification_example.py" in the Spark repo.