PySpark pyspark.rdd.PipelinedRDD not working with a model

Posted: 2017-06-07 06:59:40

Tags: apache-spark pyspark apache-spark-sql

I cannot pass an RDD to a PySpark logistic regression model. I am using Spark 2.0.1. Any help would be greatly appreciated.

>>> from pyspark import SparkContext, HiveContext
>>> from pyspark.mllib.regression import LabeledPoint
>>> from pyspark.mllib.classification import LogisticRegressionWithLBFGS
>>> from pyspark.mllib.util import MLUtils
>>>
>>> table_name = "api_model"
>>> target_col = "dv"
>>>
>>>
>>> hc = HiveContext(sc)
>>>
>>> # get the table from the hive context
... df = hc.table(table_name)
>>> df = df.select(target_col, *[col for col in df.columns if col != target_col])
>>>
>>> # map through the data to produce an rdd of labeled points
... rdd_of_labeled_points = df.rdd.map(lambda row: LabeledPoint(row[0], row[1:]))
>>> print (rdd_of_labeled_points.take(3))
[LabeledPoint(1.0, [0.0,2.520784472,0.0,0.0,0.0,2.004684436,2.000347299,0.0,2.228387043,2.228387043,0.0,0.0,0.0,0.0,0.0,0.0]), LabeledPoint(0.0, [2.857738033,0.0,0.0,2.619965104,0.0,2.004684436,2.000347299,0.0,2.228387043,2.228387043,0.0,0.0,0.0,0.0,0.0,0.0]), LabeledPoint(0.0, [2.857738033,0.0,2.061393767,0.0,0.0,2.004684436,0.0,0.0,2.228387043,2.228387043,0.0,0.0,0.0,0.0,0.0,0.0])]
>>>
>>> from pyspark.ml.classification import LogisticRegression
>>> lr = LogisticRegression(maxIter=10, regParam=0.3, elasticNetParam=0.8)
>>> lrModel = lr.fit(sc.parallelize(rdd_of_labeled_points))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/hdp/current/spark2-client/python/pyspark/context.py", line 432, in parallelize
    c = list(c)    # Make it a list so we can compute its length
TypeError: 'PipelinedRDD' object is not iterable

1 Answer:

Answer (score: 2)

This happens because you are calling sc.parallelize on something that is already an RDD. This is the offending line:

sc.parallelize(rdd_of_labeled_points)
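sc.parallelize exists to turn a local Python collection into an RDD; rdd_of_labeled_points is already an RDD, so there is nothing to parallelize. A toy illustration of the intended usage (the values here are made up):

local_points = [LabeledPoint(1.0, [0.0, 2.5]), LabeledPoint(0.0, [2.8, 0.0])]  # a plain Python list
rdd = sc.parallelize(local_points)  # distributing a local collection is what parallelize is for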

You are also mixing spark-ml and spark-mllib:

from pyspark.mllib.classification import LogisticRegressionWithLBFGS  # spark-mllib, RDD-based API

from pyspark.ml.classification import LogisticRegression  # spark-ml, DataFrame-based API

lrModel = lr.fit(sc.parallelize(rdd_of_labeled_points))  # a spark-ml estimator being fed an RDD

In the first case, as noted above, you train the model directly on the RDD, using the LogisticRegressionWithLBFGS class you already imported from spark-mllib, for example:

model = LogisticRegressionWithLBFGS.train(rdd_of_labeled_points, iterations=100)  # mllib trains directly on the RDD
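As a quick sanity check, here is a sketch that reuses rdd_of_labeled_points from the question to measure training error with the model trained above:

# Apply the trained mllib model back to the training RDD
labels_and_preds = rdd_of_labeled_points.map(lambda lp: (lp.label, model.predict(lp.features)))
train_err = labels_and_preds.filter(lambda lp: lp[0] != lp[1]).count() / float(rdd_of_labeled_points.count())
print("Training Error = " + str(train_err))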

In the second case, you need to convert the RDD into a DataFrame before feeding it to the model, as sketched below.
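A minimal sketch of that conversion, assuming Spark 2.0.1 as in the question. Note that spark-ml expects a DataFrame with label and features columns, and the features must use the pyspark.ml.linalg vector type, not the pyspark.mllib one:

from pyspark.ml.linalg import Vectors
from pyspark.ml.classification import LogisticRegression

# Rebuild each LabeledPoint as (label, ml-style vector) and name the columns
train_df = rdd_of_labeled_points.map(lambda lp: (float(lp.label), Vectors.dense(lp.features.toArray()))).toDF(["label", "features"])

lr = LogisticRegression(maxIter=10, regParam=0.3, elasticNetParam=0.8)
lrModel = lr.fit(train_df)  # fit now receives a DataFrame, as spark-ml requires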

I strongly recommend reading the official documentation; it has plenty of examples to get you started.

Remember:

  • spark-mllib works with RDDs.
  • spark-ml works with DataFrames.