mleap AttributeError: 'Pipeline' object has no attribute 'serializeToBundle'

Posted: 2017-09-18 20:36:15

Tags: python pyspark mleap

I'm having trouble running the example code from the mleap repository. I want to run the code in a script rather than a Jupyter notebook (which is how the example is presented). My script is as follows:


##################################################################################
# start a local spark session
# https://spark.apache.org/docs/0.9.0/python-programming-guide.html
##################################################################################
from pyspark import SparkContext, SparkConf

conf = SparkConf()
# set app name
conf.set("spark.app.name", "train classifier")
# run Spark locally with as many worker threads as logical cores on your machine (cores x threads)
conf.set("spark.master", "local[*]")
# number of cores to use for the driver process (only in cluster mode)
conf.set("spark.driver.cores", "1")
# limit on the total size of serialized results of all partitions for each Spark action (e.g. collect)
conf.set("spark.driver.maxResultSize", "1g")
# amount of memory to use for the driver process
conf.set("spark.driver.memory", "1g")
# amount of memory to use per executor process (e.g. 2g, 8g)
conf.set("spark.executor.memory", "2g")
# pass the configuration to the SparkContext along with code dependencies
sc = SparkContext(conf=conf)

from pyspark.sql.session import SparkSession
spark = SparkSession(sc)

##################################################################################
# Import MLeap serialization functionality for PySpark
import mleap.pyspark
from mleap.pyspark.spark_support import SimpleSparkSerializer

# Import standard PySpark transformers and packages
from pyspark.ml.feature import VectorAssembler, StandardScaler, OneHotEncoder, StringIndexer
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql import Row

# Create a test data frame
l = [('Alice', 1), ('Bob', 2)]
rdd = sc.parallelize(l)
Person = Row('name', 'age')
person = rdd.map(lambda r: Person(*r))
df2 = spark.createDataFrame(person)
df2.collect()

# Build a very simple pipeline using two transformers
string_indexer = StringIndexer(inputCol='name', outputCol='name_string_index')
feature_assembler = VectorAssembler(
    inputCols=[string_indexer.getOutputCol()], outputCol="features")
feature_pipeline = [string_indexer, feature_assembler]
featurePipeline = Pipeline(stages=feature_pipeline)
featurePipeline.fit(df2)
featurePipeline.serializeToBundle("jar:file:/tmp/pyspark.example.zip")

Running it with

spark-submit script.py

fails with the error from the title:

AttributeError: 'Pipeline' object has no attribute 'serializeToBundle'

Any help would be greatly appreciated! I installed mleap from PyPI.

3 Answers:

Answer 0 (score: 0)

It looks like you didn't follow the steps correctly. http://mleap-docs.combust.ml/getting-started/py-spark.html states:

Note: the import of mleap.pyspark needs to happen before any other PySpark libraries are imported.

So try moving the mleap import above the SparkContext and the other PySpark imports.
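
For example, a minimal sketch of the reordered imports at the top of script.py (the rest of the script unchanged, assuming the installed mleap version patches the PySpark classes at import time as the linked docs describe):

# mleap.pyspark must be imported before any other PySpark module:
# this import is what attaches serializeToBundle to the PySpark
# Pipeline/PipelineModel classes.
import mleap.pyspark
from mleap.pyspark.spark_support import SimpleSparkSerializer

# Only now import the rest of PySpark
from pyspark import SparkContext, SparkConf
from pyspark.sql.session import SparkSession
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.feature import VectorAssembler, StringIndexer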

Answer 1 (score: 0)

I solved this problem by attaching the following jar packages at runtime:

spark-submit --packages ml.combust.mleap:mleap-spark_2.11:0.8.1  script.py
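
Alternatively, since the script builds its own SparkConf, the same dependency can be declared there instead of on the command line. A sketch, assuming the same ml.combust.mleap:mleap-spark_2.11:0.8.1 artifact as in the command above:

from pyspark import SparkContext, SparkConf

conf = SparkConf()
conf.set("spark.app.name", "train classifier")
conf.set("spark.master", "local[*]")
# Fetch the MLeap Spark integration jar (and its transitive dependencies)
# from Maven so the JVM side of serializeToBundle is on the classpath.
conf.set("spark.jars.packages", "ml.combust.mleap:mleap-spark_2.11:0.8.1")
sc = SparkContext(conf=conf)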

Answer 2 (score: 0)

See Here.

It seems MLeap isn't ready for Spark 2.3 yet. If you're running Spark 2.3, try downgrading to 2.2 and retrying. Hope this helps!
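
As a quick sanity check before downgrading (a sketch; spark is the SparkSession created in the script above):

# Print the running Spark version to confirm whether you are actually on 2.3
print(spark.version)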