Python DEAP running on pySpark is unable to find the creator-defined classes

Asked: 2018-03-02 23:53:18

Tags: python apache-spark pyspark deap

Question

I am trying to run a Python DEAP genetic algorithm in a Jupyter notebook, parallelized with pySpark. From some research, I understand that the essence of this is to register a Spark-based map function with the toolbox, so that Spark carries out the fitness evaluations (code below).

My question is: how do I expose the creator functions to the SparkContext? Does Spark require any special handling here compared to SCOOP and multiprocessing?

Code

The essence of the code I am running is as follows, with a SparkContext (sc) already created:

from functools import partial
from random import sample

import deap as ea
from deap import creator, base, tools, algorithms

ea.creator.create("FitnessMin", ea.base.Fitness, weights=(-1.0,))
ea.creator.create("Individual", list, fitness=ea.creator.FitnessMin)

toolbox = ea.base.Toolbox()
def sparkMap(evaluate, population):
    # DEAP passes the offspring as a single iterable here, so parallelize it
    # directly rather than capturing it in a *args tuple
    return sc.parallelize(population).map(evaluate)

toolbox.register("map", sparkMap)  # Set DEAP to run on a machine cluster using Spark
hallOfFame = tools.HallOfFame(2)
# POPULATION_SIZE etc. are defined elsewhere, as are the toolbox
# registrations for population, select and evaluate
population = toolbox.population(n=POPULATION_SIZE)
tools.initIterate(list, partial(sample, range(MAX_NUMBER_OF_CLUSTERS), MAX_NUMBER_OF_CLUSTERS))

gen = 0
while gen < NUMBER_OF_GENERATIONS:
    # Update population
    population = toolbox.select(population, k=len(population))
    population = [toolbox.clone(ind) for ind in population]
    population = ea.algorithms.varAnd(population, toolbox, cxpb=cxpb, mutpb=mutpb)

    offspring = [individual for individual in population if not individual.fitness.valid]
    fits = toolbox.map(toolbox.evaluate, offspring).collect()

    for fit, ind in zip(fits, offspring):
        ind.fitness.values = fit

    #Update hall of fame to ensure we always know the best found solution
    hallOfFame.update(offspring)    

    gen += 1

best = hallOfFame[0]

However, this results in an error stating:

AttributeError: Can't get attribute 'Individual' on <module 'deap.creator' from '/gpfs/fs01/user/s093-7b1ca9741d3405-545a66b5b986/.local/lib/python3.5/site-packages/deap/creator.py'>

My understanding is that, for other parallelization setups such as SCOOP and Python multiprocessing, the deap.creator methods must be part of the global scope. Since I am working in a Jupyter notebook, that is the case for the code above. Running "%who" also shows that, among many others, these are listed in the global scope:

creator   ea   sparkMap   toolbox
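
For context, a quick way to see why the driver's global scope alone does not help (a minimal diagnostic sketch, not from the original post, assuming the same SparkContext sc as above): each Spark executor is a separate Python process that re-imports deap.creator, and the Individual attribute only exists in processes where creator.create has actually been executed.

# Minimal diagnostic sketch: check whether the creator-defined class exists
# on the executors. Each worker process has its own copy of the deap.creator
# module, independent of the notebook's globals on the driver.
def creator_has_individual(_):
    import deap.creator
    return hasattr(deap.creator, "Individual")

# On a cluster this typically prints [False, False], even though %who lists
# creator in the driver's global scope.
print(sc.parallelize(range(2), 2).map(creator_has_individual).collect())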

Error message

The full error message:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-45-49c0cc9cf0b4> in <module>()
     28 
     29     offspring = [individual for individual in population if not individual.fitness.valid]
---> 30     fits = toolbox.map(toolbox.evaluate, offspring).collect()
     31     #print('---------------------------------------------------------------------')
     32     #print('fits',fits)

/usr/local/src/spark21master/spark/python/pyspark/rdd.py in collect(self)
    806         """
    807         with SCCallSiteSync(self.context) as css:
--> 808             port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
    809         return list(_load_from_socket(port, self._jrdd_deserializer))
    810 

/usr/local/src/spark21master/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1131         answer = self.gateway_client.send_command(command)
   1132         return_value = get_return_value(
-> 1133             answer, self.gateway_client, self.target_id, self.name)
   1134 
   1135         for temp_arg in temp_args:

/usr/local/src/spark21master/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/usr/local/src/spark21master/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    317                 raise Py4JJavaError(
    318                     "An error occurred while calling {0}{1}{2}.\n".
--> 319                     format(target_id, ".", name), value)
    320             else:
    321                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1.0 failed 10 times, most recent failure: Lost task 1.9 in stage 1.0 (TID 21, yp-spark-dal09-env5-0024, executor 6679c417-036c-45c6-9b7e-92e96c9751eb): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/local/src/spark21master/spark-2.1.2-bin-2.7.3/python/lib/pyspark.zip/pyspark/worker.py", line 171, in main
    process()
  File "/usr/local/src/spark21master/spark-2.1.2-bin-2.7.3/python/lib/pyspark.zip/pyspark/worker.py", line 166, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/usr/local/src/spark21master/spark-2.1.2-bin-2.7.3/python/lib/pyspark.zip/pyspark/serializers.py", line 268, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "/usr/local/src/spark21master/spark-2.1.2-bin-2.7.3/python/lib/pyspark.zip/pyspark/serializers.py", line 144, in load_stream
    yield self._read_with_length(stream)
  File "/usr/local/src/spark21master/spark-2.1.2-bin-2.7.3/python/lib/pyspark.zip/pyspark/serializers.py", line 169, in _read_with_length
    return self.loads(obj)
  File "/usr/local/src/spark21master/spark-2.1.2-bin-2.7.3/python/lib/pyspark.zip/pyspark/serializers.py", line 455, in loads
    return pickle.loads(obj, encoding=encoding)
AttributeError: Can't get attribute 'Individual' on <module 'deap.creator' from '/gpfs/fs01/user/s093-7b1ca9741d3405-545a66b5b986/.local/lib/python3.5/site-packages/deap/creator.py'>

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)   at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)     at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:326)  at org.apache.spark.rdd.RDD.iterator(RDD.scala:290)     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)   at org.apache.spark.scheduler.Task.run(Task.scala:99)   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:326)    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153)  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)  at java.lang.Thread.run(Thread.java:785)

Driver stacktrace:  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1442)    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1430)     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1429)     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)   at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1429)  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:803)     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:803)     at scala.Option.foreach(Option.scala:257)   at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:803)  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1657)     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1612)   at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1601)   at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)  at java.lang.Thread.getStackTrace(Thread.java:1117)     at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:629)   at org.apache.spark.SparkContext.runJob(SparkContext.scala:1941)    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1954)    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1967)    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1981)    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:956)     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)   at org.apache.spark.rdd.RDD.withScope(RDD.scala:381)    at org.apache.spark.rdd.RDD.collect(RDD.scala:955)  at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:453)  at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)    at java.lang.reflect.Method.invoke(Method.java:507)     at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)     at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)   at py4j.Gateway.invoke(Gateway.java:280)    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)     at py4j.commands.CallCommand.execute(CallCommand.java:79)   at py4j.GatewayConnection.run(GatewayConnection.java:214)   at java.lang.Thread.run(Thread.java:785) Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):   File "/usr/local/src/spark21master/spark-2.1.2-bin-2.7.3/python/lib/pyspark.zip/pyspark/worker.py", line 171, in main
    process()
  File "/usr/local/src/spark21master/spark-2.1.2-bin-2.7.3/python/lib/pyspark.zip/pyspark/worker.py", line 166, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/usr/local/src/spark21master/spark-2.1.2-bin-2.7.3/python/lib/pyspark.zip/pyspark/serializers.py", line 268, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "/usr/local/src/spark21master/spark-2.1.2-bin-2.7.3/python/lib/pyspark.zip/pyspark/serializers.py", line 144, in load_stream
    yield self._read_with_length(stream)
  File "/usr/local/src/spark21master/spark-2.1.2-bin-2.7.3/python/lib/pyspark.zip/pyspark/serializers.py", line 169, in _read_with_length
    return self.loads(obj)
  File "/usr/local/src/spark21master/spark-2.1.2-bin-2.7.3/python/lib/pyspark.zip/pyspark/serializers.py", line 455, in loads
    return pickle.loads(obj, encoding=encoding)
AttributeError: Can't get attribute 'Individual' on <module 'deap.creator' from '/gpfs/fs01/user/s093-7b1ca9741d3405-545a66b5b986/.local/lib/python3.5/site-packages/deap/creator.py'>

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)   at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)     at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:326)  at org.apache.spark.rdd.RDD.iterator(RDD.scala:290)     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)   at org.apache.spark.scheduler.Task.run(Task.scala:99)   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:326)    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153)  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)  ... 1 more

1 Answer

Answer 0 (score: 0)

This is a known issue; see https://github.com/DEAP/deap/issues/268.
The discussion there mentions a pull request (https://github.com/DEAP/deap/pull/76), and the fixed code/branch appears to live in a forked repo.

It sounds like rebuilding the package with that code would resolve this problem.
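
If rebuilding the package is not practical, a common workaround for this class of pickling error is to recreate the creator classes on each executor before any Individual is unpickled. Below is a minimal sketch (my own suggestion, not the code from the linked pull request) that does this inside mapPartitions; because the partition iterator is lazy, the creator.create calls run before pickle looks up deap.creator.Individual. It assumes the FitnessMin/Individual definitions and the sc and toolbox objects from the question, and that toolbox.evaluate itself is picklable.

# Sketch of a workaround that avoids patching DEAP (assumptions noted above).
def sparkMap(evaluate, population):
    def eval_partition(iterator):
        # Runs on the executor. The creator classes are (re)created before
        # the lazy iterator is consumed, i.e. before any pickled Individual
        # is deserialized in this worker process.
        from deap import base, creator
        if not hasattr(creator, "Individual"):
            creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
            creator.create("Individual", list, fitness=creator.FitnessMin)
        for individual in iterator:
            yield evaluate(individual)
    return sc.parallelize(population).mapPartitions(eval_partition)

toolbox.register("map", sparkMap)

Since mapPartitions still returns an RDD, the toolbox.map(toolbox.evaluate, offspring).collect() call in the question's loop works unchanged. A simpler alternative, if the evaluation function only needs the genome, is to parallelize plain copies instead, e.g. sc.parallelize([list(ind) for ind in population]).map(evaluate), so the executors never unpickle a creator-defined class at all.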