Cannot run TensorFlow on Spark

Posted: 2016-08-02 09:25:45

Tags: python apache-spark tensorflow pyspark

I'm trying to get TensorFlow running on my Spark cluster so that it runs in parallel. As a first step, I tried to use the demo as-is.
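
For reference, the demo boils down to roughly the following pattern (a minimal sketch, not the demo's exact code; load_and_label_partition and the sample image names are hypothetical, while labelled_images is the variable that appears in the traceback below):

    from pyspark import SparkContext

    sc = SparkContext(appName="tensorflow-demo")

    def load_and_label_partition(partition):
        # TensorFlow is imported inside the worker function so that each
        # executor loads it in its own Python process.
        import tensorflow as tf
        for image in partition:
            # ... run the TensorFlow graph on the image here ...
            yield (image, "label")

    image_rdd = sc.parallelize(["img1.jpg", "img2.jpg"])
    labelled_images = image_rdd.mapPartitions(load_and_label_partition)
    local_labelled_images = labelled_images.collect()  # this is the call that fails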

The demo runs fine without Spark, but when running it with Spark I get the following error:

16/08/02 10:44:16 INFO DAGScheduler: Job 0 failed: collect at   /home/hdfs/tfspark.py:294, took 1.151383 s
Traceback (most recent call last):
  File "/home/hdfs/tfspark.py", line 294, in <module>
    local_labelled_images = labelled_images.collect()
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 771, in collect
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError16/08/02 10:44:17 INFO BlockManagerInfo: Removed broadcast_2_piece0 on localhost:45020 in memory (size: 6.4 KB, free: 419.5 MB)
16/08/02 10:44:17 INFO ContextCleaner: Cleaned accumulator 2
: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
    command = pickleSer._read_with_length(infile)
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
    return self.loads(obj)
  File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
    return pickle.loads(obj)
  File "/usr/lib/python2.7/site-packages/six.py", line 118, in __getattr__
    _module = self._resolve()
  File "/usr/lib/python2.7/site-packages/six.py", line 115, in _resolve
    return _import_module(self.mod)
  File "/usr/lib/python2.7/site-packages/six.py", line 118, in __getattr__
    _module = self._resolve()
  File "/usr/lib/python2.7/site-packages/six.py", line 115, in _resolve
    return _import_module(self.mod)
  File "/usr/lib/python2.7/site-packages/six.py", line 118, in __getattr__
    _module = self._resolve()
.
.
.
RuntimeError: maximum recursion depth exceeded

I get the same error whether I use pyspark or spark-submit directly.

I tried increasing the recursion limit to 50000 (even though it's probably not the root cause), but it didn't help.
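
Raising the limit looks roughly like this (a minimal sketch; the limit has to be set both on the driver and inside the function shipped to the executors, since each worker runs its own Python process; label_partition is a hypothetical placeholder):

    import sys

    # Raise the limit in the driver process.
    sys.setrecursionlimit(50000)

    def label_partition(partition):
        # Executors run separate Python interpreters, so the limit
        # has to be raised there as well.
        import sys
        sys.setrecursionlimit(50000)
        for record in partition:
            yield record  # ... per-record TensorFlow work would go here ...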

Since the error comes from the six package, I figured Python 3 might fix it, but I haven't tried that yet because it would probably require changes to our production environment (better avoided if we can).

Should Python 3 work better with pyspark? (I know it works with TensorFlow.)

How can I get it to work with Python 2?

I'm running TensorFlow 0.9.0 with Spark 1.6.1 on a HortonWorks cluster on RHEL 7.2, using Python 2.7.5.

Thanks

Update

I tried with Python 3.5 and got the same exception, so apparently upgrading to Python 3 is not a viable workaround.

1 Answer:

Answer 0 (score: 4):

I finally realized that the root cause was the six module itself: it seems to have some compatibility problem with Spark, and breaks whenever it is loaded.

So to work around it, I searched the demo for every usage of the six package and replaced each one with the equivalent Python 2 module (for example, six.moves.urllib.response simply became urllib2). Once all occurrences of six were removed, the demo ran perfectly on Spark.
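
The substitution looks roughly like this (an illustrative sketch of the kind of change, using the common urlopen case rather than the demo's exact lines; the image URL is a placeholder):

    import urllib2

    image_url = "http://example.com/cat.jpg"  # placeholder URL

    # Before: the demo went through six's compatibility layer, e.g.
    #   from six.moves.urllib import request
    #   data = request.urlopen(image_url).read()
    # After: call the plain Python 2 module directly, so six is never imported.
    data = urllib2.urlopen(image_url).read()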