Does multiprocessing/pooling improve PySpark processing time?

Asked: 2018-09-17 21:56:32

Tags: apache-spark pyspark

We are trying to evaluate whether Python multiprocessing actually provides any benefit within the Spark framework, specifically with PySpark. The current setup runs on an EMR cluster with a single master node and a single worker node.

Our standalone script processes, say, one day's transaction files without any problem. We would like to run that same script for several days in parallel. The expectation is that if one day's data takes 5 minutes to process, then processing two days' data in parallel should finish in roughly 5 to 7 minutes rather than 10. A simplified sketch of what we are attempting is shown below.
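For context, the launch pattern we are experimenting with looks roughly like the following sketch (simplified: the S3 paths and the list of days are placeholders, and process_daily_track_file stands in for the per-day logic in our actual script):

import multiprocessing

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_matching").getOrCreate()

def process_daily_track_file(day):
    # Per-day DataFrame pipeline; this is where the groupBy/agg in the
    # traceback below is invoked.
    a_gras_data_df = spark.read.parquet("s3://bucket/input/{}".format(day))
    isrc_upc_prod_no_df = (a_gras_data_df
                           .groupBy("isrc_cd", "upc_cd")
                           .agg(F.max("product_no").alias("product_no")))
    isrc_upc_prod_no_df.write.mode("overwrite").parquet("s3://bucket/output/{}".format(day))

# One OS process per day, in the hope that two days finish in roughly
# the time of one.
days = ["2018-09-01", "2018-09-02"]
procs = [multiprocessing.Process(target=process_daily_track_file, args=(d,)) for d in days]
for p in procs:
    p.start()
for p in procs:
    p.join()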

However, we are running into a number of problems, with DataFrame operations such as groupBy throwing errors like the following:

Process Process-2:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/multiprocessing/process.py", line 267, in _bootstrap
    self.run()
  File "/usr/lib64/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "/home/hadoop/./script/spotify/spt_gras_matching_mp.py", line 27, in process_daily_track_file
    isrc_upc_prod_no_df = a_gras_data_df.groupBy("isrc_cd", "upc_cd").agg(max("product_no")).withColumnRenamed("max(product_no)", "product_no")
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/dataframe.py", line 1268, in groupBy
    jgd = self._jdf.groupBy(self._jcols(*cols))
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/dataframe.py", line 998, in _jcols
    return self._jseq(cols, _to_java_column)
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/dataframe.py", line 985, in _jseq
    return _to_seq(self.sql_ctx._sc, cols, converter)
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/column.py", line 66, in _to_seq
    cols = [converter(c) for c in cols]
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/column.py", line 48, in _to_java_column
    jcol = _create_column_from_name(col)
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/column.py", line 41, in _create_column_from_name
    return sc._jvm.functions.col(name)
  File "/usr/lib/spark/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1559, in __getattr__
    raise Py4JError("{0} does not exist in the JVM".format(name))
Py4JError: functions does not exist in the JVM

Before we invest effort in fixing the error above, the more fundamental question is whether developer-side parallelization is worthwhile at all. Is what we are attempting redundant work that Spark may already be doing for us? One alternative we are weighing is sketched below.
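For comparison, the alternative we are weighing is to keep everything inside a single SparkSession and parallelize only the job submission, using threads so that the driver's SparkContext and JVM gateway are shared rather than forked. A minimal sketch under those assumptions (paths and day list are again placeholders):

import threading

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_matching_threads").getOrCreate()

def process_day(day):
    # Threads share the driver's SparkContext, so each call simply submits
    # another Spark job; the scheduler interleaves them across executors.
    df = spark.read.parquet("s3://bucket/input/{}".format(day))
    out = (df.groupBy("isrc_cd", "upc_cd")
             .agg(F.max("product_no").alias("product_no")))
    out.write.mode("overwrite").parquet("s3://bucket/output/{}".format(day))

days = ["2018-09-01", "2018-09-02"]
threads = [threading.Thread(target=process_day, args=(d,)) for d in days]
for t in threads:
    t.start()
for t in threads:
    t.join()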

Any suggestions would be greatly appreciated.

0 Answers:

No answers yet.