Spark: using lookup inside a map

Time: 2016-03-28 05:12:30

Tags: apache-spark pyspark

I am trying to use lookup inside a map (in Spark, with PySpark) in the way shown below, and I am getting an error.

Is this something Spark does not allow?

>>> rdd1 = sc.parallelize([(1,'a'),(2,'b'),(3,'c'),(4,'d')]).sortByKey()
>>> rdd2 = sc.parallelize([2,4])
>>> rdd = rdd2.map(lambda x: (x, rdd1.lookup(x)))
>>> rdd.collect()

The reason for doing it this way is that in the actual problem I am working on, rdd1 is huge, so a solution like converting it to a dictionary with collectAsMap is not feasible.

Both rdd1 and rdd2 are very large, so joining them is also very slow.
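For reference, a minimal sketch of the collectAsMap route being ruled out here; it pulls all of rdd1 onto the driver and broadcasts it, so it is only viable when rdd1 fits in memory:

>>> lookup_table = sc.broadcast(rdd1.collectAsMap())
>>> rdd = rdd2.map(lambda x: (x, lookup_table.value.get(x)))
>>> rdd.collect()
[(2, 'b'), (4, 'd')]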

Thanks.

The error:

16/03/28 05:02:28 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
16/03/28 05:02:28 INFO DAGScheduler: Stage 1 (sortByKey at <stdin>:1) finished in 0.148 s
16/03/28 05:02:28 INFO DAGScheduler: Job 1 finished: sortByKey at <stdin>:1, took 0.189587 s
>>> rdd2 = sc.parallelize([2,4])
>>> rdd = rdd2.map(lambda x: (x, rdd1.lookup(x)))
>>> rdd.collect()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/python/pyspark/rdd.py", line 676, in collect
    bytesInJava = self._jrdd.collect().iterator()
  File "/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/python/pyspark/rdd.py", line 2107, in _jrdd
    pickled_command = ser.dumps(command)
  File "/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/python/pyspark/serializers.py", line 402, in dumps
    return cloudpickle.dumps(obj, 2)
  File "/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/python/pyspark/cloudpickle.py", line 816, in dumps
    cp.dump(obj)
  File "/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/python/pyspark/cloudpickle.py", line 133, in dump
    return pickle.Pickler.dump(self, obj)
  File "/usr/lib64/python2.6/pickle.py", line 224, in dump
    self.save(obj)
  File "/usr/lib64/python2.6/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/usr/lib64/python2.6/pickle.py", line 562, in save_tuple
    save(element)
  File "/usr/lib64/python2.6/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/python/pyspark/cloudpickle.py", line 254, in save_function
    self.save_function_tuple(obj, [themodule])
  File "/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/python/pyspark/cloudpickle.py", line 304, in save_function_tuple
    save((code, closure, base_globals))
  File "/usr/lib64/python2.6/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/usr/lib64/python2.6/pickle.py", line 548, in save_tuple
    save(element)
  File "/usr/lib64/python2.6/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/usr/lib64/python2.6/pickle.py", line 600, in save_list
    self._batch_appends(iter(obj))
  File "/usr/lib64/python2.6/pickle.py", line 636, in _batch_appends
    save(tmp[0])
  File "/usr/lib64/python2.6/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/python/pyspark/cloudpickle.py", line 249, in save_function
    self.save_function_tuple(obj, modList)
  File "/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/python/pyspark/cloudpickle.py", line 309, in save_function_tuple
    save(f_globals)
  File "/usr/lib64/python2.6/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/python/pyspark/cloudpickle.py", line 174, in save_dict
    pickle.Pickler.save_dict(self, obj)
  File "/usr/lib64/python2.6/pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "/usr/lib64/python2.6/pickle.py", line 686, in _batch_setitems
    save(v)
  File "/usr/lib64/python2.6/pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/python/pyspark/cloudpickle.py", line 650, in save_reduce
    save(state)
  File "/usr/lib64/python2.6/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/python/pyspark/cloudpickle.py", line 174, in save_dict
    pickle.Pickler.save_dict(self, obj)
  File "/usr/lib64/python2.6/pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "/usr/lib64/python2.6/pickle.py", line 681, in _batch_setitems
    save(v)
  File "/usr/lib64/python2.6/pickle.py", line 306, in save
    rv = reduce(self.proto)
  File "/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 304, in get_return_value
py4j.protocol.Py4JError: An error occurred while calling o51.__getnewargs__. Trace:
py4j.Py4JException: Method __getnewargs__([]) does not exist
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:333)
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:342)
    at py4j.Gateway.invoke(Gateway.java:252)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:207)
    at java.lang.Thread.run(Thread.java:745)


>>> 

2 answers:

Answer 0 (score: 1)


> Is this something Spark does not allow?

Yes. Spark does not support nested actions or transformations. Since you have already ruled out join and local data structures, the only option left is to use an external system, such as a database, for the lookups.
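A minimal sketch of that idea, using mapPartitions with Redis purely as an example store (the host, the port, and the assumption that rdd1's pairs have already been loaded into Redis are all hypothetical):

import redis  # assumes the redis-py client is available on the executors

def lookup_partition(keys):
    # Open one connection per partition, rather than one per element.
    r = redis.Redis(host="redis-host", port=6379)  # hypothetical host/port
    for k in keys:
        yield (k, r.get(str(k)))

rdd = rdd2.mapPartitions(lookup_partition)

Batching the keys within each partition (e.g. with mget) would cut the number of round trips further.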

Answer 1 (score: 0)

An RDD cannot be referenced inside an operation on another RDD. For example, given two RDDs, rdd1 and rdd2, you can do rdd1.map(...), but you cannot do rdd1.map(... rdd2 ...). So when you need a more complex operation like this, try to express it with union/join instead.
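For example, the lookup from the question can be expressed as a join (a sketch; the output order of a join is not guaranteed):

>>> rdd2.map(lambda x: (x, None)).join(rdd1).mapValues(lambda v: v[1]).collect()
[(2, 'b'), (4, 'd')]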