pyspark: sorting an RDD by object attribute

Time: 2017-03-20 23:08:40

Tags: python-3.x sorting pyspark rdd

I have the following RDD, named my_rdd, which looks like this:

[FreqSequence(sequence=[['John']], freq=18980), 
 FreqSequence(sequence=[['Mary']], freq=106), 
 FreqSequence(sequence=[['John-Mary']], freq=381), 
 FreqSequence(sequence=[['John-Ann']], freq=158), 
 FreqSequence(sequence=[['Ann']], freq=433)]
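
For context: FreqSequence records like these are typically produced by PrefixSpan in pyspark.mllib.fpm, where FreqSequence is defined as a namedtuple with the fields sequence and freq. A minimal local stand-in to reproduce such an RDD (a sketch only, assuming an existing SparkContext named sc, e.g. in a notebook) could look like:

from collections import namedtuple

# Stand-in for pyspark.mllib.fpm's FreqSequence, which is itself just a
# namedtuple with the fields "sequence" and "freq"
FreqSequence = namedtuple("FreqSequence", ["sequence", "freq"])

# assumes an existing SparkContext named sc
my_rdd = sc.parallelize([
    FreqSequence(sequence=[['John']], freq=18980),
    FreqSequence(sequence=[['Mary']], freq=106),
    FreqSequence(sequence=[['John-Mary']], freq=381),
    FreqSequence(sequence=[['John-Ann']], freq=158),
    FreqSequence(sequence=[['Ann']], freq=433),
])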

I then try to sort it as follows:

new_rdd = my_rdd.sortBy(lambda x: x.freq)
new_rdd.take(5)

but this fails with the following error:

Py4JJavaError                             Traceback (most recent call last)
<ipython-input-15-94c1babd943f> in <module>()
      1 print(my_rdd.take(5))
      2 new_rdd = my_rdd.sortBy(lambda x: x.freq)
----> 3 new_rdd.take(5)

/usr/local/spark-latest/python/pyspark/rdd.py in take(self, num)
   1341 
   1342             p = range(partsScanned, min(partsScanned + numPartsToTry, totalParts))
-> 1343             res = self.context.runJob(self, takeUpToNumLeft, p)
   1344 
   1345             items += res

/usr/local/spark-latest/python/pyspark/context.py in runJob(self, rdd, partitionFunc, partitions, allowLocal)
    963         # SparkContext#runJob.
    964         mappedRDD = rdd.mapPartitions(partitionFunc)
--> 965         port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
    966         return list(_load_from_socket(port, mappedRDD._jrdd_deserializer))
    967 

/usr/local/spark-latest/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1131         answer = self.gateway_client.send_command(command)
   1132         return_value = get_return_value(
-> 1133             answer, self.gateway_client, self.target_id, self.name)
   1134 
   1135         for temp_arg in temp_args:

/usr/local/spark-latest/python/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/usr/local/spark-latest/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    317                 raise Py4JJavaError(
    318                     "An error occurred while calling {0}{1}{2}.\n".
--> 319                     format(target_id, ".", name), value)
    320             else:
    321                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 65.0 failed 4 times, most recent failure: Lost task 0.3 in stage 65.0 (TID 115, ph-hdp-inv-dn01, executor 1): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/data/0/yarn/nm/usercache/phanalytics-test/appcache/application_1489740042194_0048/container_e20_1489740042194_0048_01_000002/pyspark.zip/pyspark/worker.py", line 163, in main
    func, profiler, deserializer, serializer = read_command(pickleSer, infile)
  File "/data/0/yarn/nm/usercache/phanalytics-test/appcache/application_1489740042194_0048/container_e20_1489740042194_0048_01_000002/pyspark.zip/pyspark/worker.py", line 54, in read_command
    command = serializer._read_with_length(file)
  File "/data/0/yarn/nm/usercache/phanalytics-test/appcache/application_1489740042194_0048/container_e20_1489740042194_0048_01_000002/pyspark.zip/pyspark/serializers.py", line 169, in _read_with_length
    return self.loads(obj)
  File "/data/0/yarn/nm/usercache/phanalytics-test/appcache/application_1489740042194_0048/container_e20_1489740042194_0048_01_000002/pyspark.zip/pyspark/serializers.py", line 431, in loads
    return pickle.loads(obj, encoding=encoding)
ImportError: No module named 'UserString'

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:390)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Any idea what is going wrong here? Thanks!

1 Answer:

Answer 0 (score: 0):

Your code is correct. The error you are seeing:

ImportError: No module named 'UserString'

is raised because UserString is no longer a standalone module in Python 3.x; it now lives in the collections module (collections.UserString). This indicates that you are either running an outdated version of PySpark or that one of its dependencies is out of date.
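A quick way to check the environment is to compare versions on the driver and on the executors. This is only a diagnostic sketch, assuming a live SparkContext named sc and PySpark 2.x (where pyspark.__version__ is available):

import sys
import pyspark

# Python and PySpark versions on the driver
print(sys.version)
print(pyspark.__version__)

# Python versions actually used by the executors
def python_version_on_worker(_):
    import sys
    return sys.version

print(sc.range(4, numSlices=4).map(python_version_on_worker).distinct().collect())

Once the driver and executor environments line up, the original call should behave as expected. Note also that sortBy accepts an ascending flag, so sorting by descending frequency is just:

new_rdd = my_rdd.sortBy(lambda x: x.freq, ascending=False)
new_rdd.take(5)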