Error saving/reading with PySpark saveAsSequenceFile

Date: 2016-03-19 00:10:46

Tags: python apache-spark pyspark

I am trying to save and then read back files on HDFS using PySpark's saveAsSequenceFile and am running into a problem. Any suggestions or recommendations would be appreciated.

Thanks

I am using Python 2.6.6 on a Cloudera CDH 5.4 cluster.

I run the following:
 $ pyspark
 >>> c = [(16777216, ('16777216', '16777471', 'oceania', 'australia')),
         (16777472, ('16777472', '16778239', 'asia', 'china')),
         (16778240, ('16778240', '16779007', 'oceania', 'australia')),
         (16779008, ('16779008', '16779263', 'oceania', 'australia'))]
 >>> rdd = sc.parallelize(c)
 >>> rdd.saveAsSequenceFile('hdfs:/user/neustar/junk')

I get the following error:

16/03/19 00:07:11 INFO SparkContext: Starting job: first at SerDeUtil.scala:229
.
.
.
16/03/19 00:07:12 INFO DAGScheduler: Stage 24 (first at PythonRDD.scala:617) 
finished in 0.054 s
16/03/19 00:07:12 INFO DAGScheduler: Job 24 finished: first at PythonRDD.scala:617, took 0.070596 s
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/python/pyspark/rdd.py", line 1239, in saveAsSequenceFile
    path, compressionCodecClass)
  File "/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.saveAsSequenceFile.
: org.apache.spark.SparkException: Data of type [Ljava.lang.Object; cannot be used
    at org.apache.spark.api.python.JavaToWritableConverter.org$apache$spark$api$python$JavaToWritableConverter$$convertToWritable(PythonHadoopUtil.scala:141)
    at org.apache.spark.api.python.JavaToWritableConverter.convert(PythonHadoopUtil.scala:148)
    at org.apache.spark.api.python.JavaToWritableConverter.convert(PythonHadoopUtil.scala:118)
    at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$inferKeyValueTypes(PythonRDD.scala:620)
    at org.apache.spark.api.python.PythonRDD$$anonfun$7.apply(PythonRDD.scala:689)
    at org.apache.spark.api.python.PythonRDD$$anonfun$7.apply(PythonRDD.scala:689)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.api.python.PythonRDD$.saveAsHadoopFile(PythonRDD.scala:688)
    at org.apache.spark.api.python.PythonRDD$.saveAsSequenceFile(PythonRDD.scala:662)
    at org.apache.spark.api.python.PythonRDD.saveAsSequenceFile(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:207)
    at java.lang.Thread.run(Thread.java:745)
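The root cause is visible in the exception: `Data of type [Ljava.lang.Object; cannot be used` is raised by `JavaToWritableConverter`, which can only save keys and values it knows how to map to Hadoop Writable types (e.g. an int to IntWritable, a str to Text). A Python tuple value arrives on the JVM side as a plain `Object[]` array, for which no Writable mapping exists. One common workaround (a sketch, not from the original post) is to flatten each tuple value into a single delimited string before saving, so every pair becomes `(int, str)`:

```python
# Sketch of a workaround (assumption, not from the original post):
# flatten the tuple value into one delimited string so the record
# becomes (int, str), which maps to (IntWritable, Text).

def to_writable_pair(record, sep='\t'):
    """Convert (key, tuple-of-strings) to (key, delimited string)."""
    key, fields = record
    return (key, sep.join(fields))

# In PySpark this would be applied before saving, e.g.:
#   rdd.map(to_writable_pair).saveAsSequenceFile('hdfs:/user/neustar/junk')
# and reversed on read with a matching split:
#   sc.sequenceFile(path).mapValues(lambda v: tuple(v.split('\t')))

c = [(16777216, ('16777216', '16777471', 'oceania', 'australia')),
     (16777472, ('16777472', '16778239', 'asia', 'china'))]
print([to_writable_pair(r) for r in c])
```

If the data only ever needs to be read back from PySpark, `rdd.saveAsPickleFile(path)` avoids the Writable conversion entirely and preserves the nested tuples as-is.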

0 answers:

No answers yet