Unable to convert RDD to DataFrame (RDD with millions of rows)

Date: 2017-01-14 09:28:27

Tags: python csv apache-spark pyspark

I am using Apache Spark 1.6.2.

I have .csv data containing about 8 million rows, and I want to convert it to a DataFrame.

But first I have to turn it into an RDD so I can map over it and extract just the columns I want.

Mapping the RDD works fine, but when it comes to converting the RDD to a DataFrame, Spark throws an error:

Traceback (most recent call last):
  File "C:/Users/Dzaky/Project/TJ-source/source/201512/final1.py", line 38, in <module>
    result_iso = input_iso.map(extract_iso).toDF()
  File "c:\spark\python\lib\pyspark.zip\pyspark\sql\context.py", line 64, in toDF
  File "c:\spark\python\lib\pyspark.zip\pyspark\sql\context.py", line 423, in createDataFrame
  File "c:\spark\python\lib\pyspark.zip\pyspark\sql\context.py", line 310, in _createFromRDD
  File "c:\spark\python\lib\pyspark.zip\pyspark\sql\context.py", line 254, in _inferSchema
  File "c:\spark\python\lib\pyspark.zip\pyspark\rdd.py", line 1315, in first
  File "c:\spark\python\lib\pyspark.zip\pyspark\rdd.py", line 1297, in take
  File "c:\spark\python\lib\pyspark.zip\pyspark\context.py", line 939, in runJob
  File "c:\spark\python\lib\py4j-0.9-src.zip\py4j\java_gateway.py", line 813, in __call__
  File "c:\spark\python\lib\pyspark.zip\pyspark\sql\utils.py", line 45, in deco
  File "c:\spark\python\lib\py4j-0.9-src.zip\py4j\protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.net.SocketException: Connection reset by peer: socket write error

Here is my code:

def extract_iso(line):
    fields = line.split(',')
    # keep the second-to-last and the second field of each row
    return [fields[-2], fields[1]]

input_iso = sc.textFile("data.csv")
result_iso = input_iso.map(extract_iso).toDF()

data.csv has more than 8 million rows, but when I cut the rows down to fewer than 500, the program runs fine.

I don't know whether Spark has some row limit or what; is there any way to convert my RDD?

Or can we map a DataFrame the way we map an RDD?

Additional information:

The data is messy: the total number of columns often differs from one row to the next, which is why I need to map it first. However, the fields I want are always at exactly index [1] and [-2] (the second column and the second-to-last column); only the number of columns between them varies from row to row.
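To illustrate why the varying column count is not a problem for the extraction itself (with made-up sample rows, not the real data.csv), Python's negative indexing counts from the end of the list, so [-2] always lands on the second-to-last field regardless of row width:

```python
# Hypothetical sample rows (not from the real data.csv): column counts differ,
# but index [1] and [-2] always hold the wanted fields.
rows = [
    "id1,alpha,x,y,z,iso1,end",  # 7 columns
    "id2,beta,iso2,end",         # 4 columns
]

def extract_iso(line):
    fields = line.split(',')
    # fields[-2] counts from the end, so the row's width does not matter
    return [fields[-2], fields[1]]

print([extract_iso(r) for r in rows])  # [['iso1', 'alpha'], ['iso2', 'beta']]
```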

Thank you very much for your answers :)

1 Answer:

Answer 0 (score: 4):

The most likely cause is that Spark is trying to infer the schema of the newly created DataFrame: to do that, it runs a job over the first rows of the RDD (the first() / take() calls visible in your traceback), and that inference job is what fails. Try the second way of turning an RDD into a DF: specify the schema yourself and pass it to createDataFrame, for example:

>>> from pyspark.sql.types import *
>>> schema = StructType([StructField('a', StringType()),StructField('b', StringType())])
>>> df = sqlContext.createDataFrame(input_iso.map(extract_iso), schema)