Convert an RDD of dicts to a DataFrame

Date: 2017-06-06 13:58:40

Tags: pyspark spark-dataframe

How do I turn a pipelined RDD of dicts like the following into a DataFrame in pyspark?
[{'ACARS 20170507/20170506085012209001.rcv': 'QU SOUTA8X\r\n.BJSXCXA 060849\r\nM12\r\nFI CX731/AN B-LAN\r\nDT BJS HKG 060849 M63A\r\n-  OFF,V01,CX 731 20170506 1,VHHH,OMDB,0833,0849,----,  600', 'ACARS 20170507/20170502020906017001.rcv': 'QU SOUTA8X\r\n.BJSXCXA 020209\r\nM12\r\nFI KA876/AN B-LAB\r\nDT BJS HKG 020209 M11A\r\n-  OFF,V01,KA 876 20170502 1,VHHH,ZSPD,0149,0208,----,  294', 'ACARS 20170507/20170505050124358002.rcv': 'QU SOUTA8X\r\n.BKKXCXA 050501\r\nCFD\r\nFI CX690/AN B-LAJ\r\nDT BKK XSP 050501 C10A\r\n-  .1/WRN/DBN17D/WN1705050500  261707002SMOKE LAVATORY DET FAULT'}]

3 Answers:

Answer 0 (score: 2)

The following snippet should work:

>>> from pyspark.sql import Row
>>>
>>> data = [{'foo': 'bar', 'hello': 'world'}]
>>> rdd = spark.sparkContext.parallelize(data)
>>> df = rdd.map(lambda x: Row(**x)).toDF()

>>> df.show()
+---+-----+
|foo|hello|
+---+-----+
|bar|world|
+---+-----+
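
As a side note (a minimal sketch, not part of the original answer): the same DataFrame can be built with an explicit schema via createDataFrame, which avoids relying on schema inference from the Row fields:

>>> from pyspark.sql.types import StructType, StructField, StringType
>>>
>>> # explicit schema instead of inferring column names/types from Row(**x)
>>> schema = StructType([
...     StructField("foo", StringType(), True),
...     StructField("hello", StringType(), True),
... ])
>>> df = spark.createDataFrame(rdd.map(lambda x: (x["foo"], x["hello"])), schema)
>>> df.show()
+---+-----+
|foo|hello|
+---+-----+
|bar|world|
+---+-----+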

Answer 1 (score: 0)

Start with:

>>> a = [{'ACARS 20170507/20170506085012209001.rcv': 'QU SOUTA8X\r\n.BJSXCXA 060849\r\nM12\r\nFI CX731/AN B-LAN\r\nDT BJS HKG 060849 M63A\r\n-  OFF,V01,CX 731 20170506 1,VHHH,OMDB,0833,0849,----,  600', 'ACARS 20170507/20170502020906017001.rcv': 'QU SOUTA8X\r\n.BJSXCXA 020209\r\nM12\r\nFI KA876/AN B-LAB\r\nDT BJS HKG 020209 M11A\r\n-  OFF,V01,KA 876 20170502 1,VHHH,ZSPD,0149,0208,----,  294', 'ACARS 20170507/20170505050124358002.rcv': 'QU SOUTA8X\r\n.BKKXCXA 050501\r\nCFD\r\nFI CX690/AN B-LAJ\r\nDT BKK XSP 050501 C10A\r\n-  .1/WRN/DBN17D/WN1705050500  261707002SMOKE LAVATORY DET FAULT'}]
>>> rdd = sc.parallelize(a)

Get an RDD of the keys:

>>> rdd_k = rdd.flatMap(lambda x: x.keys())
>>> rdd_k.take(3)
['ACARS 20170507/20170506085012209001.rcv', 'ACARS 20170507/20170505050124358002.rcv', 'ACARS 20170507/20170502020906017001.rcv']

Get an RDD of the values:
>>> rdd_v = rdd.flatMap(lambda x: x.values())
>>> rdd_v.take(3)
['QU SOUTA8X\r\n.BJSXCXA 060849\r\nM12\r\nFI CX731/AN B-LAN\r\nDT BJS HKG 060849 M63A\r\n-  OFF,V01,CX 731 20170506 1,VHHH,OMDB,0833,0849,----,  600', 'QU SOUTA8X\r\n.BKKXCXA 050501\r\nCFD\r\nFI CX690/AN B-LAJ\r\nDT BKK XSP 050501 C10A\r\n-  .1/WRN/DBN17D/WN1705050500  261707002SMOKE LAVATORY DET FAULT', 'QU SOUTA8X\r\n.BJSXCXA 020209\r\nM12\r\nFI KA876/AN B-LAB\r\nDT BJS HKG 020209 M11A\r\n-  OFF,V01,KA 876 20170502 1,VHHH,ZSPD,0149,0208,----,  294']

Zip the two RDDs together and you get an RDD of tuples, where each tuple is a (key, value) pair from your original dict:

>>> newRdd = rdd_k.zip(rdd_v)
>>> newRdd.first()
('ACARS 20170507/20170506085012209001.rcv', 'QU SOUTA8X\r\n.BJSXCXA 060849\r\nM12\r\nFI CX731/AN B-LAN\r\nDT BJS HKG 060849 M63A\r\n-  OFF,V01,CX 731 20170506 1,VHHH,OMDB,0833,0849,----,  600')
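
As an aside (a sketch, not from the original answer): the same pair RDD can be built in a single pass by flat-mapping over the dict items, which also sidesteps keeping the two zipped RDDs aligned:

>>> # each dict flattens directly into its (key, value) pairs
>>> newRdd = rdd.flatMap(lambda d: d.items())
>>> newRdd.first()
('ACARS 20170507/20170506085012209001.rcv', 'QU SOUTA8X\r\n.BJSXCXA 060849\r\nM12\r\nFI CX731/AN B-LAN\r\nDT BJS HKG 060849 M63A\r\n-  OFF,V01,CX 731 20170506 1,VHHH,OMDB,0833,0849,----,  600')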

Convert to a DataFrame:

>>> df = newRdd.toDF()
>>> df.show()
+--------------------+--------------------+
|                  _1|                  _2|
+--------------------+--------------------+
|ACARS 20170507/20...|QU SOUTA8X
.BJSX...|
|ACARS 20170507/20...|QU SOUTA8X
.BKKX...|
|ACARS 20170507/20...|QU SOUTA8X
.BJSX...|
+--------------------+--------------------+
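
(The embedded \r\n characters in the values are what break the rendered rows above.) As a sketch, the default _1/_2 columns can be given descriptive names by passing them to toDF; the names "filename" and "message" are assumptions, not from the original post:

>>> df = newRdd.toDF(["filename", "message"])
>>> df.printSchema()
root
 |-- filename: string (nullable = true)
 |-- message: string (nullable = true)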

Answer 2 (score: 0)

First create a function that works on a single dict, then apply it to the RDD of dicts.

helpin = [{'ACARS 20170507/20170506085012209001.rcv': 'QU SOUTA8X\r\n.BJSXCXA 060849\r\nM12\r\nFI CX731/AN B-LAN\r\nDT BJS HKG 060849 M63A\r\n-  OFF,V01,CX 731 20170506 1,VHHH,OMDB,0833,0849,----,  600', 'ACARS 20170507/20170502020906017001.rcv': 'QU SOUTA8X\r\n.BJSXCXA 020209\r\nM12\r\nFI KA876/AN B-LAB\r\nDT BJS HKG 020209 M11A\r\n-  OFF,V01,KA 876 20170502 1,VHHH,ZSPD,0149,0208,----,  294', 'ACARS 20170507/20170505050124358002.rcv': 'QU SOUTA8X\r\n.BKKXCXA 050501\r\nCFD\r\nFI CX690/AN B-LAJ\r\nDT BKK XSP 050501 C10A\r\n-  .1/WRN/DBN17D/WN1705050500  261707002SMOKE LAVATORY DET FAULT'}]

from pyspark.sql import SparkSession  # a SparkSession is needed for toDF()
spark = SparkSession(sc)

def helpfunc(dicin):
    # parallelizing a dict yields its keys; pair each key with its value,
    # then convert the resulting pair RDD to a DataFrame
    dicout = sc.parallelize(dicin).map(lambda x: (x, dicin[x])).toDF()
    return dicout


helpdic = helpin[0]
helpfunc(helpdic).show()

When helpin actually is an RDD, use:

helpin.map(lambda x:helpfunc(x))
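
A caveat worth noting (not from the original answer): SparkContext cannot be referenced inside an RDD transformation, so when helpin really is an RDD this last map will fail at runtime, because helpfunc calls sc.parallelize and toDF on the executors. A working sketch instead flattens the dicts into (key, value) pairs and builds a single DataFrame on the driver:

rdd = sc.parallelize(helpin)  # helpin as an RDD of dicts
# one output row per (key, value) pair of each dict; toDF runs on the driver
df = rdd.flatMap(lambda d: list(d.items())).toDF()
df.show()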