Convert a Row into an RDD in pyspark

Date: 2017-03-14 02:03:55

Tags: lambda pyspark

I want to generate a file like the image below using the following dataset. The dataset is the result of filtering a dataframe with these lines:

df1 = df0.rdd.filter(lambda x: 'VS' in x.MeterCategory) \
    .map(lambda x: [x.vId, x.Meters]).take(2)

The dataset of Rows:

[ABCD1234, Row(0=6.0, 10=None, 100=None, 1000=None, 10000=None, 1000000=None, 100000000=None, 10235=None, 1024=None)]
[WXYZ9999, Row(0=40.0, 10=None, 100=None, 1000=None, 10000=None, 1000000=None, 100000000=None, 10235=None, 1024=None)]

https://i.stack.imgur.com/8nUkH.png

I have been trying some of the approaches I found on this forum, but I can't get to that result. Thanks.

2 Answers:

Answer 0: (score: 0)

Using your sample data:

df = sc.parallelize([('ABCD1234', 6.0, 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None'),
                     ('WXYZ9999', 40.0, 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None')]) \
       .toDF(['Id', '0', '10', '100', '1000', '10000', '1000000', '100000000', '10235', '1024'])

You can use the following snippet to pivot the data:

from pyspark.sql import functions as F
from pyspark.sql.types import StringType

# Build a (Key, Value) struct for every column except 'Id', collect them in an
# array, and explode it so each column becomes its own row.
kvp = F.explode(F.array([F.struct(F.lit(c).cast(StringType()).alias("Key"),
                                  F.col(c).cast(StringType()).alias("Value"))
                         for c in df.columns if c != 'Id'])).alias("kvp")
df_pivoted = df.select(['Id'] + [kvp]).select(['Id'] + ["kvp.Key", "kvp.Value"])
df_pivoted.show()

You can output the data to a single CSV by converting the Dataframe to pandas:

df_pivoted.toPandas().to_csv('e:/output.csv', index=False, header=True, sep='|')

This gives the output:

Id|Key|Value
ABCD1234|0|6.0
ABCD1234|10|None
ABCD1234|100|None
ABCD1234|1000|None
ABCD1234|10000|None
ABCD1234|1000000|None
ABCD1234|100000000|None
ABCD1234|10235|None
ABCD1234|1024|None
WXYZ9999|0|40.0
WXYZ9999|10|None
WXYZ9999|100|None
WXYZ9999|1000|None
WXYZ9999|10000|None
WXYZ9999|1000000|None
WXYZ9999|100000000|None
WXYZ9999|10235|None
WXYZ9999|1024|None
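
If the pivoted frame is too large to collect into pandas, a rough alternative sketch (assuming Spark 2.x; the output directory name is hypothetical) is Spark's own CSV writer, which produces a directory of part files rather than a single file:

df_pivoted.coalesce(1) \
    .write.option("sep", "|") \
    .option("header", "true") \
    .csv("e:/output_dir")  # hypothetical path; Spark writes a directory of part files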

Answer 1: (score: 0)

Take a look at this.

First, note that what you are calling df1 is an RDD, not a dataframe.

You can create that RDD from the dataset you mentioned, as shown below.

Note that I prefixed the column names with '_', because purely numeric names cannot be used directly as Row field names.

>>> from pyspark.sql import Row

>>> row1 = Row(_0=6.0, _10=None, _100=None, _1000=None, _10000=None, _1000000=None,
...            _100000000=None, _10235=None, _1024=None)
>>> row2 = Row(_0=40.0, _10=None, _100=None, _1000=None, _10000=None, _1000000=None,
...            _100000000=None, _10235=None, _1024=None)

>>> yourStartDataset = sc.parallelize([
...     ['ABCD1234', row1],
...     ['WXYZ9999', row2]
... ])

Now your dataset looks like this:

>>> yourStartDataset.take(2)

[['ABCD1234',
  Row(_0=6.0, _10=None, _100=None, _1000=None, _10000=None, _1000000=None, _100000000=None, _10235=None, _1024=None)],
 ['WXYZ9999',
  Row(_0=40.0, _10=None, _100=None, _1000=None, _10000=None, _1000000=None, _100000000=None, _10235=None, _1024=None)]]

Now, the following line will do the magic:

>>> yourStartDataset.flatMapValues(lambda v: v.asDict().items()).map(lambda kv: (kv[0], kv[1][0], kv[1][1])).collect()

[('ABCD1234', '_1000000', None),
 ('ABCD1234', '_100000000', None),
 ('ABCD1234', '_100', None),
 ('ABCD1234', '_10000', None),
 ('ABCD1234', '_0', 6.0),
 ('ABCD1234', '_1000', None),
 ('ABCD1234', '_10', None),
 ('ABCD1234', '_10235', None),
 ('ABCD1234', '_1024', None),
 ('WXYZ9999', '_1000000', None),
 ('WXYZ9999', '_100000000', None),
 ('WXYZ9999', '_100', None),
 ('WXYZ9999', '_10000', None),
 ('WXYZ9999', '_0', 40.0),
 ('WXYZ9999', '_1000', None),
 ('WXYZ9999', '_10', None),
 ('WXYZ9999', '_10235', None),
 ('WXYZ9999', '_1024', None)]

Or, if you want only the numeric part of the column names, the following will do that:

>>> yourStartDataset.flatMapValues(lambda v: v.asDict().items()).map(lambda kv: (kv[0], kv[1][0][1:], kv[1][1])).collect()

[('ABCD1234', '1000000', None),
 ('ABCD1234', '100000000', None),
 ('ABCD1234', '100', None),
 ('ABCD1234', '10000', None),
 ('ABCD1234', '0', 6.0),
 ('ABCD1234', '1000', None),
 ('ABCD1234', '10', None),
 ('ABCD1234', '10235', None),
 ('ABCD1234', '1024', None),
 ('WXYZ9999', '1000000', None),
 ('WXYZ9999', '100000000', None),
 ('WXYZ9999', '100', None),
 ('WXYZ9999', '10000', None),
 ('WXYZ9999', '0', 40.0),
 ('WXYZ9999', '1000', None),
 ('WXYZ9999', '10', None),
 ('WXYZ9999', '10235', None),
 ('WXYZ9999', '1024', None)]
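
If you then want to write this out as a file, a minimal sketch (assuming a Spark 2.x session; the column names and output path below are illustrative, not from the original post) is to turn the RDD back into a dataframe and reuse the pandas CSV export from the other answer:

>>> pivotedRdd = yourStartDataset.flatMapValues(lambda v: v.asDict().items()) \
...     .map(lambda kv: (kv[0], kv[1][0][1:], kv[1][1]))
>>> pivotedDf = pivotedRdd.toDF(['Id', 'Key', 'Value'])  # illustrative column names
>>> pivotedDf.toPandas().to_csv('e:/output.csv', index=False, sep='|')  # hypothetical path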

Hope this helps.