Convert csv dict column to rows in pyspark

Date: 2019-12-06 11:05:16

Tags: python apache-spark pyspark pyspark-dataframes

My csv file contains two columns:

  1. id
  2. cbgs (dict key/value pairs enclosed in "")

Sample csv data as it looks in Notepad. Cell B2 contains the json key/value pairs as a string.

id,cbgs
sg:bd1f26e681264baaa4b44083891c886a,"{""060372623011"":166,""060372655203"":70,""060377019021"":34}"
sg:04c7f777f01c4c75bbd9e43180ce811f,"{""060372073012"":7}"
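(Side note: the doubled quotes inside the quoted field are standard CSV escaping, which is why the escape option matters later. A quick way to see what the cbgs cell actually contains is to parse the sample with Python's built-in csv module; this is a minimal sketch using the sample rows above, not Spark code.)

```python
import csv
import io

# The sample file contents from above, with "" escaping inside quoted fields.
sample = (
    'id,cbgs\n'
    'sg:bd1f26e681264baaa4b44083891c886a,'
    '"{""060372623011"":166,""060372655203"":70,""060377019021"":34}"\n'
    'sg:04c7f777f01c4c75bbd9e43180ce811f,"{""060372073012"":7}"\n'
)

# csv.reader unescapes the doubled quotes, leaving a plain json string per cell.
rows = list(csv.reader(io.StringIO(sample)))
print(rows[0])      # → ['id', 'cbgs']
print(rows[2][1])   # → {"060372073012":7}
```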

Now I am trying to transform it as below:

id,cbgs,value
sg:bd1f26e681264baaa4b44083891c886a,060372623011,166
sg:bd1f26e681264baaa4b44083891c886a,060372655203,70
sg:bd1f26e681264baaa4b44083891c886a,060377019021,34
sg:04c7f777f01c4c75bbd9e43180ce811f,060372073012,7
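The reshaping itself is "one output row per key/value pair in each json map". Outside Spark, the same logic can be sketched in plain Python with the sample rows above (a minimal illustration of the target shape, not the Spark solution itself):

```python
import json

# (id, raw json cell) pairs taken from the sample csv above.
rows = [
    ("sg:bd1f26e681264baaa4b44083891c886a",
     '{"060372623011":166,"060372655203":70,"060377019021":34}'),
    ("sg:04c7f777f01c4c75bbd9e43180ce811f",
     '{"060372073012":7}'),
]

# One output row per (key, value) pair in each parsed map.
exploded = [(rid, key, value)
            for rid, cell in rows
            for key, value in json.loads(cell).items()]

for row in exploded:
    print(row)
# → ('sg:bd1f26e681264baaa4b44083891c886a', '060372623011', 166) ... 4 rows in total
```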

What I have tried

1. Attempt 1

from pyspark.sql.functions import udf, explode
import json
fifa_df = spark.read.csv("D:\\1. Work\\Safegraph\\Sample Files\\Los Angels\\csv.csv", inferSchema = True, header = True)
fifa_df.printSchema()
df2.select("item",explode(parse("cbgs")).alias("recom_item","recom_cnt")).show()

Error message:

cannot resolve '`item`' given input columns: [id, cbgs, recom_item, recom_cnt];;

I tried the following code as suggested by DrChess, but it gave an empty result as output.

fifa_df.withColumn("cbgs", F.from_json("cbgs", T.MapType(T.StringType(), T.IntegerType()))).select("id", F.explode(["visitor_home_cbgs"]).alias('cbgs', 'value')).show()
+------------------+----+-----+
|safegraph_place_id|cbgs|value|
+------------------+----+-----+
+------------------+----+-----+

2 Answers:

Answer 0 (score: 2)

You need to first parse the json into a Map<String, Integer> and then explode the map. You can do it like this:

import pyspark.sql.types as T
import pyspark.sql.functions as F

...

df2.withColumn("cbgs", F.from_json("cbgs", T.MapType(T.StringType(), T.IntegerType()))).select("id", F.explode("cbgs").alias('cbgs', 'value')).show()

Answer 1 (score: 2)

Here is what I followed. It involves only string handling operations, not complex datatype handling.

  1. Read the source csv file with the escape option set to ":

     df=spark.read.format('csv').option('header','True').option('escape','"')

+-----------------------------------+--------------------------------------------------------+
|id                                 |cbgs                                                    |
+-----------------------------------+--------------------------------------------------------+
|sg:bd1f26e681264baaa4b44083891c886a|{"060372623011":166,"060372655203":70,"060377019021":34}|
|sg:04c7f777f01c4c75bbd9e43180ce811f|{"060372073012":7}                                      |
+-----------------------------------+--------------------------------------------------------+
  2. The second column is loaded as a string, not as a map. Now split it:

     df=df.withColumn('cbgs',split(df['cbgs'],','))
+-----------------------------------+------------------------------------------------------------+
|id                                 |cbgs                                                        |
+-----------------------------------+------------------------------------------------------------+
|sg:bd1f26e681264baaa4b44083891c886a|[{"060372623011":166, "060372655203":70, "060377019021":34}]|
|sg:04c7f777f01c4c75bbd9e43180ce811f|[{"060372073012":7}]                                        |
+-----------------------------------+------------------------------------------------------------+

  3. Then explode:

df=df.withColumn('cbgs',explode(df['cbgs']))

+-----------------------------------+-------------------+
|id                                 |cbgs               |
+-----------------------------------+-------------------+
|sg:bd1f26e681264baaa4b44083891c886a|{"060372623011":166|
|sg:bd1f26e681264baaa4b44083891c886a|"060372655203":70  |
|sg:bd1f26e681264baaa4b44083891c886a|"060377019021":34} |
|sg:04c7f777f01c4c75bbd9e43180ce811f|{"060372073012":7} |
+-----------------------------------+-------------------+
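(The split/explode output above can be checked outside Spark: Python's str.split on the raw cell reproduces the same ragged fragments, which is why the answer then reaches for a regex rather than json parsing. A quick sanity check, not Spark code:)

```python
# Raw cbgs cell from the sample data, before any cleanup.
cell = '{"060372623011":166,"060372655203":70,"060377019021":34}'

# Splitting on ',' leaves stray braces and quotes on each fragment.
fragments = cell.split(',')
print(fragments)
# → ['{"060372623011":166', '"060372655203":70', '"060377019021":34}']
```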
  4. Extract the key and value from the cbgs column using regex:

     df=df.select(df['id'],regexp_extract(df['cbgs'],'(\d+)":(\d+)',1).alias('cbgs'),regexp_extract(df['cbgs'],'(\d+)":(\d+)',2).alias('value'))
+-----------------------------------+------------+-----+
|id                                 |cbgs        |value|
+-----------------------------------+------------+-----+
|sg:bd1f26e681264baaa4b44083891c886a|060372623011|166  |
|sg:bd1f26e681264baaa4b44083891c886a|060372655203|70   |
|sg:bd1f26e681264baaa4b44083891c886a|060377019021|34   |
|sg:04c7f777f01c4c75bbd9e43180ce811f|060372073012|7    |
+-----------------------------------+------------+-----+
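(The same regex can be exercised with Python's re module on the fragments produced by the explode step, to confirm it tolerates the leftover braces and quotes; a minimal check, not Spark code:)

```python
import re

# Regex from the answer: first capture = census block key, second = count.
pattern = r'(\d+)":(\d+)'

# Fragments as they look after the split/explode steps above.
fragments = ['{"060372623011":166', '"060372655203":70',
             '"060377019021":34}', '{"060372073012":7}']

pairs = [re.search(pattern, frag).groups() for frag in fragments]
print(pairs)
# → [('060372623011', '166'), ('060372655203', '70'), ('060377019021', '34'), ('060372073012', '7')]
```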
  5. Write to csv.