PySpark: split a DataFrame column n times

Date: 2017-11-06 21:39:53

Tags: python apache-spark pyspark spark-dataframe

I'm looking for a way to split a Spark DataFrame column n times, the way Python's string split method does with a maxsplit argument.

I have log files with 11 million+ lines each. I need to split the DataFrame on " " (space) exactly 3 times, because the rest of the data also contains spaces, so splitting on every space would make a mess; the request.useragent field is what breaks the split.

2017-09-24T00:17:01+00:00 dev-lb01 proxy[49]: {"backend_connect_time_ms":0,"request.useragent":"Mozilla\/5.0 (Linux; Android 5.1; ASUS_Z00VD Build\/LMY47I; wv) AppleWebKit\/537.36 (KHTML, like Gecko) Version\/4.0 Chrome\/43.0.235","resp.code":304,"retries_count":0,"session_duration_ms":979,"srv_conn_count":31,"srv_queue_count":0,"termination_state":"--","timestamp":1506212220}

Desired output:

date                        host       app         json
2017-09-24T00:17:01+00:00 | dev-lb01 | proxy[49]: | {"backend_connect_time_ms":0,"request.useragent":"Mozilla\/5.0 (Linux; Android 5.1; ASUS_Z00VD Build\/LMY47I; wv) AppleWebKit\/537.36 (KHTML, like Gecko) Version\/4.0 Chrome\/43.0.235","resp.code":304,"retries_count":0,"session_duration_ms":979,"srv_conn_count":31,"srv_queue_count":0,"termination_state":"--","timestamp":1506212220}

I considered converting to a pandas DataFrame, but memory consumption would be a problem. I'm also trying to avoid an rdd.map / collect(), splitting with Python string methods, and converting back to a DataFrame, since that carries a lot of overhead.
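For context (not part of the original question), a minimal sketch of how such a log might be loaded into a single-column Spark DataFrame; the file path and the column name my_str_col are assumptions chosen to match the answer below:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# spark.read.text produces one string column named "value" per log line;
# "logs/proxy.log" is a placeholder path.
df = spark.read.text("logs/proxy.log").withColumnRenamed("value", "my_str_col")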

1 Answer:

Answer 0 (score: 1)

This can be solved by splitting on \\s(?![^\\{]*\\}) instead of on a plain space, i.e. matching only whitespace that is not inside the JSON's curly braces. For example:

import pyspark.sql.functions

# Split only on whitespace that is not followed by a closing brace without
# an opening one, i.e. skip spaces inside the JSON object.
split_col = pyspark.sql.functions.split(df['my_str_col'], '\\s(?![^\\{]*\\})')
df = (df
      .withColumn('date', split_col.getItem(0))
      .withColumn('host', split_col.getItem(1))
      .withColumn('app', split_col.getItem(2))
      .withColumn('json', split_col.getItem(3)))
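
On Spark 3.0 and later, split also accepts a limit argument, so another option is to split on a plain space at most 3 times (limit=4 parts), which keeps the JSON payload intact. This is a sketch under that version assumption, not part of the original answer:

from pyspark.sql import functions as F

# limit=4 caps the result at 4 elements, so everything after the third
# space (the JSON, which itself contains spaces) stays in the last element.
parts = F.split(F.col('my_str_col'), ' ', 4)
df = (df
      .withColumn('date', parts.getItem(0))
      .withColumn('host', parts.getItem(1))
      .withColumn('app', parts.getItem(2))
      .withColumn('json', parts.getItem(3)))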