Is there a more efficient way to write the following PySpark code?

Date: 2018-10-19 01:45:59

Tags: xml pyspark apache-spark-sql

I have successfully written a small PySpark script that retrieves and organizes data from a large .xml file. As a newcomer to PySpark, I would like to know whether there is a better way to write the following code:

from pyspark.sql import SparkSession
from pyspark.sql.functions import monotonically_increasing_id

### xml file from https://wit3.fbk.eu/
spark = SparkSession.builder.getOrCreate()
df = (spark.read.format("com.databricks.spark.xml")
      .option("rowTag", "transcription")
      .load("ted_en-20160408.xml"))

# Pull the two struct fields into separate DataFrames, then stitch them
# back together via a generated row id.
df_values = df.select("seekvideo._VALUE").withColumn("id", monotonically_increasing_id())
df_id = df.select("seekvideo._id").withColumn("id", monotonically_increasing_id())
result = df_values.join(df_id, "id", "outer").drop("id")
answer = result.toPandas()

# Build {talk index: [(timestamp, caption), ...]} for each TED talk.
transcription = dict()
for talk in range(len(answer)):  # was range(len(ted)); ted was undefined
    if not answer._id.iloc[talk]:  # skip talks with no seekvideo entries
        continue
    transcription[talk] = list(zip(answer._id.iloc[talk], answer._VALUE.iloc[talk]))

where df has the format:

DataFrame[_corrupt_record: string, seekvideo: array<struct<_VALUE:string,_id:bigint>>]

transcription is a dictionary, keyed by position, holding the transcription of each TED talk. For example, transcription[0] has the format:

[(800, u'When I moved to Harare in 1985,'),
 (4120,
  u"social justice was at the core of Zimbabwe's national health policy."),
 (8920, u'The new government emerged from a long war of independence'),
 (12640, u'and immediately proclaimed a socialist agenda:'),
 (15480, u'health care services, primary education'),
...
]
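One thing I noticed while writing this: since `_id` and `_VALUE` are fields of the same `seekvideo` structs, a single `df.select("seekvideo._id", "seekvideo._VALUE")` might pull both already-aligned lists in one pass, with no `monotonically_increasing_id` join at all. A minimal pure-Python sketch of the dictionary-building step under that assumption, using toy rows in place of the real `toPandas()` output:

```python
# Toy stand-in for the pandas frame: each row is one talk, carrying the
# parallel lists that spark-xml extracts from the seekvideo structs.
# Because _id and _VALUE come from the same structs, the lists are
# already aligned row by row.
rows = [
    {"_id": [800, 4120], "_VALUE": ["line one", "line two"]},
    {"_id": None, "_VALUE": None},  # talk with no transcription
    {"_id": [100], "_VALUE": ["only line"]},
]

# Build {talk index: [(timestamp, caption), ...]}, skipping empty talks.
transcription = {}
for talk, row in enumerate(rows):
    if not row["_id"]:
        continue
    transcription[talk] = list(zip(row["_id"], row["_VALUE"]))

print(transcription[0])  # [(800, 'line one'), (4120, 'line two')]
```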

I would appreciate any suggestions you may have. Thanks!

0 Answers:

No answers