I'm new to Spark and Scala, so please bear with me. I have a text file in the following format:
328;ADMIN HEARNG;[street#939 W El Camino,city#Chicago,state#IL]
I am already able to create the RDD using the sc.textFile command, and I can process each section with the following:
val department_record = department_rdd.map(record => record.split(";"))
However, as you can see, the third element is a nested key/value list, and so far I have not been able to work with it. What I'm looking for is a way to convert the data above into an RDD that looks like:
(328, ADMIN HEARNG, 939 W El Camino, Chicago, IL)
Thanks for any help.
Answer 0 (score: 0)
You can split the address field on "," into an array, strip off the surrounding brackets, and then split again on "#" to pull out the address components you need, as shown below:
val department_rdd = sc.parallelize(Seq(
"328;ADMIN HEARNG;[street#939 W El Camino,city#Chicago,state#IL]",
"400;ADMIN HEARNG;[street#800 First Street,city#San Francisco,state#CA]"
))
val department_record = department_rdd.
  map(_.split(";")).
  map { case Array(id, name, address) =>
    // strip the enclosing brackets, split the address on "," and each piece on "#"
    val addressArr = address.split(",").
      map(_.replaceAll("^\\[|\\]$", "").split("#"))
    (id, name, addressArr(0)(1), addressArr(1)(1), addressArr(2)(1))
  }
department_record.collect
// res1: Array[(String, String, String, String, String)] = Array(
// (328,ADMIN HEARNG,939 W El Camino,Chicago,IL),
// (400,ADMIN HEARNG,800 First Street,San Francisco,CA)
// )
If you want a DataFrame, simply apply toDF():
department_record.toDF("id", "name", "street", "city", "state").show
// +---+------------+----------------+-------------+-----+
// | id| name| street| city|state|
// +---+------------+----------------+-------------+-----+
// |328|ADMIN HEARNG| 939 W El Camino| Chicago| IL|
// |400|ADMIN HEARNG|800 First Street|San Francisco| CA|
// +---+------------+----------------+-------------+-----+
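Note that sc.parallelize above only stands in for your sample input; with the actual file, the same parsing should work along these lines (departments.txt is a hypothetical path):
// hypothetical input path; the parsing is identical to the parallelize example above
val department_record = sc.textFile("departments.txt").
  map(_.split(";")).
  map { case Array(id, name, address) =>
    val addressArr = address.split(",").
      map(_.replaceAll("^\\[|\\]$", "").split("#"))
    (id, name, addressArr(0)(1), addressArr(1)(1), addressArr(2)(1))
  }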
Answer 1 (score: 0)
DataFrame solution:
scala> val df = Seq(("328;ADMIN HEARNG;[street#939 W El Camino,city#Chicago,state#IL]"),
| ("400;ADMIN HEARNG;[street#800 First Street,city#San Francisco,state#CA]")).toDF("dept")
df: org.apache.spark.sql.DataFrame = [dept: string]
scala> val df2 = df.withColumn("arr", split('dept, ";")).withColumn("address", split(regexp_replace('arr(2), "\\[|\\]", ""), "#"))
df2: org.apache.spark.sql.DataFrame = [dept: string, arr: array<string> ... 1 more field]
scala> df2.select('arr(0) as "id", 'arr(1) as "name", split('address(1), ",")(0) as "street", split('address(2), ",")(0) as "city", 'address(3) as "state").show
+---+------------+----------------+-------------+-----+
| id| name| street| city|state|
+---+------------+----------------+-------------+-----+
|328|ADMIN HEARNG| 939 W El Camino| Chicago| IL|
|400|ADMIN HEARNG|800 First Street|San Francisco| CA|
+---+------------+----------------+-------------+-----+
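To start from the text file rather than a hard-coded Seq, the same column expressions should work on top of spark.read.textFile. A rough sketch, assuming Spark 2.x and a hypothetical path departments.txt; the imports are spelled out in case this runs outside spark-shell:
import org.apache.spark.sql.functions.{split, regexp_replace}
import spark.implicits._

// spark.read.textFile yields a Dataset[String]; rename its single column to "dept"
val df = spark.read.textFile("departments.txt").toDF("dept")
val df2 = df.withColumn("arr", split('dept, ";")).
  withColumn("address", split(regexp_replace('arr(2), "\\[|\\]", ""), "#"))
df2.select('arr(0) as "id", 'arr(1) as "name",
  split('address(1), ",")(0) as "street",
  split('address(2), ",")(0) as "city",
  'address(3) as "state").show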