rdd=sc.textFile(json or xml)
rdd.collect()
[u'{', u' "glossary": {', u' "title": "example glossary",', u'\t\t"GlossDiv": {', u' "title": "S",', u'\t\t\t"GlossList": {', u' "GlossEntry": {', u' "ID": "SGML",', u'\t\t\t\t\t"SortAs": "SGML",', u'\t\t\t\t\t"GlossTerm": "Standard Generalized Markup Language",', u'\t\t\t\t\t"Acronym": "SGML",', u'\t\t\t\t\t"Abbrev": "ISO 8879:1986",', u'\t\t\t\t\t"GlossDef": {', u' "para": "A meta-markup language, used to create markup languages such as DocBook.",', u'\t\t\t\t\t\t"GlossSeeAlso": ["GML", "XML"]', u' },', u'\t\t\t\t\t"GlossSee": "markup"', u' }', u' }', u' }', u' }', u'}', u'']
But the output I want is each whole JSON record on a single line, like this:
{"glossary": {"title": "example glossary","GlossDiv": {"title": "S","GlossList":.....}}
Answer 0: (score: 4)
I would recommend using Spark SQL's JSON support and then calling toJSON when you save the result (see https://spark.apache.org/docs/latest/sql-programming-guide.html#json-datasets):
val input = sqlContext.jsonFile(path)
val output = input...
output.toJSON.saveAsTextFile(outputPath)
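Since the question itself uses PySpark, here is a rough sketch of the same flow in Python (Spark 1.x API); the paths and the commented-out field selection are only placeholders, not something from the original post:

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext("local", "JsonOneRecordPerLine")
sqlContext = SQLContext(sc)
# jsonFile infers a schema from the JSON input
df = sqlContext.jsonFile("/path/to/input.json")
df.printSchema()
# any transformations would go here, e.g. df.select("glossary.title")
# toJSON() returns an RDD of strings, one JSON record per line
df.toJSON().saveAsTextFile("/path/to/output")
sc.stop()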
However, if your JSON records can't be parsed by Spark SQL (because of the multi-line problem or for some other reason), we can take an example from the Learning Spark book (I'm slightly biased, being a co-author, of course) and modify it to use wholeTextFiles:
import com.fasterxml.jackson.databind.{DeserializationFeature, ObjectMapper}
import com.fasterxml.jackson.module.scala.{DefaultScalaModule, ScalaObjectMapper}

case class Person(name: String, lovesPandas: Boolean)

// Read the input and throw away the file names
val input = sc.wholeTextFiles(inputFile).map(_._2)
// Parse it into a specific case class. We use mapPartitions because:
// (a) ObjectMapper is not serializable, so we would either have to create a singleton
//     on the driver and send the data back through it, or let each task create its own
//     ObjectMapper, which is expensive inside a plain map.
// (b) Creating one ObjectMapper per partition with mapPartitions solves both the
//     serialization problem and the object-creation cost.
val result = input.mapPartitions(records => {
  // mapper object created on each executor node
  val mapper = new ObjectMapper with ScalaObjectMapper
  mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
  mapper.registerModule(DefaultScalaModule)
  // We use flatMap to handle errors: return None if we hit a bad record
  // and Some(_) if everything parsed fine.
  records.flatMap(record => {
    try {
      Some(mapper.readValue(record, classOf[Person]))
    } catch {
      case e: Exception => None
    }
  })
}, true)
// ObjectMapper is not serializable, so build another one per partition when writing out
result.filter(_.lovesPandas).mapPartitions(people => {
  val mapper = new ObjectMapper with ScalaObjectMapper
  mapper.registerModule(DefaultScalaModule)
  people.map(mapper.writeValueAsString(_))
}).saveAsTextFile(outputFile)
In Python:
from pyspark import SparkContext
import json
import sys

if __name__ == "__main__":
    if len(sys.argv) != 4:
        print "Error usage: LoadJson [sparkmaster] [inputfile] [outputfile]"
        sys.exit(-1)
    master = sys.argv[1]
    inputFile = sys.argv[2]
    outputFile = sys.argv[3]
    sc = SparkContext(master, "LoadJson")
    # wholeTextFiles gives (filename, contents) pairs; keep only the contents
    input = sc.wholeTextFiles(inputFile).map(lambda pair: pair[1])
    # each whole file is parsed as a single JSON document
    data = input.map(lambda x: json.loads(x))
    data.filter(lambda x: 'lovesPandas' in x and x['lovesPandas']).map(
        lambda x: json.dumps(x)).saveAsTextFile(outputFile)
    sc.stop()
    print "Done!"
Answer 1: (score: 1)
Use sc.wholeTextFiles() instead.
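For example, a minimal PySpark sketch of that approach (the paths are placeholders): wholeTextFiles returns (filename, fileContents) pairs, so each multi-line JSON file arrives as one string that json.loads can parse.

import json
from pyspark import SparkContext

sc = SparkContext("local", "WholeTextFilesJson")
# one (path, contents) pair per file, so pretty-printed JSON is not split across lines
pairs = sc.wholeTextFiles("/path/to/json/dir")
records = pairs.map(lambda kv: json.loads(kv[1]))
# re-serialize each record as a single line of JSON
records.map(json.dumps).saveAsTextFile("/path/to/output")
sc.stop()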
Answer 2: (score: 0)
Also take a look at sqlContext.jsonFile:
https://spark.apache.org/docs/1.3.1/sql-programming-guide.html#json-datasets