SparkSQL: ignoring invalid JSON files

Time: 2014-11-20 09:25:30

Tags: apache-spark apache-spark-sql

I am using SparkSQL to load a bunch of JSON files, but some of them have problems.

I would like to keep processing the other files and ignore the bad ones. How can I do that?

I tried using a try-catch, but it still fails. For example:

try {
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    import sqlContext._

    val jsonFiles = sqlContext.jsonFile("/requests.loading")
} catch {
    case _: Throwable => // Catching all exceptions and not doing anything with them
}

It fails with:

14/11/20 01:20:44 INFO scheduler.TaskSetManager: Starting task 3065.0 in stage 1.0 (TID 6150, HDdata2, NODE_LOCAL, 1246 bytes)
14/11/20 01:20:44 WARN scheduler.TaskSetManager: Lost task 3027.1 in stage 1.0 (TID 6130, HDdata2): com.fasterxml.jackson.core.JsonParseException: Unexpected end-of-input: was expecting closing quote for a string value
 at [Source: java.io.StringReader@753ab9f1; line: 1, column: 1805]
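
As far as I can tell from the log, the parse exception is thrown inside the Spark tasks that scan the files, so catching on the driver only swallows the overall job failure and leaves nothing to work with. Wrapping the call in scala.util.Try (a sketch below; "requests" is just an example table name) behaves the same way:

import scala.util.Try

// The load itself scans every file, so a broken record still aborts the whole job;
// Try only turns the driver-side exception into a Failure, it cannot skip bad files.
val maybeJson = Try(sqlContext.jsonFile("/requests.loading"))
maybeJson.foreach(_.registerTempTable("requests")) // never reached when the load aborts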

1 answer:

Answer 0 (score: 1)

If you are using Spark 1.2, Spark SQL will handle those broken JSON records for you. Here is an example...

// requests.loading has some broken records
val jsonFiles = sqlContext.jsonFile("/requests.loading")
// Look at the schema of jsonFiles, you will see a new column called "_corrupt_record", which holds all broken JSON records
// jsonFiles.printSchema
// Register jsonFiles as a table
jsonFiles.registerTempTable("jsonTable")
// To query all normal records
sqlContext.sql("SELECT * FROM jsonTable WHERE _corrupt_record IS NULL")
// To query all broken JSON records
sqlContext.sql("SELECT _corrupt_record FROM jsonTable WHERE _corrupt_record IS NOT NULL")