Spark 2.0 reads JSON with quotes in keys into a DataFrame - behavior differs from Spark 1.6... is this a bug?

Asked: 2016-08-10 19:56:55

Tags: json apache-spark pyspark spark-dataframe

We have the misfortune of dealing with messy incoming JSON data, and we have found that Spark 2.0 (pyspark) handles quotes in JSON keys differently than Spark 1.6 did.

Take the following as a sample file (sample.json):

{"event":"abc"}
{"event":"xyz","otherdata[\"this.is.ugly\"]":"value1"}

In Spark 1.6.2 we can run the following and get a result:

from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

conf = SparkConf()
conf.setAppName('temp_quotes')

sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

data = sqlContext.read.json("sample.json")
data.printSchema()

The result is:

root
 |-- event: string (nullable = true)
 |-- otherdata["this.is.ugly"]: string (nullable = true)

We can see the data when we call show:

data.show(2)

+-----+-------------------------+
|event|otherdata["this.is.ugly"]|
+-----+-------------------------+
|  abc|                     null|
|  xyz|                   value1|
+-----+-------------------------+

However, running the same code in Spark 2.0 prints the same schema:

from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

conf = SparkConf()
conf.setAppName('temp_quotes')

sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

data = sqlContext.read.json("sample.json")
data.printSchema()

root
 |-- event: string (nullable = true)
 |-- otherdata["this.is.ugly"]: string (nullable = true)

but show fails:

data.show(2)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/spark/python/pyspark/sql/dataframe.py", line 287, in show
    print(self._jdf.showString(n, truncate))
  File "/usr/lib/spark/python/lib/py4j-0.10.1-src.zip/py4j/java_gateway.py", line 933, in __call__
  File "/usr/lib/spark/python/pyspark/sql/utils.py", line 69, in deco
    raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: u'Unable to resolve otherdata["this.is.ugly"] given [event, otherdata["this.is.ugly"]];'

Is this a bug, or am I missing a parameter in Spark 2.0?

1 Answer:

Answer 0 (score: 3):

I believe this is addressed by https://issues.apache.org/jira/browse/SPARK-16698 (dots in JSON keys). The fix is scheduled for release in 2.0.1.

(I don't have enough reputation to post a comment.)
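
For anyone stuck on 2.0.0 until that fix ships, one possible stopgap (not part of the original answer, just a sketch) is to rename any column whose name contains a dot before calling show, since withColumnRenamed matches the existing name as a plain string rather than resolving it through the analyzer:

from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

conf = SparkConf()
conf.setAppName('temp_quotes_workaround')

sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

data = sqlContext.read.json("sample.json")

# Replace dots in column names so the Spark 2.0.0 analyzer can resolve them.
# Assumes the sanitized names do not collide with existing column names.
renamed = data
for old_name in data.columns:
    new_name = old_name.replace(".", "_")
    if new_name != old_name:
        renamed = renamed.withColumnRenamed(old_name, new_name)

renamed.printSchema()
renamed.show(2)

Note that this changes the column names for everything downstream (e.g. otherdata["this.is.ugly"] becomes otherdata["this_is_ugly"]), so it is only a workaround, not a substitute for the actual fix.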