Reading a nested JSON file in PySpark

Date: 2019-09-05 18:47:34

Tags: json pyspark

I want to create a PySpark DataFrame from a JSON file in HDFS.

The JSON file has the following content:

{
  "Product": {
    "0": "Desktop Computer",
    "1": "Tablet",
    "2": "iPhone",
    "3": "Laptop"
  },
  "Price": {
    "0": 700,
    "1": 250,
    "2": 800,
    "3": 1200
  }
}

Then I read this file using pyspark 2.4.4:

df = spark.read.json("/path/file.json")

So I get a result like this:

df.show(truncate=False)
+---------------------+---------------------------------+
|Price                |Product                          |
+---------------------+---------------------------------+
|[700, 250, 800, 1200]|[Desktop, Tablet, Iphone, Laptop]|
+---------------------+---------------------------------+

But I would like a DataFrame with the following structure:

+-------+--------+
|Price  |Product |
+-------+--------+
|700    |Desktop | 
|250    |Tablet  |
|800    |Iphone  |
|1200   |Laptop  |
+-------+--------+

How can I get a DataFrame with the above structure using PySpark?

I tried using explode with df.select(explode("Price")), but I get the following error:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
/usr/lib/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:

/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:

Py4JJavaError: An error occurred while calling o688.select.
: org.apache.spark.sql.AnalysisException: cannot resolve 'explode(`Price`)' due to data type mismatch: input to function explode should be array or map type, not struct<0:bigint,1:bigint,2:bigint,3:bigint>;;
'Project [explode(Price#107) AS List()]
+- LogicalRDD [Price#107, Product#108], false

    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:97)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:89)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:95)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:95)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:107)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:107)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:106)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:118)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1$1.apply(QueryPlan.scala:122)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:122)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$2.apply(QueryPlan.scala:127)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:127)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:95)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:89)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:84)
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:84)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:92)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withPlan(Dataset.scala:3301)
    at org.apache.spark.sql.Dataset.select(Dataset.scala:1312)
    at sun.reflect.GeneratedMethodAccessor47.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)


During handling of the above exception, another exception occurred:

AnalysisException                         Traceback (most recent call last)
<ipython-input-46-463397adf153> in <module>
----> 1 df.select(explode("Price"))

/usr/lib/spark/python/pyspark/sql/dataframe.py in select(self, *cols)
   1200         [Row(name=u'Alice', age=12), Row(name=u'Bob', age=15)]
   1201         """
-> 1202         jdf = self._jdf.select(self._jcols(*cols))
   1203         return DataFrame(jdf, self.sql_ctx)
   1204 

/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

/usr/lib/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     67                                              e.java_exception.getStackTrace()))
     68             if s.startswith('org.apache.spark.sql.AnalysisException: '):
---> 69                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
     70             if s.startswith('org.apache.spark.sql.catalyst.analysis'):
     71                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)

AnalysisException: "cannot resolve 'explode(`Price`)' due to data type mismatch: input to function explode should be array or map type, not struct<0:bigint,1:bigint,2:bigint,3:bigint>;;\n'Project [explode(Price#107) AS List()]\n+- LogicalRDD [Price#107, Product#108], false\n"

3 Answers:

Answer 0 (score: 1)

Recreating your DataFrame:

from pyspark.sql import functions as F

df = spark.read.json("./row.json") 
df.printSchema()
#root
# |-- Price: struct (nullable = true)
# |    |-- 0: long (nullable = true)
# |    |-- 1: long (nullable = true)
# |    |-- 2: long (nullable = true)
# |    |-- 3: long (nullable = true)
# |-- Product: struct (nullable = true)
# |    |-- 0: string (nullable = true)
# |    |-- 1: string (nullable = true)
# |    |-- 2: string (nullable = true)
# |    |-- 3: string (nullable = true)

As shown in the printSchema output above, your Price and Product columns are structs, so explode will not work: it requires an ArrayType or MapType as input.

First, convert the structs to arrays using the .* notation, as shown in Querying Spark SQL DataFrame with complex types:

df = df.select(
    F.array(F.expr("Price.*")).alias("Price"),
    F.array(F.expr("Product.*")).alias("Product")
)

df.printSchema()

#root
# |-- Price: array (nullable = false)
# |    |-- element: long (containsNull = true)
# |-- Product: array (nullable = false)
# |    |-- element: string (containsNull = true)

Now, since you are using Spark 2.4+, you can use arrays_zip to zip the Price and Product arrays together before calling explode:

df.withColumn("price_product", F.explode(F.arrays_zip("Price", "Product")))\
    .select("price_product.Price", "price_product.Product")\
    .show()
#+-----+----------------+
#|Price|         Product|
#+-----+----------------+
#|  700|Desktop Computer|
#|  250|          Tablet|
#|  800|          iPhone|
#| 1200|          Laptop|
#+-----+----------------+

For older versions of Spark, before arrays_zip, you can explode each column separately and join the results back together, as sketched below.
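A minimal sketch of that older-Spark approach, assuming the Price and Product array columns created above (this is an illustrative stand-in, not the original answer's snippet): posexplode emits each element together with its position, so the two exploded columns can be joined back on that position.

from pyspark.sql import functions as F

# Explode each array column together with the element position
prices = df.select(F.posexplode("Price").alias("pos", "Price"))
products = df.select(F.posexplode("Product").alias("pos", "Product"))

# Join on the position and drop it to get one (Price, Product) row per element
prices.join(products, on="pos").drop("pos").show()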

Answer 1 (score: 1)

For Spark versions without arrays_zip, we can also do it this way:

First, read the JSON file into a DataFrame:
df=spark.read.json("your_json_file.json")
df.show(truncate=False)

+---------------------+------------------------------------------+
|Price                |Product                                   |
+---------------------+------------------------------------------+
|[700, 250, 800, 1200]|[Desktop Computer, Tablet, iPhone, Laptop]|
+---------------------+------------------------------------------+

Next, expand the structs into arrays:

from pyspark.sql import functions as F

df = df.withColumn('prc_array', F.array(F.expr('Price.*')))
df = df.withColumn('prod_array', F.array(F.expr('Product.*')))

Then create a map between the two arrays:

df = df.withColumn('prc_prod_map', F.map_from_arrays('prc_array', 'prod_array'))
df.select('prc_array', 'prod_array', 'prc_prod_map').show(truncate=False)


+---------------------+------------------------------------------+-----------------------------------------------------------------------+
|prc_array            |prod_array                                |prc_prod_map                                                           |
+---------------------+------------------------------------------+-----------------------------------------------------------------------+
|[700, 250, 800, 1200]|[Desktop Computer, Tablet, iPhone, Laptop]|[700 -> Desktop Computer, 250 -> Tablet, 800 -> iPhone, 1200 -> Laptop]|
+---------------------+------------------------------------------+-----------------------------------------------------------------------+

Finally, apply explode on the map:

df = df.select(F.explode('prc_prod_map').alias('prc', 'prod'))
df.show(truncate=False)

+----+----------------+
|prc |prod            |
+----+----------------+
|700 |Desktop Computer|
|250 |Tablet          |
|800 |iPhone          |
|1200|Laptop          |
+----+----------------+

This way we avoid a time-consuming join between two tables.
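For reference, here is a minimal sketch that chains the same steps into a single expression (assuming map_from_arrays is available in your Spark version, and reusing the hypothetical file name above):

from pyspark.sql import functions as F

df = spark.read.json("your_json_file.json")
# Build the price->product map inline and explode it into (prc, prod) rows
result = df.select(
    F.explode(
        F.map_from_arrays(F.array(F.expr("Price.*")), F.array(F.expr("Product.*")))
    ).alias("prc", "prod")
)
result.show(truncate=False)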

Answer 2 (score: 0)

If you are using Spark < 2.4.4, then the answer below applies. However, because of the odd JSON schema I could not make it generic; for a real-life example, please produce better-formatted JSON.

PYSPARK VERSION

>>> from pyspark.sql import Row
>>> json_df = spark.read.json("file.json") # File in current directory
>>> json_df.show(20,False) # We only have 1 Row with two StructType columns
+---------------------+------------------------------------------+
|Price                |Product                                   |
+---------------------+------------------------------------------+
|[700, 250, 800, 1200]|[Desktop Computer, Tablet, iPhone, Laptop]|
+---------------------+------------------------------------------+
>>> # We convert the DataFrame to a Row and zip the two nested Rows,
>>> # assuming there will be no gap in values
>>> spark.createDataFrame(zip(json_df.first().__getitem__(0), json_df.first().__getitem__(1)), schema=["Price", "Product"]).show(20,False)

+-----+----------------+
|Price|Product         |
+-----+----------------+
|700  |Desktop Computer|
|250  |Tablet          |
|800  |iPhone          |
|1200 |Laptop          |
+-----+----------------+

SCALA VERSION (without the preferred case-class approach)

scala> val sparkDf = spark.read.json("file.json")
sparkDf: org.apache.spark.sql.DataFrame = [Price: struct<0: bigint, 1: bigint ... 2 more fields>, Product: struct<0: string, 1: string ... 2 more fields>]

scala> sparkDf.show(false)
+---------------------+------------------------------------------+
|Price                |Product                                   |
+---------------------+------------------------------------------+
|[700, 250, 800, 1200]|[Desktop Computer, Tablet, iPhone, Laptop]|
+---------------------+------------------------------------------+
scala> import spark.implicits._
import spark.implicits._

scala> (sparkDf.first.getStruct(0).toSeq.asInstanceOf[Seq[Long]], sparkDf.first.getStruct(1).toSeq.asInstanceOf[Seq[String]]).zipped.toList.toDF("Price","Product")
res6: org.apache.spark.sql.DataFrame = [Price: bigint, Product: string]

scala> // We do the same thing, but are able to use methods of Row; with Spark implicits we get a Dataset directly

scala> (sparkDf.first.getStruct(0).toSeq.asInstanceOf[Seq[Long]], sparkDf.first.getStruct(1).toSeq.asInstanceOf[Seq[String]]).zipped.toList.toDF("Price","Product").show(false)
+-----+----------------+
|Price|Product         |
+-----+----------------+
|700  |Desktop Computer|
|250  |Tablet          |
|800  |iPhone          |
|1200 |Laptop          |
+-----+----------------+