I have a simple JSON array and can read it into a Spark DataFrame. Could you help me wrap all of these columns under a custom root tag? More precisely, I want the exact opposite of the explode option: collapse the entire set of DataFrame rows under a single custom root column.
Initial JSON data:
[{"tpeKeyId":"301461865","acctImplMgrId":null,"acctMgrId":null,"agreCancDt":null,"agreEffDt":null,"pltfrmNm":"EMPLOYEE NAVIGATOR","premPyRmtInd":null,"recCrtTs":"2016-11-08 13:01:44.290418","recCrtUsrId":"testedname","recUpdtTs":"2018-10-16 12:16:21.579446","recUpdtUsrId":"testname","spclInstrFormCd":null,"sysCd":null,"tpeNm":"EMPLOYEE NAVIGATOR","univPrdcrId":"9393939393"},{"tpeKeyId":"901972280","acctImplMgrId":null,"acctMgrId":null,"agreCancDt":null,"agreEffDt":null,"pltfrmNm":"datalion","premPyRmtInd":null,"recCrtTs":"2018-12-10 01:36:14.925833","recCrtUsrId":"exactlydata","recUpdtTs":"2018-12-10 01:36:14.925833","recUpdtUsrId":"datalion ","spclInstrFormCd":null,"sysCd":null,"tpeNm":"lialion","univPrdcrId":"89899898989"}]
First Dataframe:
+-------------+---------+----------+---------+------------------+------------+--------------------------+-----------+--------------------------+----------------+---------------+-----+---------+------------------+-----------+
|acctImplMgrId|acctMgrId|agreCancDt|agreEffDt|pltfrmNm |premPyRmtInd|recCrtTs |recCrtUsrId|recUpdtTs |recUpdtUsrId |spclInstrFormCd|sysCd|tpeKeyId |tpeNm |univPrdcrId|
+-------------+---------+----------+---------+------------------+------------+--------------------------+-----------+--------------------------+----------------+---------------+-----+---------+------------------+-----------+
|null |null |null |null |EMPLOYEE NAVIGATOR|null |2016-11-08 13:01:44.290418|testedname |2018-10-16 12:16:21.579446|testname |null |null |301461865|EMPLOYEE NAVIGATOR|9393939393 |
|null |null |null |null |datalion |null |2018-12-10 01:36:14.925833|exactlydata|2018-12-10 01:36:14.925833|datalion |null |null |901972280|lialion |89899898989|
+-------------+---------+----------+---------+------------------+------------+--------------------------+-----------+--------------------------+----------------+---------------+-----+---------+------------------+-----------+
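For context, the first DataFrame comes from reading the array directly (a sketch; `spark` is an active SparkSession and `tpe.json` is a hypothetical path holding the array above):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("roottag-demo").master("local[*]").getOrCreate()

// multiLine mode reads the whole file as one JSON document, so a
// top-level array yields one DataFrame row per element
val df = spark.read.option("multiLine", true).json("tpe.json")
df.show(false)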
After manually concatenating the root tag:
import spark.implicits._ // needed for Seq(...).toDS()

// `fileContents` holds the raw JSON array, read from the file as one string
val addingRootTag = "{ \"roottag\" :" + fileContents + "}"
val rootTagDf = spark.read.json(Seq(addingRootTag).toDS())
rootTagDf.show(false)
Second Dataframe:
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|roottag |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|[[,,,, EMPLOYEE NAVIGATOR,, 2016-11-08 13:01:44.290418, testedname, 2018-10-16 12:16:21.579446, testname,,, 301461865, EMPLOYEE NAVIGATOR, 9393939393], [,,,, datalion,, 2018-12-10 01:36:14.925833, exactlydata, 2018-12-10 01:36:14.925833, datalion ,,, 901972280, lialion, 89899898989]]|
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
The question is: does the Spark framework's supported API offer any such method, so that the manual concatenation of the roottag can be avoided and the first DataFrame can be wrapped to display as the second DataFrame? EXACTLY OPPOSITE TO THE EXPLODE OPTION.
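For illustration, one way to express this "inverse explode" with plain DataFrame operations would be to aggregate all rows into an array of structs (a sketch, assuming the first DataFrame is bound to `df`; whether there is a more direct built-in is exactly the question):

import org.apache.spark.sql.functions.{col, collect_list, struct}

// Gather every row into one array-of-structs column named "roottag" --
// the inverse of explode, which turns each array element into its own row
val wrapped = df.agg(collect_list(struct(df.columns.map(col): _*)).as("roottag"))
wrapped.show(false)

Note that collect_list gathers all rows into a single result row, so this is only reasonable for DataFrames small enough to fit on one node, like the one above.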