I am trying to join two dataframes on df1's portfolio name column (PortfolioCode) to df2.portId, and I don't want the join key duplicated in the resulting dataframe.
Here is my code so far:
val df = spark.read.json("C:\\json\\portmast")
val pgetsec = spark.read.json("C:\\json\\pgetsec")
val portfolio_master = df.select("PortfolioCode","Legal Entity Name","Asofdate")
val pgetsecs = pgetsec.select("TransId", "SecId","portId","GaapCurBkBal","ParBal","SetlDt","SetlPric","OrgBkBal","TradeDt","StatCurBkBal","NaicRtg","SecurityTypeCode","CamraSecType","FundType","CountryIso")
val pg = portfolio_master.join(pgetsec,Seq("PortfolioCode","portId"),"left_outer")
The error I get is:
Exception in thread "main" org.apache.spark.sql.AnalysisException: using columns ['PortfolioCode,'portId] can not be resolved given input columns:
The final JSON should look like this:
|-- Portfolio Code: string (nullable = true)
|-- Legal Entity Name: string (nullable = true)
|-- Asofdate: string (nullable = true)
((SI, S&P 500 Index,9/30/2016),[0.0,Equity,Common Stock])
((SI, S&P 500 Index,9/30/2016),[0.0,Equity,Common Stock])
((SI, S&P 500 Index,9/30/2016),[0.0,Equity,Common Stock])
[SI1, S&P 500 Index,9/30/2016,CompactBuffer([0.0,Equity,Common Stock], [0.0,Equity,Common Stock], [0.0,Equity,Common Stock])]
root
|-- Portfolio Code: string (nullable = true)
|-- Legal Entity Name: string (nullable = true)
|-- Asofdate: string (nullable = true)
|-- Security: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- BondPrice: double (nullable = true)
| | |-- CoreSectorLevel1Code: string (nullable = true)
| | |-- CoreSectorLevel2Code: string (nullable = true)
+--------------+-------------------+---------+--------------------+
|Portfolio Code| Legal Entity Name| Asofdate| Security|
+--------------+-------------------+---------+--------------------+
| SI | S&P 500 Index |9/30/2016|[[0.0,Equity,Comm...|
+--------------+-------------------+---------+--------------------+
Any help is appreciated.
Answer 0 (score: 2)
portId does not exist in portfolio_master, and PortfolioCode does not exist in pgetsec. If you re-read the full error message you will see that it explains this, since it also lists the available columns.
What you want is portfolio_master("PortfolioCode") === pgetsec("portId") as your join condition.
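To also get the single-key, nested result shown in the question, you can drop the duplicated key column after the join and then group the securities into an array of structs. The following is a minimal sketch, not tested against the actual data: BondPrice, CoreSectorLevel1Code and CoreSectorLevel2Code are taken from the desired schema above, and it is an assumption that they exist in the pgetsec JSON.

import org.apache.spark.sql.functions.{col, collect_list, struct}

// Join on the differently named keys. An explicit equality condition avoids
// the AnalysisException raised by Seq("PortfolioCode", "portId"), and dropping
// pgetsec("portId") keeps a single copy of the join key in the result.
val joined = portfolio_master
  .join(pgetsec, portfolio_master("PortfolioCode") === pgetsec("portId"), "left_outer")
  .drop(pgetsec("portId"))

// Collapse to one row per portfolio, gathering the matched securities into an
// array of structs so the output matches the target schema.
val result = joined
  .groupBy(col("PortfolioCode"), col("Legal Entity Name"), col("Asofdate"))
  .agg(collect_list(struct(
    col("BondPrice"),
    col("CoreSectorLevel1Code"),
    col("CoreSectorLevel2Code")
  )).as("Security"))
  .withColumnRenamed("PortfolioCode", "Portfolio Code")

result.printSchema()
result.show(truncate = false)

With the sample data above this should print one row per portfolio, e.g. SI with its three matching securities collected into the Security array.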