How to correctly join two DataFrames in Spark

Time: 2019-12-05 15:52:05

Tags: scala apache-spark

Given the following datasets:

productsMetadataDF

{'asin': '0006428320', 'title': 'Six Sonatas For Two Flutes Or Violins, Volume 2 (#4-6)', 'price': 17.95, 'imUrl': 'http://ecx.images-amazon.com/images/I/41EpRmh8MEL._SY300_.jpg', 'salesRank': {'Musical Instruments': 207315}, 'categories': [['Musical Instruments', 'Instrument Accessories', 'General Accessories', 'Sheet Music Folders']]}

productsRatingsDF

{"reviewerID": "AORCXT2CLTQFR", "asin": "0006428320", "reviewerName": "Justo Roteta", "helpful": [0, 0], "overall": 4.0, "summary": "Not a classic but still a good album from Yellowman.", "unixReviewTime": 1383436800, "reviewTime": "11 3, 2013"}

and this function:

def findProductFeatures(productsRatingsDF : DataFrame, productsMetadataDF : DataFrame) : DataFrame = {
    productsRatingsDF
      .withColumn("averageRating", avg("overall"))
      .join(productsMetadataDF,"asin")
      .select($"asin", $"categories", $"price", $"averageRating")
  }

Is this the correct way to join these two datasets on asin?

This is the error I get:

Exception in thread "main" org.apache.spark.sql.AnalysisException: grouping expressions sequence is empty, and '`asin`' is not an aggregate function. Wrap '(avg(`overall`) AS `averageRating`)' in windowing function(s) or wrap '`asin`' in first() (or first_value) if you don't care which value you get.;;
Aggregate [asin#6, helpful#7, overall#8, reviewText#9, reviewTime#10, reviewerID#11, reviewerName#12, summary#13, unixReviewTime#14L, avg(overall#8) AS averageRating#99]
+- Relation[asin#6,helpful#7,overall#8,reviewText#9,reviewTime#10,reviewerID#11,reviewerName#12,summary#13,unixReviewTime#14L] json

I understand what the error is saying, but is there something wrong with the way I do the join? I tried swapping the order of .withColumn and .join, but that didn't work. The error also seems to appear when I try to compute avg("overall") into a column per asin.

The end result should be a DataFrame with 4 columns: "asin", "categories", "price", and "averageRating".

1 answer:

Answer 0 (score: 1)

The problem seems to be this line:

.withColumn("averageRating", avg("overall"))

Perform a proper aggregation before joining:

df
  .groupBy("asin") // your grouping columns
  .agg(avg("overall").as("averageRating"))
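For completeness, here is a minimal sketch of what the whole function could look like with the aggregation done before the join. It is based only on the column names shown in the question and uses col() instead of the $ syntax so that no spark.implicits._ import is required; treat it as an untested sketch rather than a definitive implementation:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{avg, col}

def findProductFeatures(productsRatingsDF: DataFrame, productsMetadataDF: DataFrame): DataFrame = {
  // Collapse the ratings to one row per asin first, so each product gets a single averageRating.
  val avgRatingsDF = productsRatingsDF
    .groupBy("asin")
    .agg(avg("overall").as("averageRating"))

  // Then join the per-product averages with the metadata on asin and keep the requested columns.
  avgRatingsDF
    .join(productsMetadataDF, "asin")
    .select(col("asin"), col("categories"), col("price"), col("averageRating"))
}

Aggregating before the join also keeps the join input small: the ratings side contributes one row per asin instead of one row per review.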