I am running the query below over roughly 4 billion rows and keep getting
org.apache.spark.shuffle.FetchFailedException errors.
SELECT adid, position, userid, price
FROM (
  SELECT adid, position, userid, price,
         dense_rank() OVER (PARTITION BY adlocationid ORDER BY price DESC) AS rank
  FROM trainInfo) AS tmp
WHERE rank <= 2
I have attached the error logs from the spark-sql shell. What causes these errors, and how can I resolve them?
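For context, FetchFailedException is thrown when a reducer task cannot fetch a shuffle block, which at this data volume often traces back to executors dying under memory pressure or shuffle fetches timing out. A first thing commonly tried is raising the shuffle partition count and the network timeout, sketched below; the specific values and the file name rank_query.sql are illustrative assumptions, not settings known to fix this case:

```shell
# Illustrative spark-sql invocation (a config sketch, values are assumptions):
# - more shuffle partitions => smaller shuffle blocks per fetch
# - longer network timeout  => tolerates slow shuffle fetches
# - more executor memory    => fewer executors killed mid-shuffle
spark-sql \
  --conf spark.sql.shuffle.partitions=2000 \
  --conf spark.network.timeout=600s \
  --conf spark.executor.memory=8g \
  -f rank_query.sql
```

Whether these help depends on the actual failure mode in the attached logs (e.g. OutOfMemoryError on executors vs. lost node), so they are a starting point rather than a diagnosis.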