I have this simple piece of code:
var count = event_stream
  .groupBy("value").count()

event_stream.join(count, "value").printSchema() // the error is thrown on this line
The schemas of event_stream and count are as follows:
root
|-- key: binary (nullable = true)
|-- value: binary (nullable = true)
|-- topic: string (nullable = true)
|-- partition: integer (nullable = true)
|-- offset: long (nullable = true)
|-- timestamp: timestamp (nullable = true)
|-- timestampType: integer (nullable = true)
root
|-- value: binary (nullable = true)
|-- count: long (nullable = false)
Two questions:
Why does this error occur, and how do I fix it?
Why does groupBy.count drop all the other columns?
The error is as follows:
Exception in thread "main" org.apache.spark.sql.AnalysisException:
Failure when resolving conflicting references in Join:
'Join Inner
:- AnalysisBarrier
: +- StreamingRelationV2 org.apache.spark.sql.kafka010.KafkaSourceProvider@7f2c57fe, kafka, Map(startingOffsets -> latest, failOnDataLoss -> false, subscribe -> events-identification-carrier, kafka.bootstrap.servers -> svc-kafka-pre-c1-01.jamba.net:9092), [key#7, value#8, topic#9, partition#10, offset#11L, timestamp#12, timestampType#13], StreamingRelation DataSource(org.apache.spark.sql.SparkSession@3dbd7107,kafka,List(),None,List(),None,Map(startingOffsets -> latest, failOnDataLoss -> false, subscribe -> events-identification-carrier, kafka.bootstrap.servers -> svc-kafka-pre-c1-01.jamba.net:9092),None), kafka, [key#0, value#1, topic#2, partition#3, offset#4L, timestamp#5, timestampType#6]
+- AnalysisBarrier
+- Aggregate [value#8], [value#8, count(1) AS count#46L]
+- StreamingRelationV2 org.apache.spark.sql.kafka010.KafkaSourceProvider@7f2c57fe, kafka, Map(startingOffsets -> latest, failOnDataLoss -> false, subscribe -> events-identification-carrier, kafka.bootstrap.servers -> svc-kafka-pre-c1-01.jamba.net:9092), [key#7, value#8, topic#9, partition#10, offset#11L, timestamp#12, timestampType#13], StreamingRelation DataSource(org.apache.spark.sql.SparkSession@3dbd7107,kafka,List(),None,List(),None,Map(startingOffsets -> latest, failOnDataLoss -> false, subscribe -> events-identification-carrier, kafka.bootstrap.servers -> svc-kafka-pre-c1-01.jamba.net:9092),None), kafka, [key#0, value#1, topic#2, partition#3, offset#4L, timestamp#5, timestampType#6]
Conflicting attributes: value#8
EDIT: Yes! Renaming the column works. But now, since a join is used, I have to use OutputMode.Append, and for that I need to add a watermark to the stream.
What I actually want is to extract the count and the topic (from the schema printed above) from the resulting DF and write them to some sink.
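For reference, here is a minimal sketch of one join-free direction that matches this goal (it is not taken from either answer below): include topic in the grouping keys, add a watermark on the Kafka timestamp column, and write the aggregation in Append mode. The 10-minute window/watermark and the console sink are placeholder assumptions.

import org.apache.spark.sql.functions.{col, window}
import org.apache.spark.sql.streaming.OutputMode

// Sketch only: grouping by topic as well as value avoids the self-join entirely,
// and the watermark makes OutputMode.Append legal for a streaming aggregation.
val topicCounts = event_stream
  .withWatermark("timestamp", "10 minutes")          // placeholder watermark delay
  .groupBy(window(col("timestamp"), "10 minutes"),   // placeholder window size
           col("topic"), col("value"))
  .count()
  .select(col("topic"), col("count"))

topicCounts.writeStream
  .outputMode(OutputMode.Append())
  .format("console") // stand-in for the real sink
  .start()
  .awaitTermination()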
Answer 0 (score: 1)
Why does this error occur, and how do I fix it?
I think you are getting the error because the final join schema contains two value fields, one from each side of the join. To fix it, rename the value field on one of the two joined streams, like this:
import spark.implicits._ // assuming a SparkSession named spark, needed for the $ column syntax

var count = event_stream.
  groupBy("value").count().
  withColumnRenamed("value", "join_id")

event_stream.join(count, $"value" === $"join_id").
  drop("join_id").
  printSchema()
Why does groupBy.count drop all the other columns?
The groupBy operation essentially splits your fields into two lists: the fields used as keys and the fields to be aggregated. The key fields simply pass through to the final result, but any field not in the key list needs an aggregation defined for it to appear in the output; otherwise Spark has no way of knowing how to combine the multiple values of that field! Do you just want a count? Do you want the maximum? Do you want to see all the distinct values? You specify how each field is aggregated in an .agg(...) call.
Example:
import org.apache.spark.sql.functions.{collect_set, max}
import spark.implicits._ // for .toDF on a local Seq

val input = Seq(
  (1, "Bob", 4),
  (1, "John", 5)
).toDF("key", "name", "number")

input.groupBy("key").
  agg(collect_set("name") as "names",
      max("number") as "maxnum").
  show
+---+-----------+------+
|key|      names|maxnum|
+---+-----------+------+
|  1|[Bob, John]|     5|
+---+-----------+------+
Answer 1 (score: -1)
The error is caused by the column name used for the join. You can do something like this instead:
var count = event_stream
  .groupBy("value").count()

event_stream.join(count, Seq("value"))