Is it possible to join two Spark Structured Streams in Spark 2.2.1? I have run into many problems performing even very simple operations with Structured Streaming, and the documentation and examples seem limited. I have two streaming data sources:
persons.json:
[
{"building_id": 70, "id": 21, "latitude": 41.20, "longitude": 2.2, "timestamp": 1532609003},
{"building_id": 70, "id": 15, "latitude": 41.24, "longitude": 2.3, "timestamp": 1532609005},
{"building_id": 71, "id": 11, "latitude": 41.28, "longitude": 2.1, "timestamp": 1532609005}
]
machines.json:
[
{"building_id": 70, "mid": 222, "latitude": 42.1, "longitude": 2.11}
]
The goal is a joined DataFrame containing the latitude and longitude of both the person and the machine, so that I can estimate the distance between them in real time:
building_id id mid latitude longitude latitude_machine longitude_machine
70 21 222 41.20 2.2 42.1 2.11
# ...
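Once the joined rows are available, the distance itself is a plain great-circle (haversine) computation. A minimal pure-Python sketch using the sample coordinates above; in Spark this could be wrapped in a UDF or expressed with the built-in trigonometric column functions (the function name `haversine_km` is my own):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Person 21 (41.20, 2.2) vs machine 222 (42.1, 2.11) from the sample data,
# roughly 100 km apart:
dist = haversine_km(41.20, 2.2, 42.1, 2.11)
```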
If joining the two streams is not possible, I would appreciate suggestions for a workable workaround.
Code:
from pyspark.sql import SparkSession
from pyspark.sql.types import (StructType, StructField, IntegerType,
                               DoubleType, LongType)

spark = SparkSession \
    .builder \
    .appName("Test") \
    .master("local[2]") \
    .getOrCreate()

schema_persons = StructType([
    StructField("building_id", IntegerType()),
    StructField("id", IntegerType()),
    StructField("latitude", DoubleType()),
    StructField("longitude", DoubleType()),
    StructField("timestamp", LongType())
])

schema_machines = StructType([
    StructField("building_id", IntegerType()),
    StructField("mid", IntegerType()),
    StructField("latitude", DoubleType()),
    StructField("longitude", DoubleType())
])

df_persons = spark \
    .readStream \
    .format("json") \
    .schema(schema_persons) \
    .load("data/persons")

df_machines = spark \
    .readStream \
    .format("json") \
    .schema(schema_machines) \
    .load("data/machines") \
    .withColumnRenamed("latitude", "latitude_machine") \
    .withColumnRenamed("longitude", "longitude_machine")

df_joined = df_persons \
    .join(df_machines, ["building_id"], "left")

query_persons = df_persons \
    .writeStream \
    .format('console') \
    .start()

query_machines = df_machines \
    .writeStream \
    .format('console') \
    .start()

query_persons.awaitTermination()
query_machines.awaitTermination()