How to join a sensor-data topic with an indicator topic in KSQL when they share no common ID

Time: 2018-11-06 16:20:00

Tags: apache-kafka ksql

I have no control over how the data streams from the sensor server into the topics.

Topic A contains sensor data with multiple payloads (a, b, c, d, ...).

Topic B contains indicator messages (like 1, 2, ...) telling me that, from now on, the incoming sensor data in topic A belongs to a new object x rather than to x-1.

I want to associate the data from topic A with whatever object from topic B is current at that moment.

I'm new to KSQL and streaming logic, so I don't know whether this is feasible. It feels like there should be a very simple solution, but I haven't found anything similar in the examples.

Edit:

The sensor data (topic A) looks like this:

sensorPath                        timestamp  value
simulation/machine/plc/sensor-1 | 1 |        7.0
simulation/machine/plc/sensor-2 | 1 |        2.0
simulation/machine/plc/sensor-1 | 2 |        6.0
simulation/machine/plc/sensor-2 | 2 |        1.0
...
simulation/machine/plc/sensor-1 | 10 |       10.0
simulation/machine/plc/sensor-2 | 10 |       12.0

The indicator data (topic B) might look like this:

informationPath                timestamp   WorkpieceID
simulation/informationString | 1  |        0020181
simulation/informationString | 10 |        0020182

I basically want to match the sensor data with the corresponding workpiece in a new topic/stream. Newly arriving sensor data always belongs to the most recent information string/workpiece.

So topic C should look like:

sensorPath                        SensorTimestamp  value WorkpieceID
simulation/machine/plc/sensor-1 | 1 |              7.0 | 0020181
simulation/machine/plc/sensor-2 | 1 |              2.0 | 0020181             
simulation/machine/plc/sensor-1 | 2 |              6.0 | 0020181
simulation/machine/plc/sensor-2 | 2 |              1.0 | 0020181
...
simulation/machine/plc/sensor-1 | 10 |             10.0| 0020182
simulation/machine/plc/sensor-2 | 10 |             12.0| 0020182

So do I need a join on topicA.timestamp >= current(topicB.timestamp)?

1 answer:

Answer 0 (score: 4)

Yes, you can do this with KSQL. Here's a worked example. I'm using this docker-compose file for my test environment, in case you want to reproduce the example below.

First, I populate some test data based on the samples you gave. I've contrived the timestamps relative to the current epoch, with the later readings arriving one and ten minutes after the first:

  • Sensor test data:

    docker run --rm --interactive --network cos_default confluentinc/cp-kafkacat kafkacat -b kafka:29092 -t sensor -P <<EOF
    {"sensorPath":"simulation/machine/plc/sensor-1","value":7.0,"timestamp":1541623171000}
    {"sensorPath":"simulation/machine/plc/sensor-2","value":2.0,"timestamp":1541623171000}
    {"sensorPath":"simulation/machine/plc/sensor-1","value":6.0,"timestamp":1541623231000}
    {"sensorPath":"simulation/machine/plc/sensor-2","value":1.0,"timestamp":1541623231000}
    {"sensorPath":"simulation/machine/plc/sensor-1","value":10.0,"timestamp":1541623771000}
    {"sensorPath":"simulation/machine/plc/sensor-2","value":12.0,"timestamp":1541623771000}
    EOF
    
  • Indicator test data:

    docker run --rm --interactive --network cos_default confluentinc/cp-kafkacat kafkacat -b kafka:29092 -t indicator -P << EOF
    {"informationPath":"simulation/informationString","WorkpieceID":"0020181","timestamp":1541623171000}
    {"informationPath":"simulation/informationString","WorkpieceID":"0020182","timestamp":1541623771000}
    EOF
    

Now I fire up the KSQL CLI:

docker run --network cos_default --interactive --tty --rm \
    confluentinc/cp-ksql-cli:5.0.0 \
    http://ksql-server:8088

In KSQL we can inspect the source data in the topics:

ksql> PRINT 'sensor' FROM BEGINNING;
Format:JSON
{"ROWTIME":1541624847072,"ROWKEY":"null","sensorPath":"simulation/machine/plc/sensor-1","value":7.0,"timestamp":1541623171000}
{"ROWTIME":1541624847072,"ROWKEY":"null","sensorPath":"simulation/machine/plc/sensor-2","value":2.0,"timestamp":1541623171000}
{"ROWTIME":1541624847072,"ROWKEY":"null","sensorPath":"simulation/machine/plc/sensor-1","value":6.0,"timestamp":1541623231000}
{"ROWTIME":1541624847072,"ROWKEY":"null","sensorPath":"simulation/machine/plc/sensor-2","value":1.0,"timestamp":1541623231000}
{"ROWTIME":1541624847072,"ROWKEY":"null","sensorPath":"simulation/machine/plc/sensor-1","value":10.0,"timestamp":1541623771000}
{"ROWTIME":1541624847072,"ROWKEY":"null","sensorPath":"simulation/machine/plc/sensor-2","value":12.0,"timestamp":1541623771000}

ksql> PRINT 'indicator' FROM BEGINNING;
Format:JSON
{"ROWTIME":1541624851692,"ROWKEY":"null","informationPath":"simulation/informationString","WorkpieceID":"0020181","timestamp":1541623171000}
{"ROWTIME":1541624851692,"ROWKEY":"null","informationPath":"simulation/informationString","WorkpieceID":"0020182","timestamp":1541623771000}

Now we register the topics for use in KSQL and declare their schemas:

ksql> CREATE STREAM SENSOR (SENSORPATH VARCHAR, VALUE DOUBLE, TIMESTAMP BIGINT) WITH (VALUE_FORMAT='JSON',KAFKA_TOPIC='sensor',TIMESTAMP='timestamp');

Message
----------------
Stream created
----------------
ksql> CREATE STREAM INDICATOR (INFORMATIONPATH VARCHAR, WORKPIECEID VARCHAR, TIMESTAMP BIGINT) WITH (VALUE_FORMAT='JSON',KAFKA_TOPIC='indicator',TIMESTAMP='timestamp');

Message
----------------
Stream created
----------------

We can query the KSQL streams we've created:

ksql> SET 'auto.offset.reset' = 'earliest';
ksql> SELECT ROWTIME, timestamp, TIMESTAMPTOSTRING(ROWTIME, 'yyyy-MM-dd HH:mm:ss Z'), TIMESTAMPTOSTRING(timestamp, 'yyyy-MM-dd HH:mm:ss Z') , sensorpath, value FROM sensor;
1541623171000 | 1541623171000 | 2018-11-07 20:39:31 +0000 | 2018-11-07 20:39:31 +0000 | simulation/machine/plc/sensor-1 | 7.0
1541623171000 | 1541623171000 | 2018-11-07 20:39:31 +0000 | 2018-11-07 20:39:31 +0000 | simulation/machine/plc/sensor-2 | 2.0
1541623231000 | 1541623231000 | 2018-11-07 20:40:31 +0000 | 2018-11-07 20:40:31 +0000 | simulation/machine/plc/sensor-1 | 6.0
1541623231000 | 1541623231000 | 2018-11-07 20:40:31 +0000 | 2018-11-07 20:40:31 +0000 | simulation/machine/plc/sensor-2 | 1.0
1541623771000 | 1541623771000 | 2018-11-07 20:49:31 +0000 | 2018-11-07 20:49:31 +0000 | simulation/machine/plc/sensor-1 | 10.0
1541623771000 | 1541623771000 | 2018-11-07 20:49:31 +0000 | 2018-11-07 20:49:31 +0000 | simulation/machine/plc/sensor-2 | 12.0

ksql> SELECT ROWTIME, timestamp, TIMESTAMPTOSTRING(ROWTIME, 'yyyy-MM-dd HH:mm:ss Z'), TIMESTAMPTOSTRING(timestamp, 'yyyy-MM-dd HH:mm:ss Z') , informationPath, WorkpieceID FROM indicator;
1541623171000 | 1541623171000 | 2018-11-07 20:39:31 +0000 | 2018-11-07 20:39:31 +0000 | simulation/informationString | 0020181
1541623771000 | 1541623771000 | 2018-11-07 20:49:31 +0000 | 2018-11-07 20:49:31 +0000 | simulation/informationString | 0020182

Note that the ROWTIME of the STREAM is different from the ROWTIME in the PRINT output above. That's because PRINT shows the timestamp of the Kafka message itself, whereas in the STREAM we've overridden the timestamp via the WITH clause so that the timestamp column from the message payload is used instead.

To join the two topics, we're going to do two things:

  1. Create an artificial key on which to join them, since none exists in the current data. We'll also use this new column as the key of the Kafka messages (this is necessary for the join to work).
  2. Model the stream of indicator events as a KSQL table. This lets us query the current state of the WorkpieceID value as of a given timestamp.

To add the artificial join key, just select a constant and alias it with an AS clause, then use it as the message key with PARTITION BY:

ksql> CREATE STREAM SENSOR_KEYED AS SELECT sensorPath, value, 'X' AS JOIN_KEY FROM sensor PARTITION BY JOIN_KEY;

Message
----------------------------
Stream created and running
----------------------------

Out of interest, we can inspect the resulting Kafka topic:

ksql> PRINT SENSOR_KEYED FROM BEGINNING;
Format:JSON
{"ROWTIME":1541623171000,"ROWKEY":"X","SENSORPATH":"simulation/machine/plc/sensor-1","VALUE":7.0,"JOIN_KEY":"X"}
{"ROWTIME":1541623171000,"ROWKEY":"X","SENSORPATH":"simulation/machine/plc/sensor-2","VALUE":2.0,"JOIN_KEY":"X"}
{"ROWTIME":1541623231000,"ROWKEY":"X","SENSORPATH":"simulation/machine/plc/sensor-1","VALUE":6.0,"JOIN_KEY":"X"}
{"ROWTIME":1541623231000,"ROWKEY":"X","SENSORPATH":"simulation/machine/plc/sensor-2","VALUE":1.0,"JOIN_KEY":"X"}
{"ROWTIME":1541623771000,"ROWKEY":"X","SENSORPATH":"simulation/machine/plc/sensor-1","VALUE":10.0,"JOIN_KEY":"X"}
{"ROWTIME":1541623771000,"ROWKEY":"X","SENSORPATH":"simulation/machine/plc/sensor-2","VALUE":12.0,"JOIN_KEY":"X"}

Note that the ROWKEY is now the JOIN_KEY, instead of the NULL it was in the PRINT 'sensor' output above. If you omit PARTITION BY, the JOIN_KEY column is still added, but the messages remain un-keyed, which is not what we want if the join is to work.
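
For contrast, here is a minimal sketch of the variant that the note above warns against (the stream name SENSOR_UNKEYED is made up for illustration). Without PARTITION BY the JOIN_KEY column still appears in the value, but ROWKEY stays null:

ksql> CREATE STREAM SENSOR_UNKEYED AS SELECT sensorPath, value, 'X' AS JOIN_KEY FROM sensor;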

Now we re-key the indicator data too:

ksql> CREATE STREAM INDICATOR_KEYED AS SELECT informationPath, WorkpieceID, 'X' as JOIN_KEY FROM indicator PARTITION BY JOIN_KEY;

Message
----------------------------
Stream created and running
----------------------------
ksql> PRINT 'INDICATOR_KEYED' FROM BEGINNING;
Format:JSON
{"ROWTIME":1541623171000,"ROWKEY":"X","INFORMATIONPATH":"simulation/informationString","WORKPIECEID":"0020181","JOIN_KEY":"X"}
{"ROWTIME":1541623771000,"ROWKEY":"X","INFORMATIONPATH":"simulation/informationString","WORKPIECEID":"0020182","JOIN_KEY":"X"}

With the indicator data re-keyed, we can now register it as a KSQL table. With a table, KSQL returns the state for a given key rather than every single event. We're using this approach to determine, based on the timestamps, which WorkpieceID is associated with a sensor reading.

ksql> CREATE TABLE INDICATOR_STATE (JOIN_KEY VARCHAR, informationPath varchar, WorkpieceID varchar) with (value_format='json',kafka_topic='INDICATOR_KEYED',KEY='JOIN_KEY');

Message
---------------
Table created
---------------

Querying the table shows a single value, i.e. the current state:

ksql> SELECT * FROM INDICATOR_STATE;
1541623771000 | X | X | simulation/informationString | 0020182

If at this point you sent another message to the indicator topic, the state of the table would be updated and you would see a new row emitted by the SELECT.
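
For example, you could reuse the kafkacat producer pattern from above to send a third indicator event (the WorkpieceID and timestamp below are made-up values, purely for illustration):

docker run --rm --interactive --network cos_default confluentinc/cp-kafkacat kafkacat -b kafka:29092 -t indicator -P << EOF
{"informationPath":"simulation/informationString","WorkpieceID":"0020183","timestamp":1541624371000}
EOF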

Finally, we can do the stream-table join and persist it to a new topic:

ksql> CREATE STREAM SENSOR_ENRICHED AS SELECT S.SENSORPATH, TIMESTAMPTOSTRING(S.ROWTIME, 'yyyy-MM-dd HH:mm:ss Z') AS SENSOR_TIMESTAMP, S.VALUE, I.WORKPIECEID FROM SENSOR_KEYED S LEFT JOIN INDICATOR_STATE I ON S.JOIN_KEY=I.JOIN_KEY;

Message
----------------------------
Stream created and running
----------------------------

Examining the new stream:

ksql> DESCRIBE SENSOR_ENRICHED;

Name                 : SENSOR_ENRICHED
Field            | Type
----------------------------------------------
ROWTIME          | BIGINT           (system)
ROWKEY           | VARCHAR(STRING)  (system)
SENSORPATH       | VARCHAR(STRING)
SENSOR_TIMESTAMP | VARCHAR(STRING)
VALUE            | DOUBLE
WORKPIECEID      | VARCHAR(STRING)
----------------------------------------------
For runtime statistics and query details run: DESCRIBE EXTENDED <Stream,Table>;

Querying the new stream:

ksql> SELECT SENSORPATH, SENSOR_TIMESTAMP, VALUE, WORKPIECEID FROM SENSOR_ENRICHED;
simulation/machine/plc/sensor-1 | 2018-11-07 20:39:31 +0000 | 7.0 | 0020181
simulation/machine/plc/sensor-2 | 2018-11-07 20:39:31 +0000 | 2.0 | 0020181
simulation/machine/plc/sensor-1 | 2018-11-07 20:40:31 +0000 | 6.0 | 0020181
simulation/machine/plc/sensor-2 | 2018-11-07 20:40:31 +0000 | 1.0 | 0020181
simulation/machine/plc/sensor-1 | 2018-11-07 20:49:31 +0000 | 10.0 | 0020182
simulation/machine/plc/sensor-2 | 2018-11-07 20:49:31 +0000 | 12.0 | 0020182

Since this is KSQL, the SENSOR_ENRICHED stream (and its underlying topic of the same name) will be populated continuously, driven by events arriving on the sensor topic and reflecting any changes of state based on events sent to the indicator topic.
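
If you'd rather consume the underlying topic directly, outside of KSQL, a minimal sketch using the same kafkacat image as above (consume mode, reading from the beginning of the topic) would be:

docker run --rm --interactive --network cos_default confluentinc/cp-kafkacat kafkacat -b kafka:29092 -t SENSOR_ENRICHED -C -o beginning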