Defining a KSQL STRUCT over a topic containing JSON values of different types

Asked: 2018-11-22 21:59:00

Tags: apache-kafka apache-kafka-streams ksql

(Edit: minor edits to better reflect intent, plus larger edits to reflect progress made.)

The topic "t_raw" is fed messages of several types, all of which contain a common "type" key:

{"type":"key1","data":{"ts":"2018-11-20 19:20:21.1","a":1,"b":"hello"}}
{"type":"key2","data":{"ts":"2018-11-20 19:20:22.2","a":1,"c":11,"d":"goodbye"}}
{"type":"key1","data":{"ts":"2018-11-20 19:20:23.3","a":2,"b":"hello2"}}
{"type":"key2","data":{"ts":"2018-11-20 19:20:24.4","a":3,"c":22,"d":"goodbye2"}}

Ultimately I need to split this into other streams, where the data will be chopped up / aggregated / processed. I'd like to be able to use STRUCT for everything, but what currently works for me is this:

create stream raw (type varchar, data varchar) \
  with (kafka_topic='t_raw', value_format='JSON');

for the first level, and then

create stream key1 with (TIMESTAMP='ts', timestamp_format='yyyy-MM-dd HH:mm:ss.S') as \
  select \
    extractjsonfield(data, '$.ts') as ts, \
    extractjsonfield(data, '$.a') as a, extractjsonfield(data, '$.b') as b \
  from raw where type='key1';
create stream key2 with (TIMESTAMP='ts', timestamp_format='yyyy-MM-dd HH:mm:ss.S') as \
  select \
    extractjsonfield(data, '$.ts') as ts, \
    extractjsonfield(data, '$.a') as a, extractjsonfield(data, '$.c') as c, \
    extractjsonfield(data, '$.d') as d \
  from raw where type='key2';
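The TIMESTAMP='ts' and timestamp_format properties above tell KSQL to parse the extracted ts string as each record's timestamp, using the Java-style pattern 'yyyy-MM-dd HH:mm:ss.S'. A rough sketch of what that pattern matches, in plain Python (illustrative only; KSQL itself uses a Java date formatter, and the strptime format below is my approximation of the Java pattern):

```python
from datetime import datetime

# Approximate Python equivalent of the Java pattern 'yyyy-MM-dd HH:mm:ss.S'
TS_FORMAT = "%Y-%m-%d %H:%M:%S.%f"

# A 'ts' value taken from one of the sample messages above
ts = datetime.strptime("2018-11-20 19:20:21.1", TS_FORMAT)
print(ts.year, ts.microsecond)  # the trailing ".1" parses as 100000 microseconds
```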

This seems to work, but with STRUCT having been added recently, is there a way to use it here instead of extractjsonfield?

ksql> select * from key1;
1542741621100 | null | 2018-11-20 19:20:21.1 | 1 | hello
1542741623300 | null | 2018-11-20 19:20:23.3 | 2 | hello2
^CQuery terminated
ksql> select * from key2;
1542741622200 | null | 2018-11-20 19:20:22.2 | 1 | 11 | goodbye
1542741624400 | null | 2018-11-20 19:20:24.4 | 3 | 22 | goodbye2

If it cannot be done with STRUCT, is there a simple way to do this with vanilla Kafka Streams (sans ksql, ergo the apache-kafka-streams tag)?
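For what it's worth, the core of what a vanilla Kafka Streams app would do here is branch each record on its top-level "type" field. A minimal sketch of just that routing logic in plain Python (no Kafka client involved; route_by_type is a hypothetical helper, purely illustrative of the split a Streams branch()/split() would perform):

```python
import json

def route_by_type(raw_messages):
    """Group raw JSON messages by their top-level 'type' key,
    keeping only the nested 'data' payload for each record."""
    streams = {}
    for msg in raw_messages:
        record = json.loads(msg)
        streams.setdefault(record["type"], []).append(record["data"])
    return streams

# The sample messages from the question
raw = [
    '{"type":"key1","data":{"ts":"2018-11-20 19:20:21.1","a":1,"b":"hello"}}',
    '{"type":"key2","data":{"ts":"2018-11-20 19:20:22.2","a":1,"c":11,"d":"goodbye"}}',
    '{"type":"key1","data":{"ts":"2018-11-20 19:20:23.3","a":2,"b":"hello2"}}',
    '{"type":"key2","data":{"ts":"2018-11-20 19:20:24.4","a":3,"c":22,"d":"goodbye2"}}',
]

streams = route_by_type(raw)
print(sorted(streams))  # ['key1', 'key2']
```

In an actual Kafka Streams topology, each grouped branch would be written to its own output topic rather than collected in a dict.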

Is there a more kafka-esque/efficient/elegant way to parse this? I cannot define it as an empty STRUCT<>:

ksql> CREATE STREAM some_input ( type VARCHAR, data struct<> ) \
      WITH (KAFKA_TOPIC='t1', VALUE_FORMAT='JSON');
line 1:52: extraneous input '<>' expecting {',', ')'}

There is some (not-so-recent) discussion about being able to do something like

CREATE STREAM key1 ( a INT, b VARCHAR ) AS \
  SELECT data->* from some_input where type = 'key1';

FYI: the above solution does not work in confluent-5.0.0; a recent patch fixing the extractjsonfield bug is what enables it.

The actual data has several more, similar message types. They all contain the "type" and "data" keys (and no others at the top level), and almost all have the "ts" timestamp nested within "data".

1 Answer:

Answer 0: (score: 2)

Yes, you can do this. KSQL doesn't mind if columns don't exist; you simply get a null value.

Test data setup

Populate a topic with some test data:

kafkacat -b kafka:29092 -t t_raw -P <<EOF
{"type":"key1","data":{"ts":"2018-11-20 19:20:21.1","a":1,"b":"hello"}}
{"type":"key2","data":{"ts":"2018-11-20 19:20:22.2","a":1,"c":11,"d":"goodbye"}}
{"type":"key1","data":{"ts":"2018-11-20 19:20:23.3","a":2,"b":"hello2"}}
{"type":"key2","data":{"ts":"2018-11-20 19:20:24.4","a":3,"c":22,"d":"goodbye2"}}
EOF

Dump the topic to the KSQL console for inspection:

ksql> PRINT 't_raw' FROM BEGINNING;
Format:JSON
{"ROWTIME":1542965737436,"ROWKEY":"null","type":"key1","data":{"ts":"2018-11-20 19:20:21.1","a":1,"b":"hello"}}
{"ROWTIME":1542965737436,"ROWKEY":"null","type":"key2","data":{"ts":"2018-11-20 19:20:22.2","a":1,"c":11,"d":"goodbye"}}
{"ROWTIME":1542965737436,"ROWKEY":"null","type":"key1","data":{"ts":"2018-11-20 19:20:23.3","a":2,"b":"hello2"}}
{"ROWTIME":1542965737437,"ROWKEY":"null","type":"key2","data":{"ts":"2018-11-20 19:20:24.4","a":3,"c":22,"d":"goodbye2"}}
^CTopic printing ceased
ksql>

Model the source stream

Create a stream over it. Note the use of STRUCT, and that every possible column is declared:

CREATE STREAM T (TYPE VARCHAR, \
                 DATA STRUCT< \
                      TS VARCHAR, \
                      A INT, \
                      B VARCHAR, \
                      C INT, \
                      D VARCHAR>) \
        WITH (KAFKA_TOPIC='t_raw',\
              VALUE_FORMAT='JSON');

Set the offset to earliest so that we query the whole topic, and then access the full stream with KSQL:

ksql> SET 'auto.offset.reset' = 'earliest';
Successfully changed local property 'auto.offset.reset' from 'null' to 'earliest'
ksql>
ksql> SELECT * FROM T;
1542965737436 | null | key1 | {TS=2018-11-20 19:20:21.1, A=1, B=hello, C=null, D=null}
1542965737436 | null | key2 | {TS=2018-11-20 19:20:22.2, A=1, B=null, C=11, D=goodbye}
1542965737436 | null | key1 | {TS=2018-11-20 19:20:23.3, A=2, B=hello2, C=null, D=null}
1542965737437 | null | key2 | {TS=2018-11-20 19:20:24.4, A=3, B=null, C=22, D=goodbye2}
^CQuery terminated
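The nulls above are the behavior mentioned at the start of the answer: fields declared in the STRUCT but absent from a given message deserialize to null. The same idea can be sketched in plain Python by projecting each record onto the full declared schema (illustrative only; project is a hypothetical helper, not how KSQL is implemented):

```python
import json

# Every field declared in the STRUCT above
SCHEMA = ["ts", "a", "b", "c", "d"]

def project(data_json):
    """Project a JSON object onto the declared schema; fields absent from
    the message become None, mirroring KSQL returning null for them."""
    record = json.loads(data_json)
    return {field: record.get(field) for field in SCHEMA}

# A key1 message: it has no 'c' or 'd' fields
row = project('{"ts":"2018-11-20 19:20:21.1","a":1,"b":"hello"}')
print(row["b"], row["c"])  # 'c' is None, since it only appears in key2 messages
```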

Use the -> operator to access the nested elements, and query each type separately:

ksql> SELECT DATA->A,DATA->B FROM T WHERE TYPE='key1'  LIMIT 2;
1 | hello
2 | hello2

ksql> SELECT DATA->A,DATA->C,DATA->D FROM T WHERE TYPE='key2' LIMIT 2;
1 | 11 | goodbye
3 | 22 | goodbye2

Persist the data to separate Kafka topics

Populate the target topics with the separated data:

ksql> CREATE STREAM TYPE_1 AS SELECT DATA->TS, DATA->A, DATA->B FROM T WHERE TYPE='key1';

Message
----------------------------
Stream created and running
----------------------------
ksql> CREATE STREAM TYPE_2 AS SELECT DATA->TS, DATA->A, DATA->C, DATA->D FROM T WHERE TYPE='key2';

Message
----------------------------
Stream created and running
----------------------------

The schemas of the new streams:

ksql> DESCRIBE TYPE_1;

Name                 : TYPE_1
Field    | Type
--------------------------------------
ROWTIME  | BIGINT           (system)
ROWKEY   | VARCHAR(STRING)  (system)
DATA__TS | VARCHAR(STRING)
DATA__A  | INTEGER
DATA__B  | VARCHAR(STRING)
--------------------------------------
For runtime statistics and query details run: DESCRIBE EXTENDED <Stream,Table>;
ksql> DESCRIBE TYPE_2;

Name                 : TYPE_2
Field    | Type
--------------------------------------
ROWTIME  | BIGINT           (system)
ROWKEY   | VARCHAR(STRING)  (system)
DATA__TS | VARCHAR(STRING)
DATA__A  | INTEGER
DATA__C  | INTEGER
DATA__D  | VARCHAR(STRING)
--------------------------------------

There is a topic underlying each KSQL stream:

ksql> LIST TOPICS;

Kafka Topic                 | Registered | Partitions | Partition Replicas | Consumers | ConsumerGroups
---------------------------------------------------------------------------------------------------------
t_raw                       | true       | 1          | 1                  | 2         | 2
TYPE_1                      | true       | 4          | 1                  | 0         | 0
TYPE_2                      | true       | 4          | 1                  | 0         | 0
---------------------------------------------------------------------------------------------------------