How do I resume a Flink job submitted from sql-client.sh from a savepoint?

Date: 2021-03-18 17:45:45

Tags: apache-flink flink-sql

I submitted a job from the apache-flink sql-client and created a savepoint. The problem is that the savepoint metadata contains neither the jar nor the main class name, let alone the job arguments. How can I restart the job?

2 answers:

Answer 0: (score: 0)

flink run -s hdfs://ns/flink/flink-checkpoints/savepoint-c5dade-af74904ab30c -m yarn-cluster -yid application_1539849585041_0459 -c org.apache.flink.table.client.SqlClient opt/flink-sql-client-1.6.1.jar embedded -e /opt/flink/flink-bin/yarn-app/a/sql-client-kafka-json.yaml --library /opt/flink/flink-bin/lib -u 'INSERT INTO testjsonSink SELECT * FROM testjsonSource;'

Adjust the jars and library paths for your setup. After that it resumed as before. Poor documentation.
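The answer above boils down to a general pattern: run the SQL client's own jar through `flink run`, supplying the main class, environment file, and statement yourself, since the savepoint metadata does not carry them. A minimal sketch of that pattern follows; every path, savepoint ID, and the environment-file name are placeholders for your own installation, not values taken from the original post.

```shell
#!/bin/sh
# Sketch only: generic shape of "resume a sql-client job from a savepoint".
# All values below are placeholders, to be replaced with your own.
SAVEPOINT="hdfs:///flink-checkpoints/savepoint-xxxxxx"       # savepoint to restore from
SQL_CLIENT_JAR="/opt/flink/opt/flink-sql-client-1.6.1.jar"   # ships with the Flink distribution
ENV_FILE="/opt/flink/conf/sql-client-env.yaml"               # hypothetical sql-client environment file
LIB_DIR="/opt/flink/lib"                                     # directory holding connector/format jars
QUERY="INSERT INTO testjsonSink SELECT * FROM testjsonSource;"

# -s restores from the savepoint; -c supplies the main class that the savepoint
# metadata is missing; -u resubmits the original statement non-interactively.
CMD="flink run -s $SAVEPOINT -c org.apache.flink.table.client.SqlClient $SQL_CLIENT_JAR embedded -e $ENV_FILE --library $LIB_DIR -u \"$QUERY\""
echo "$CMD"
```

The script only echoes the assembled command so the pieces are easy to inspect; in practice you would execute it directly against a running cluster.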

Answer 1: (score: 0)

Confirmed working with Flink 1.12.1 and Scala 2.12: flink run -s hdfs://dbt1caw005.webex.com:9000/flink-checkpoints/savepoint-dafd7c-05d66b098493 -C file:///opt/flink/jars/flink-python_2.12-1.12.1.jar -c org.apache.flink.table.client.SqlClient /opt/flink/opt/flink-sql-client_2.12-1.12.1.jar embedded -e /vdb/sql.yml -l /opt/flink/jars -u "INSERT INTO CALL_DURATION_USER SELECT orgId, userId, window_start, window_end, total_minutes, total_calls FROM ( SELECT *, ROW_NUMBER() OVER (PARTITION BY orgId, window_end ORDER BY total_minutes desc) AS rownum FROM ( SELECT orgId, userId, HOP_START(ts, INTERVAL '1' DAY, INTERVAL '30' DAY) window_start, HOP_END(ts, INTERVAL '1' DAY, INTERVAL '30' DAY) window_end, CAST(sum(cast(legDuration as bigint)/60) AS BIGINT) total_minutes, CAST(count(*) AS BIGINT) total_calls FROM callduration_ts GROUP BY HOP(ts, INTERVAL '1' DAY, INTERVAL '30' DAY), orgId, userId ) ) WHERE rownum < 101"
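As an aside not covered by either answer: from Flink 1.13 onward the SQL client can restore from a savepoint directly via the `execution.savepoint.path` option, so the detour through `flink run -c org.apache.flink.table.client.SqlClient` is no longer needed. A sketch of that approach, with a placeholder savepoint path and the original query elided:

```sql
-- Flink 1.13+ SQL client session; the savepoint path is a placeholder.
SET 'execution.savepoint.path' = 'hdfs:///flink-savepoints/savepoint-xxxx';

-- Resubmit the original statement; the new job starts from the savepoint state.
INSERT INTO CALL_DURATION_USER SELECT ...;
```

This is a configuration fragment rather than a runnable script; it assumes you still have the original SQL statement and environment at hand, which is exactly what the savepoint metadata does not preserve.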