Say I have a Kafka Connect worker that I built with Docker from the confluentinc/cp-kafka-connect
image, deployed to a server, and spun up. Most of the time the connector will already exist, because I previously created it via a REST API call (a POST on port 8083). But how can I create the connector from a script at worker startup, if it doesn't already exist? And can I somehow keep the worker running after startup?
Answer 0: (score: 1)
This requires overriding the container's command.
Original issue: https://github.com/confluentinc/cp-docker-images/issues/467
Solution:
volumes:
  - $PWD/scripts:/scripts # TODO: Create this folder ahead of time, on your host
command:
  - bash
  - -c
  - |
    /etc/confluent/docker/run &
    echo "Waiting for Kafka Connect to start listening on kafka-connect ⏳"
    while [ $$(curl -s -o /dev/null -w %{http_code} http://kafka-connect:8083/connectors) -eq 000 ] ; do
      echo -e $$(date) " Kafka Connect listener HTTP state: " $$(curl -s -o /dev/null -w %{http_code} http://kafka-connect:8083/connectors) " (waiting for 200)"
      sleep 5
    done
    nc -vz kafka-connect 8083
    echo -e "\n--\n+> Creating Kafka Connector(s)"
    /scripts/create-connectors.sh # Note: This script is stored externally from container
    sleep infinity
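The answer references an external /scripts/create-connectors.sh but doesn't show its contents. A minimal sketch might look like the following; the connector name, connector class, and the is_ready helper are illustrative assumptions, not part of the original answer. Using PUT against /connectors/<name>/config makes the call idempotent: it creates the connector if it's missing and updates its config otherwise, so the script is safe to run on every startup.

```shell
#!/bin/bash
# create-connectors.sh — hypothetical sketch; adjust names and config to your setup
CONNECT_URL="${CONNECT_URL:-http://kafka-connect:8083}"

# is_ready STATUS: succeed only when STATUS is 200 (worker REST API is up)
is_ready() {
  [ "$1" -eq 200 ]
}

# create_connector NAME CONFIG_JSON: PUT is create-or-update, so re-running is safe
create_connector() {
  curl -s -X PUT -H "Content-Type: application/json" \
    "$CONNECT_URL/connectors/$1/config" -d "$2"
}

# Only touch the network when invoked with "apply", so the helpers can be sourced/tested
if [ "${1:-}" = "apply" ]; then
  create_connector my-source-connector '{
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "file": "/tmp/input.txt",
    "topic": "my-topic",
    "tasks.max": "1"
  }'
fi
```

Invoke it from the compose command shown above as /scripts/create-connectors.sh apply.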
Answer 1: (score: 1)
As cricket_007 said, you can either embed a call to a mounted script in the command, or inline it all, like this example. If you do the latter, note that in the command section $ must be written as $$ to avoid the error Invalid interpolation format for "command" option:
kafka-connect-01:
  image: confluentinc/cp-kafka-connect:5.4.0
  […]
  command:
    - bash
    - -c
    - |
      […]
      echo "Launching Kafka Connect worker"
      /etc/confluent/docker/run &
      #
      echo "Waiting for Kafka Connect to start listening on localhost ⏳"
      while : ; do
        curl_status=$$(curl -s -o /dev/null -w %{http_code} http://localhost:8083/connectors)
        echo -e $$(date) " Kafka Connect listener HTTP state: " $$curl_status " (waiting for 200)"
        if [ $$curl_status -eq 200 ] ; then
          break
        fi
        sleep 5
      done
      echo -e "\n--\n+> Creating Data Generator source"
      curl -s -X PUT -H "Content-Type:application/json" http://localhost:8083/connectors/source-datagen-01/config \
          -d '{
            "connector.class": "io.confluent.kafka.connect.datagen.DatagenConnector",
            "key.converter": "org.apache.kafka.connect.storage.StringConverter",
            "kafka.topic": "ratings",
            "max.interval": 750,
            "quickstart": "ratings",
            "tasks.max": 1
          }'
      sleep infinity
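If you want the startup script to first check which connectors already exist, you can GET /connectors, which returns a JSON array of names. A small parsing helper is sketched below; it is illustrative and deliberately naive (it assumes simple connector names with no embedded quotes or commas), so prefer jq in production:

```shell
# list_connector_names JSON: convert the JSON array returned by GET /connectors
# (e.g. '["source-datagen-01","sink-foo"]') into one name per line.
# Naive parsing — assumes simple names; use jq for anything more complex.
list_connector_names() {
  tr -d '[]" ' <<< "$1" | tr ',' '\n'
}

# Typical use against a live worker (hypothetical connector name):
#   names=$(curl -s http://localhost:8083/connectors)
#   list_connector_names "$names" | grep -qx source-datagen-01 && echo "already exists"
```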
Answer 2: (score: 0)
If you use a tool like Ansible for automation, this configuration may be useful:
- hosts: kafka-connect-docker
  name: deploy kafka connect cluster
  become: yes
  gather_facts: yes
  serial: '{{ serial|default(1) }}'
  tasks:
    # it's not a fully working example
    ...
    - name: run container
      notify: wait ports
      docker_container:
        name: kafka-connect
        image: "{{ docker_registry }}/kafka-connect:2.4.0-1.3.0"
        entrypoint: ["sh", "-c", "'exec /opt/kafka/bin/connect-distributed.sh /etc/kafka-connect/connect-distributed.properties >> /var/log/kafka-connect/stderrout.log 2>&1'"]
        restart_policy: always
        network_mode: host
        state: started
    - name: call wait ports
      command: /bin/true
      notify: wait ports
  handlers:
    - name: restart container
      shell: docker restart kafka-connect
      notify: wait ports
    - name: wait ports
      wait_for: port=10900 timeout=300 host=127.0.0.1
      changed_when: True
      notify: check cluster status
    - name: check cluster status
      uri:
        url: "http://127.0.0.1:10900/connectors"
        status_code: 200
      register: cluster_status_json_response
      until: cluster_status_json_response.status == 200
      retries: 60
      delay: 5

- hosts: kafka-connect-docker[0]
  name: deploy connectors configs
  become: yes
  tasks:
    - name: restore connectors configs
      uri:
        url: "http://127.0.0.1:10900/connectors/{{ item }}/config"
        method: PUT
        return_content: yes
        body_format: json
        headers:
          Accept: "application/json"
          Content-Type: "application/json"
        body: "{{ lookup('template', 'roles/kafka-connect/templates/etc/kafka-connect/tasks/' + item + '.json') }}"
        status_code: 200, 201
        timeout: 60
      with_items: "{{ connector_configs }}"
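The lookup('template', …) call above expects one Jinja-templated JSON file per name in connector_configs, stored under roles/kafka-connect/templates/etc/kafka-connect/tasks/. A hypothetical my-sink.json template might look like this (the connector class, variables, and defaults are illustrative, not from the original answer):

```json
{
  "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
  "topics": "{{ sink_topics | default('my-topic') }}",
  "file": "/var/log/sink-output.txt",
  "tasks.max": "1"
}
```

Because the play issues a PUT to /connectors/{{ item }}/config, re-running it converges each connector to the templated config, whether or not it already exists.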