After updating the isolation attribute of the agent (mesos-slave), it is unable to re-register:
6868 status_update_manager.cpp:177] Pausing sending status updates
6877 slave.cpp:915] New master detected at master@192.168.1.1:5050
6867 status_update_manager.cpp:177] Pausing sending status updates
6877 slave.cpp:936] No credentials provided. Attempting to register without authentication
6877 slave.cpp:947] Detecting new master
6869 slave.cpp:1217] Re-registered with master master@192.168.1.1:5050
6866 status_update_manager.cpp:184] Resuming sending status updates
6869 slave.cpp:1253] Forwarding total oversubscribed resources {}
6874 slave.cpp:4141] Master marked the agent as disconnected but the agent considers itself registered! Forcing re-registration.
6874 slave.cpp:904] Re-detecting master
6874 slave.cpp:947] Detecting new master
6874 status_update_manager.cpp:177] Pausing sending status updates
6869 status_update_manager.cpp:177] Pausing sending status updates
6871 slave.cpp:915] New master detected at master@192.168.1.1:5050
6871 slave.cpp:936] No credentials provided. Attempting to register without authentication
6871 slave.cpp:947] Detecting new master
6872 slave.cpp:1217] Re-registered with master master@192.168.1.1:5050
6872 slave.cpp:1253] Forwarding total oversubscribed resources {}
6871 status_update_manager.cpp:184] Resuming sending status updates
6871 slave.cpp:4141] Master marked the agent as disconnected but the agent considers itself registered! Forcing re-registration.
It seems to be stuck in an infinite loop. Any idea how to start a fresh slave? I tried deleting the work_dir and restarting the mesos-slave process, but without success.
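To narrow down which work_dir the agent is actually using, something along these lines can help (a minimal sketch; the paths assume a packaged install with flag files under /etc/mesos-slave, and /tmp/mesos only matters because it is the Mesos default when no work_dir is configured):
cat /etc/mesos-slave/work_dir                  # configured work_dir, if set via a flag file
ps aux | grep mesos-slave                      # look for an explicit --work_dir=... on the running process
ls -d /tmp/mesos /var/lib/mesos 2>/dev/null    # check whether two work_dirs exist side by side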
This situation was caused by an accidental rename of the work_dir. After restarting mesos-slave, it could neither reconnect nor kill the running tasks. I tried using cleanup recovery on the slave:
echo 'cleanup' > /etc/mesos-slave/recover
service mesos-slave restart
# after recovery finishes
rm /etc/mesos-slave/recover
service mesos-slave restart
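The "after recovery finishes" step can be verified in the agent log before removing the recover flag (the log path is an assumption for a packaged install that sets --log_dir=/var/log/mesos):
tail -f /var/log/mesos/mesos-slave.INFO | grep -i recover
# wait for a message indicating recovery has completed, then remove /etc/mesos-slave/recover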
This partially helped, but there are still many zombie tasks in Marathon, because the Mesos master cannot retrieve any information about them. When I look at the metrics, I see that some slaves are marked as "inactive".
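For reference, those active/inactive counts come from the master's metrics endpoint; a quick way to inspect them, assuming the master from the logs above listens on 192.168.1.1:5050:
curl -s http://192.168.1.1:5050/metrics/snapshot | python -m json.tool | grep slaves
# relevant keys include master/slaves_active, master/slaves_inactive and master/slaves_disconnected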
Update from the master logs, which show the following:
之后Cannot kill task service_mesos-kafka_kafka.e0e3e128-ef0e-11e6-af93-fead7f32c37c
of framework ecd3a4be-d34c-46f3-b358-c4e26ac0d131-0000 (marathon) at
scheduler-e76665b1-de85-48a3-b9fd-5e736b64a9d8@192.168.1.10:52192
because the agent cac09818-0d75-46a9-acb1-4e17fdb9e328-S10 at
slave(1)@192.168.1.1:5051 (w10.example.net) is disconnected.
Kill will be retried if the agent re-registers
After restarting the current mesos-master:
Cannot kill task service_mesos-kafka_kafka.e0e3e128-ef0e-11e6-af93-fead7f32c37c
of framework ecd3a4be-d34c-46f3-b358-c4e26ac0d131-0000 (marathon)
at scheduler-9e9753be-99ae-40a6-ab2f-ad7834126c33@192.168.1.10:39972
because it is unknown; performing reconciliation
Performing explicit task state reconciliation for 1 tasks
of framework ecd3a4be-d34c-46f3-b358-c4e26ac0d131-0000 (marathon)
at scheduler-9e9753be-99ae-40a6-ab2f-ad7834126c33@192.168.1.10:39972
Dropping reconciliation of task service_mesos-kafka_kafka.e0e3e128-ef0e-11e6-af93-fead7f32c37c
for framework ecd3a4be-d34c-46f3-b358-c4e26ac0d131-0000 (marathon)
at scheduler-9e9753be-99ae-40a6-ab2f-ad7834126c33@192.168.1.10:39972
because there are transitional agents
Answer 0 (score: 1):
The split-brain situation was caused by multiple work_dirs. In most cases it is enough to move the data out of the wrong work_dir:
mv /tmp/mesos/slaves/* /var/lib/mesos/slaves/
Then force a re-registration:
rm -rf /var/lib/mesos/meta/slaves/latest
service mesos-slave restart
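Whether the agent came back with a fresh ID can be checked against the master's /slaves endpoint (a sketch, assuming the master from the question at 192.168.1.1:5050):
curl -s http://192.168.1.1:5050/slaves | python -m json.tool | grep -E '"id"|"hostname"|"active"'
# the re-registered agent should show up with a new agent ID and "active": true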
The currently running tasks will not survive (they cannot be recovered). The tasks of the old executors should be marked as TASK_LOST and scheduled for cleanup. This avoids the problem of zombie tasks that Mesos cannot kill (since they are running in a different work_dir).
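If some zombie entries still linger on the Marathon side after the agents re-register, they can usually be removed through Marathon's v2 task-delete API; a sketch, assuming Marathon runs on the scheduler host from the logs (192.168.1.10) on its default port 8080, and using the task ID from the logs as an example:
curl -X POST -H 'Content-Type: application/json' \
     -d '{"ids": ["service_mesos-kafka_kafka.e0e3e128-ef0e-11e6-af93-fead7f32c37c"]}' \
     'http://192.168.1.10:8080/v2/tasks/delete?wipe=true'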
If the mesos-slave is still registered as inactive, restart the current Mesos master.
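For example, on the node currently running the leading master (assuming the same packaged init scripts as for the slave):
service mesos-master restart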