puppet notify xinetd does not reload the xinetd service

Asked: 2016-05-25 10:04:34

Tags: service puppet init.d xinetd check-mk

I am trying to install the check_mk agent via Puppet on a Debian 7 server, using the standard check_mk xinetd configuration file.

check_mk installs without problems, but I am running into trouble with the xinetd configuration.

When I change the port in the source configuration file on the Puppet master and run `puppet agent -t` on the client host, the new configuration is deployed correctly, but Puppet does not reload the xinetd service, because the system can't recognize the status of the xinetd service.

The Puppet manifest looks like this:

    class basic::check-mk {
      case $operatingsystem {
        debian: {
          package { 'check-mk-agent':
            ensure => present,
          }

          file { '/etc/xinetd.d/check_mk':
            notify => Service['xinetd'],
            ensure => file,
            source => 'puppet:///modules/basic/etc--xinetd--checkmk',
            mode   => '0644',
          }

          service { 'xinetd':
            ensure  => running,
            enable  => true,
            restart => '/etc/init.d/xinetd reload',
          }
        }
      }
    }

The debug output looks like this:

    info: Applying configuration version '1464186485'
    debug: /Stage[main]/Ntp::Config/notify: subscribes to Class[Ntp::Service]
    debug: /Stage[main]/Ntp/Anchor[ntp::begin]/before: requires Class[Ntp::Install]
    debug: /Stage[main]/basic::Check-mk/Service[xinetd]/subscribe: subscribes to File[/etc/xinetd.d/check_mk]
    debug: /Stage[main]/Ntp::Install/before: requires Class[Ntp::Config]
    debug: /Stage[main]/Ntp::Service/before: requires Anchor[ntp::end]
    debug: /Schedule[daily]: Skipping device resources because running on a host
    debug: /Schedule[monthly]: Skipping device resources because running on a host
    debug: /Schedule[hourly]: Skipping device resources because running on a host
    debug: Prefetching apt resources for package
    debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n''
    debug: Puppet::Type::Package::ProviderApt: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n''
    debug: /Schedule[never]: Skipping device resources because running on a host
    debug: file_metadata supports formats: b64_zlib_yaml pson raw yaml; using pson
    debug: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]/content: Executing 'diff -u /etc/xinetd.d/check_mk /tmp/puppet-file20160525-10084-1vrr8zf-0'
    notice: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]/content:
    --- /etc/xinetd.d/check_mk      2016-05-25 14:57:26.220873468 +0200
    +++ /tmp/puppet-file20160525-10084-1vrr8zf-0    2016-05-25 16:28:06.393363702 +0200
    @@ -25,7 +25,7 @@
     service check_mk
     {
             type           = UNLISTED
    -        port           = 6556
    +        port           = 6554
             socket_type    = stream
             protocol       = tcp
             wait           = no

    debug: Finishing transaction 70294357735140
    info: FileBucket got a duplicate file {md5}cb0264ad1863ee2b3749bd3621cdbdd0
    info: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]: Filebucketed /etc/xinetd.d/check_mk to puppet with sum cb0264ad1863ee2b3749bd3621cdbdd0
    notice: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]/content: content changed '{md5}cb0264ad1863ee2b3749bd3621cdbdd0' to '{md5}56ac5c1a50c298de4999649b27ef6277'
    debug: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]: The container Class[basic::Check-mk] will propagate my refresh event
    info: /Stage[main]/basic::Check-mk/File[/etc/xinetd.d/check_mk]: Scheduling refresh of Service[xinetd]
    debug: Service[ntp](provider=debian): Executing '/etc/init.d/ntp status'
    debug: Service[xinetd](provider=debian): Executing '/etc/init.d/xinetd status'
    debug: Service[xinetd](provider=debian): Executing '/etc/init.d/xinetd start'
    notice: /Stage[main]/basic::Check-mk/Service[xinetd]/ensure: ensure changed 'stopped' to 'running'
    debug: /Stage[main]/basic::Check-mk/Service[xinetd]: The container Class[basic::Check-mk] will propagate my refresh event
    debug: Service[xinetd](provider=debian): Executing '/etc/init.d/xinetd status'
    debug: /Stage[main]/basic::Check-mk/Service[xinetd]: Skipping restart; service is not running
    notice: /Stage[main]/basic::Check-mk/Service[xinetd]: Triggered 'refresh' from 1 events
    debug: /Stage[main]/basic::Check-mk/Service[xinetd]: The container Class[basic::Check-mk] will propagate my refresh event
    debug: Class[basic::Check-mk]: The container Stage[main] will propagate my refresh event
    debug: /Schedule[weekly]: Skipping device resources because running on a host
    debug: /Schedule[puppet]: Skipping device resources because running on a host
    debug: Finishing transaction 70294346109840
    debug: Storing state
    debug: Stored state in 0.01 seconds
    notice: Finished catalog run in 1.43 seconds
    debug: Executing '/etc/puppet/etckeeper-commit-post'
    debug: report supports formats: b64_zlib_yaml pson raw yaml; using pson

The following line looks suspicious to me:

    debug: /Stage[main]/basic::Check-mk/Service[xinetd]: Skipping restart; service is not running

`service --status-all` shows `[ ? ]  xinetd`. Why can't the system recognize the status of the service?

2 Answers:

Answer 0 (score: 1):

Your debug log and the output of your manual `service` command show that your xinetd initscript does not have a working `status` subcommand. Puppet therefore does not know how (or whether) to manage its run state.

You could consider fixing the initscript to recognize the `status` subcommand and respond in an LSB-compliant manner (or at least to exit with code 0 if the service is running, and anything else otherwise). Alternatively, you could add a `status` attribute to the Service resource, supplying an alternative command Puppet can use to determine the service's run state. (I've linked to the current documentation, but I'm fairly sure Service has had that attribute since before Puppet 2.7.)
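For illustration, here is a minimal sketch of that second option. The `pidof xinetd` check is just an example, not the asker's code; any command that exits 0 while the service is running (and non-zero otherwise) would do:

    service { 'xinetd':
      ensure  => running,
      enable  => true,
      restart => '/etc/init.d/xinetd reload',
      # example status check: exits 0 only when an xinetd process exists
      status  => 'pidof xinetd',
    }

With an explicit status command, Puppet no longer depends on the broken initscript to decide whether xinetd is running, so the refresh scheduled by the File resource results in a reload instead of being skipped.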

Answer 1 (score: 0):

Solved: to fix this, I had to add a status section to the init.d script of xinetd. After that, both `service xinetd status` and Puppet were able to recognize the status of the service. The added section looks like this:

    status)
        if pidof xinetd > /dev/null
        then
          echo "xinetd is running."
          exit 0
        else
          echo "xinetd is NOT running."
          exit 1
        fi
    ;;

I also added the status option to the usage line:

    *)
        echo "Usage: /etc/init.d/xinetd {start|stop|reload|force-reload|restart|status}"
        exit 1
        ;;
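After these changes, the fix can be verified from the shell; a quick sanity check, assuming the Debian sysvinit environment from the question:

    # the exit code is what Puppet keys on: 0 means running
    service xinetd status
    echo $?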

That solved the problem.