Sigar UnsatisfiedLinkError with an Apache Storm topology

Asked: 2019-03-13 15:37:25

Tags: java apache-storm sigar

I am deploying a Storm topology on a machine configured as a single local cluster. I have configured conf/storm.yaml to use storm.scheduler: "org.apache.storm.scheduler.resource.ResourceAwareScheduler". The topology deploys successfully, but I get an error from the Sigar library saying it cannot obtain the process ID needed to use CPUMetric on the topology. This is my topology's configuration for collecting the metrics:

import java.util.HashMap;
import java.util.Map;
import org.apache.storm.Config;
import org.apache.storm.metric.LoggingMetricsConsumer;

config.registerMetricsConsumer(LoggingMetricsConsumer.class);
Map<String, String> workerMetrics = new HashMap<String, String>();
workerMetrics.put("CPU", "org.apache.storm.metrics.sigar.CPUMetric");
config.put(Config.TOPOLOGY_WORKER_METRICS, workerMetrics);

I have copied the sigar-1.6.4.jar and storm-metrics-1.2.2.jar libraries into the apache-storm/lib folder. This is the error:

2019-03-13 16:24:45.920 o.a.s.util Thread-10-__system-executor[-1 -1] [ERROR] Async loop died!
java.lang.UnsatisfiedLinkError: org.hyperic.sigar.Sigar.getPid()J
    at org.hyperic.sigar.Sigar.getPid(Native Method) ~[sigar-1.6.4.jar:?]
    at org.apache.storm.metrics.sigar.CPUMetric.<init>(CPUMetric.java:38) ~[storm-metrics-1.2.2.jar:1.2.2]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_191]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_191]
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_191]
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_191]
    at java.lang.Class.newInstance(Class.java:442) ~[?:1.8.0_191]
    at org.apache.storm.utils.Utils.newInstanceImpl(Utils.java:198) ~[storm-core-1.2.2.jar:1.2.2]
    at org.apache.storm.utils.Utils.newInstance(Utils.java:192) ~[storm-core-1.2.2.jar:1.2.2]
    at org.apache.storm.utils.Utils.newInstance(Utils.java:185) ~[storm-core-1.2.2.jar:1.2.2]
    at org.apache.storm.metric.SystemBolt.registerMetrics(SystemBolt.java:150) ~[storm-core-1.2.2.jar:1.2.2]
    at org.apache.storm.metric.SystemBolt.prepare(SystemBolt.java:143) ~[storm-core-1.2.2.jar:1.2.2]
    at org.apache.storm.daemon.executor$fn__10795$fn__10808.invoke(executor.clj:803) ~[storm-core-1.2.2.jar:1.2.2]
    at org.apache.storm.util$async_loop$fn__553.invoke(util.clj:482) [storm-core-1.2.2.jar:1.2.2]
    at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]

1 answer:

Answer 0 (score: 1)

For some reason the native part of Sigar is not on the worker's classpath. The native libraries are located in the resources directory inside storm-metrics.jar.
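If the Sigar natives (libsigar-*.so/.dylib/.dll) have been unpacked somewhere on disk, one possible workaround is to make that location visible via Storm's java.library.path setting in storm.yaml. This is a sketch, not the answer author's suggestion, and /opt/sigar/native is a made-up path; substitute wherever the native files actually live:

```yaml
# storm.yaml — prepend the default library locations, then the
# (hypothetical) directory holding the Sigar native libraries
java.library.path: "/usr/local/lib:/opt/local/lib:/usr/lib:/opt/sigar/native"
```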

Why are you copying storm-metrics into storm/lib manually? The easiest way to make sure your topology's dependencies are present is to build a fat jar with the maven-shade-plugin. Take a look at how storm-starter does it: https://github.com/apache/storm/blob/master/examples/storm-starter/pom.xml#L153
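For reference, a minimal maven-shade-plugin sketch of the kind of setup the storm-starter pom uses; the version number here is illustrative, and the linked storm-starter pom.xml is the authoritative configuration:

```xml
<!-- In the topology project's pom.xml, under <build><plugins> -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- Merge META-INF/services entries from all shaded jars -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Running `mvn package` then produces a single jar containing the topology plus its dependencies (including storm-metrics, if declared), which you submit with `storm jar`.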

I would check the worker's log to verify whether the storm-metrics.jar file is on the worker process's classpath. The worker prints its classpath very early during startup, and so does the supervisor.
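To make the check concrete, here is a small grep sketch. The log line below is made up for illustration (real worker logs live under the Storm log directory, e.g. logs/workers-artifacts/&lt;topology&gt;/&lt;port&gt;/worker.log, and you would grep that file instead):

```shell
# Hypothetical classpath line of the kind a worker prints at startup:
line='o.a.s.d.worker [INFO] classpath: /opt/storm/lib/storm-core-1.2.2.jar:/opt/storm/lib/storm-metrics-1.2.2.jar'

# Extract any storm-metrics jar from the colon-separated classpath;
# no output would mean the jar never reached the worker.
echo "$line" | grep -o 'storm-metrics[^:/]*\.jar'
```

Against a real installation you would run the same grep over the worker log file rather than a shell variable.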

You mention that you are running on a single local cluster. I'm not sure whether you mean a single-node installation of Storm, or whether you are using LocalCluster (or the equivalent command). If you are using LocalCluster, you need to add storm-metrics as a dependency of your topology project.
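Declaring that dependency is a one-stanza change in the topology's pom.xml; the version shown matches the 1.2.2 jars in the stack trace and should track whatever Storm version you actually run:

```xml
<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-metrics</artifactId>
  <version>1.2.2</version>
</dependency>
```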