We keep getting the following errors:
Caused by: net.opentsdb.uid.NoSuchUniqueName: No such name for 'metrics'
    at net.opentsdb.uid.UniqueId$1GetIdCB.call(UniqueId.java:450) ~[tsdb-2.4.0.jar:]
    at net.opentsdb.uid.UniqueId$1GetIdCB.call(UniqueId.java:447) ~[tsdb-2.4.0.jar:]
    ... 34 common frames omitted

ERROR [AsyncHBase I/O Worker #13] UniqueId: Failed attempt #1 to assign an UID for metric:test at step #2
org.hbase.async.RemoteException: org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil$ClassLoaderHolder
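For reference, the failing UID lookup can also be checked by hand with the tsdb uid tool. This is only a rough sketch: it assumes we run it from the OpenTSDB install directory (so build/tsdb, as mentioned further down), that it picks up the same configuration, and it uses the metric name "test" from the put command below:

./build/tsdb uid grep metrics test     # list metric UIDs matching "test"
./build/tsdb uid assign metrics test   # manually assign a UID as a workaround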
I have seen web pages saying this error usually shows up when some of the following three parameters are missing:
tsd.core.auto_create_metrics = true
tsd.core.auto_create_tagks = true
tsd.core.auto_create_tagvs = true
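One quick way to confirm whether the running TSD actually picked these flags up is the /api/config endpoint, which returns the active configuration as JSON. A small sketch, assuming the same host and port as the put command below (OpenTSDB serves HTTP on the same port as the telnet-style interface):

curl http://192.168.150.101:4243/api/config
# the response should list tsd.core.auto_create_metrics,
# tsd.core.auto_create_tagks and tsd.core.auto_create_tagvs as "true"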
This is how we are sending data to OpenTSDB:
echo "put test 1548838714 1 tag1=1" | nc 192.168.150.101 4243
We also noticed that the echo command sometimes fails if OpenTSDB is started with build/tsdb tsd instead of via /etc/init.d/opentsdb (i.e. with service opentsdb start).
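In case it helps narrow things down, the same data point can also be written over the HTTP API instead of the telnet-style put. This is just a sketch using the values from the echo command above; appending ?details makes the TSD report per-datapoint errors in the response:

curl -i -X POST 'http://192.168.150.101:4243/api/put?details' \
  -H 'Content-Type: application/json' \
  -d '{"metric":"test","timestamp":1548838714,"value":1,"tags":{"tag1":"1"}}'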
This is the configuration file:
# --------- NETWORK ----------
# The TCP port TSD should use for communications
# *** REQUIRED ***
tsd.network.port = 4243
# The IPv4 network address to bind to, defaults to all addresses
# tsd.network.bind = 0.0.0.0
# Enables TCP_NODELAY, i.e. disables Nagle's algorithm so writes are sent
# without buffering delay, default is True
#tsd.network.tcpnodelay = true
# Determines whether or not to send keepalive packets to peers, default
# is True
#tsd.network.keepalive = true
# Determines if the same socket should be used for new connections, default
# is True
#tsd.network.reuseaddress = true
# Number of worker threads dedicated to Netty, defaults to # of CPUs * 2
#tsd.network.worker_threads = 8
# Whether or not to use NIO or traditional blocking IO, defaults to True
#tsd.network.async_io = true
# ----------- HTTP -----------
# The location of static files for the HTTP GUI interface.
# *** REQUIRED ***
tsd.http.staticroot = /opt/opentsdb-2.4.0/build/staticroot/
# Where TSD should write its cache files to
# *** REQUIRED ***
tsd.http.cachedir = /opt/opentsdb-2.4.0/build/CACHE
# --------- CORE ----------
# Whether or not to automatically create UIDs for new metric types, default
# is False
tsd.core.auto_create_metrics = true
# --------- STORAGE ----------
# Whether or not to enable data compaction in HBase, default is True
#tsd.storage.enable_compaction = true
# How often, in milliseconds, to flush the data point queue to storage,
# default is 1,000
# tsd.storage.flush_interval = 1000
# Name of the HBase table where data points are stored, default is "tsdb"
tsd.storage.hbase.data_table = tsdb
# Name of the HBase table where UID information is stored, default is "tsdb-uid"
tsd.storage.hbase.uid_table = tsdb-uid
# Path under which the znode for the -ROOT- region is located, default is "/hbase"
tsd.storage.hbase.zk_basedir = /hbase-unsecure
# A comma separated list of Zookeeper hosts to connect to, with or without
# port specifiers, default is "localhost"
#tsd.storage.hbase.zk_quorum = localhost
tsd.storage.hbase.zk_quorum = namenode1.local,namenode2.local
tsd.http.request.enable_chunked = true
tsd.http.request.max_chunk = 16000
tsd.storage.fix_duplicates = true
tsd.storage.max_tags = 45
tsd.storage.uid.width.metric = 4
tsd.storage.uid.width.tagk = 4
tsd.storage.uid.width.tagv = 4
tsd.core.uid.random_metrics = true
tsd.core.auto_create_tagks = true
tsd.core.auto_create_tagvs = true