How to configure elasticsearch.yml for the elasticsearch repository-hdfs plugin

Date: 2016-05-06 02:53:01

Tags: elasticsearch elasticsearch-plugin

elasticsearch 2.3.2

repository-hdfs 2.3.1

I configured the elasticsearch.yml file as in the elastic official documentation:

repositories
   hdfs:
      uri: "hdfs://<host>:<port>/"    # optional - Hadoop file-system URI
      path: "some/path"               # required - path with the file-system where data is stored/loaded
      load_defaults: "true"           # optional - whether to load the default Hadoop configuration (default) or not
      conf_location: "extra-cfg.xml"  # optional - Hadoop configuration XML to be loaded (use commas for multi values)
      conf.<key> : "<value>"          # optional - 'inlined' key=value added to the Hadoop configuration
      concurrent_streams: 5           # optional - the number of concurrent streams (defaults to 5)
      compress: "false"               # optional - whether to compress the metadata or not (default)
      chunk_size: "10mb"              # optional - chunk size (disabled by default)

But it raised an exception complaining that the format is incorrect.

The error message:

Exception in thread "main" SettingsException
 [Failed to load settings from [elasticsearch.yml]]; nested: ScannerException[while scanning a simple key'

 in 'reader', line 99, column 2:
     repositories
     ^
 could not find expected ':'
 in 'reader', line 100, column 10:
     hdfs:
         ^];   
Likely root cause: while scanning a simple key
in 'reader', line 99, column 2:
 repositories
 ^
could not find expected ':'
in 'reader', line 100, column 10:
     hdfs:
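
The ScannerException above is an ordinary YAML parse error: `repositories` on its line is missing the trailing colon, so the parser cannot treat it as a mapping key. A minimal reproduction with PyYAML (a different parser from the SnakeYAML bundled with elasticsearch, but it rejects the same input; host and port below are placeholders):

```python
import yaml  # PyYAML

# As pasted: "repositories" has no trailing ':', so it is not a mapping key
# and the indented "hdfs:" underneath cannot be attached to anything.
broken = """\
repositories
   hdfs:
      uri: "hdfs://localhost:9000/"
"""

# With the colon, the same text parses into a nested mapping.
fixed = """\
repositories:
   hdfs:
      uri: "hdfs://localhost:9000/"
"""

try:
    yaml.safe_load(broken)
    broken_parses = True
except yaml.YAMLError:
    broken_parses = False

print(broken_parses)          # False
print(yaml.safe_load(fixed))  # {'repositories': {'hdfs': {'uri': 'hdfs://localhost:9000/'}}}
```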

I edited it to:

   repositories:
       hdfs:
         uri: "hdfs://191.168.4.220:9600/"

But it doesn't work.

I want to know what the correct format is.

I found the AWS configuration for elasticsearch.yml:

cloud:
    aws:
        access_key: AKVAIQBF2RECL7FJWGJQ
        secret_key: vExyMThREXeRMm/b/LRzEB8jWwvzQeXgjqMX+6br

repositories:
    s3:
        bucket: "bucket_name"
        region: "us-west-2"
        private-bucket:
            bucket: <bucket not accessible by default key>
            access_key: <access key>
            secret_key: <secret key>
        remote-bucket:
            bucket: <bucket in other region>
            region: <region>
    external-bucket:
        bucket: <bucket>
        access_key: <access key>
        secret_key: <secret key>
        endpoint: <endpoint>
        protocol: <protocol>

I imitated it, but it still doesn't work.

1 Answer:

Answer 0 (score: 1):

I tried to install repository-hdfs 2.3.1 on elasticsearch 2.3.2, but it failed:

ERROR: Plugin [repository-hdfs] is incompatible with Elasticsearch [2.3.2]. Was designed for version [2.3.1]

This plugin can only be installed on elasticsearch 2.3.1.

You should specify the uri, path, and conf_location options, and possibly remove the conf.<key> option. Take the following configuration as an example.

security.manager.enabled: false
repositories.hdfs:
    uri: "hdfs://master:9000"       # optional - Hadoop file-system URI
    path: "/aaa/bbb"                # required - path with the file-system where data is stored/loaded
    load_defaults: "true"           # optional - whether to load the default Hadoop configuration (default) or not
    conf_location: "/home/ec2-user/app/hadoop-2.6.3/etc/hadoop/core-site.xml,/home/ec2-user/app/hadoop-2.6.3/etc/hadoop/hdfs-site.xml"  # optional - Hadoop configuration XML to be loaded (use commas for multi values)
    concurrent_streams: 5           # optional - the number of concurrent streams (defaults to 5)
    compress: "false"               # optional - whether to compress the metadata or not (default)
    chunk_size: "10mb"              # optional - chunk size (disabled by default)
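
Note that this configuration uses the flattened `repositories.hdfs:` prefix rather than nesting `hdfs:` under `repositories:`. Elasticsearch reads dotted keys and nested YAML in elasticsearch.yml as equivalent, so the following two fragments (a sketch with the placeholder values from above) produce the same settings:

```yaml
# Flattened form, as in the configuration above
repositories.hdfs:
    uri: "hdfs://master:9000"
    path: "/aaa/bbb"
---
# Equivalent nested form
repositories:
    hdfs:
        uri: "hdfs://master:9000"
        path: "/aaa/bbb"
```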

It started successfully:

[----@----------- elasticsearch-2.3.1]$ bin/elasticsearch
[2016-05-06 04:40:58,173][INFO ][node                     ] [Protector]     version[2.3.1], pid[17641], build[bd98092/2016-04-04T12:25:05Z]
[2016-05-06 04:40:58,174][INFO ][node                     ] [Protector]     initializing ...
[2016-05-06 04:40:58,830][INFO ][plugins                  ] [Protector] modules [reindex, lang-expression, lang-groovy], plugins [repository-hdfs], sites []
[2016-05-06 04:40:58,863][INFO ][env                      ] [Protector] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [8gb], net total_space [9.9gb], spins? [unknown], types [rootfs]
[2016-05-06 04:40:58,863][INFO ][env                      ] [Protector] heap size [1007.3mb], compressed ordinary object pointers [true]
[2016-05-06 04:40:58,863][WARN ][env                      ] [Protector] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-05-06 04:40:59,192][INFO ][plugin.hadoop.hdfs       ] Loaded Hadoop     [1.2.1] libraries from file:/home/ec2-user/app/elasticsearch-2.3.1/plugins/repository-hdfs/
[2016-05-06 04:41:01,598][INFO ][node                     ] [Protector] initialized
[2016-05-06 04:41:01,598][INFO ][node                     ] [Protector] starting ...
[2016-05-06 04:41:01,823][INFO ][transport                ] [Protector] publish_address {xxxxxxxxx:9300}, bound_addresses {xxxxxxx:9300}
[2016-05-06 04:41:01,830][INFO ][discovery                ] [Protector] hdfs/9H8wli0oR3-Zp-M9ZFhNUQ
[2016-05-06 04:41:04,886][INFO ][cluster.service          ] [Protector] new_master {Protector}{9H8wli0oR3-Zp-M9ZFhNUQ}{xxxxxxx}{xxxxx:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-05-06 04:41:04,908][INFO ][http                     ] [Protector] publish_address {xxxxxxxxx:9200}, bound_addresses {xxxxxxx:9200}
[2016-05-06 04:41:04,908][INFO ][node                     ] [Protector] started
[2016-05-06 04:41:05,415][INFO ][gateway                  ] [Protector] recovered [1] indices into cluster_state
[2016-05-06 04:41:06,097][INFO ][cluster.routing.allocation] [Protector] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[website][0], [website][0]] ...]).

However, when I tried to create a snapshot:

PUT /_snapshot/my_backup
{
  "type": "hdfs",
  "settings": {
        "path":"/aaa/bbb/"
  }
}

I got the following error:

Caused by: java.io.IOException: Mkdirs failed to create file:/aaa/bbb/tests-zTkKRtoZTLu3m3RLascc1w
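
The `file:` scheme in that path suggests Hadoop fell back to the local filesystem instead of HDFS, which typically means the repository did not pick up an HDFS URI. The repository-hdfs plugin also accepts a `uri` in the repository settings, so one thing to try (a sketch, not verified on 2.3.1; `master:9000` is the placeholder from the config above) is stating it explicitly when registering the repository:

```json
PUT /_snapshot/my_backup
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://master:9000",
    "path": "/aaa/bbb/"
  }
}
```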