Indexing PDF files into the AWS Elasticsearch Service with the Elasticsearch File System Crawler

Date: 2018-05-21 14:26:14

Tags: elasticsearch aws-elasticsearch

I can index PDF files into a local Elasticsearch instance using the Elasticsearch File System Crawler (FSCrawler). The default FSCrawler settings include the port, host, and scheme parameters, as shown below.

{
  "name" : "job_name2",
  "fs" : {
    "url" : "/tmp/es",
    "update_rate" : "15m",
    "excludes" : [ "~*" ],
    "json_support" : false,
    "filename_as_id" : false,
    "add_filesize" : true,
    "remove_deleted" : true,
    "add_as_inner_object" : false,
    "store_source" : false,
    "index_content" : true,
    "attributes_support" : false,
    "raw_metadata" : true,
    "xml_support" : false,
    "index_folders" : true,
    "lang_detect" : false,
    "continue_on_error" : false,
    "pdf_ocr" : true,
    "ocr" : {
      "language" : "eng"
    }
  },
  "elasticsearch" : {
    "nodes" : [ {
      "host" : "127.0.0.1",
      "port" : 9200,
      "scheme" : "HTTP"
    } ],
    "bulk_size" : 100,
    "flush_interval" : "5s"
  },
  "rest" : {
    "scheme" : "HTTP",
    "host" : "127.0.0.1",
    "port" : 8080,
    "endpoint" : "fscrawler"
  }
}

However, I am struggling to use it to index into the AWS Elasticsearch Service, because to index into AWS Elasticsearch I have to provide AWS_ACCESS_KEY, AWS_SECRET_KEY, the region, and the service, as per the documentation here. Any help on how to index PDF files into the AWS Elasticsearch Service would be greatly appreciated.
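
For reference, the reason plain host/port/scheme settings are not enough is that the AWS Elasticsearch Service expects every request to be signed with Signature Version 4 using those four values. Below is a minimal Python sketch of indexing directly with signed requests (not FSCrawler itself); the endpoint, region, credentials, and index name are placeholders, and it assumes the elasticsearch (v7) and requests-aws4auth packages are installed.

# Minimal sketch (not FSCrawler): index a document into the AWS
# Elasticsearch Service using SigV4-signed requests. Endpoint, region,
# credentials and index name are placeholders.
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth

AWS_ACCESS_KEY = "..."
AWS_SECRET_KEY = "..."
REGION = "us-east-1"                                        # placeholder
HOST = "search-mydomain-abc123.us-east-1.es.amazonaws.com"  # placeholder

# Sign every request with the access key, secret key, region and the
# "es" service name -- the four values mentioned above.
awsauth = AWS4Auth(AWS_ACCESS_KEY, AWS_SECRET_KEY, REGION, "es")

es = Elasticsearch(
    hosts=[{"host": HOST, "port": 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)

# Index the text of one PDF (the extraction itself, which FSCrawler
# normally does via Tika, is out of scope for this sketch).
es.index(index="job_name2", body={"content": "text extracted from a pdf"})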

1 Answer:

Answer 0 (score: 0)

FSCrawler does not yet support providing AWS_ACCESS_KEY/AWS_SECRET_KEY. It might be worth opening an issue in the FSCrawler project?
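
One possible workaround in the meantime is to keep FSCrawler pointed at 127.0.0.1:9200, exactly as in the settings above, and run a small local proxy that signs each request with SigV4 before forwarding it to the AWS endpoint (standalone signing proxies exist for the same purpose). A rough, hypothetical Python sketch of such a proxy is shown below; the credentials and endpoint are placeholders, and it assumes the requests and requests-aws4auth packages.

# Hypothetical local signing proxy: FSCrawler talks to 127.0.0.1:9200 as in
# its default settings, and this process re-issues each request against the
# AWS Elasticsearch endpoint with a SigV4 signature. All values below are
# placeholders; this is a sketch, not production code.
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests
from requests_aws4auth import AWS4Auth

AWS_ACCESS_KEY = "..."
AWS_SECRET_KEY = "..."
REGION = "us-east-1"                                                        # placeholder
AWS_ENDPOINT = "https://search-mydomain-abc123.us-east-1.es.amazonaws.com"  # placeholder

awsauth = AWS4Auth(AWS_ACCESS_KEY, AWS_SECRET_KEY, REGION, "es")


class SigningProxy(BaseHTTPRequestHandler):
    def _forward(self):
        # Read the incoming body (bulk requests, mappings, etc.).
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length) if length else None
        # Re-issue the same method/path/body against AWS, signed with SigV4.
        resp = requests.request(
            self.command,
            AWS_ENDPOINT + self.path,
            data=body,
            headers={"Content-Type": self.headers.get("Content-Type", "application/json")},
            auth=awsauth,
        )
        self.send_response(resp.status_code)
        self.send_header("Content-Type", resp.headers.get("Content-Type", "application/json"))
        self.end_headers()
        self.wfile.write(resp.content)

    do_GET = do_HEAD = do_POST = do_PUT = do_DELETE = _forward


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 9200), SigningProxy).serve_forever()

With a proxy like this (or an off-the-shelf equivalent) in place, the elasticsearch section of the FSCrawler settings shown in the question can stay unchanged.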

That said, https://cloud.elastic.co is supported, but using a login/password. See documentation for more details
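
For completeness, here is a sketch of how the elasticsearch section of the settings might look when authenticating with a login/password. The host, username, and password values are placeholders, and field names may differ between FSCrawler versions, so check the linked documentation.

  "elasticsearch" : {
    "nodes" : [ {
      "host" : "mycluster.cloud.elastic.co",
      "port" : 9243,
      "scheme" : "HTTPS"
    } ],
    "username" : "elastic",
    "password" : "changeme"
  }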