I am new to Elasticsearch and have been experimenting with the ingest plugin (I have already posted a few questions about it). It was suggested that I use FSCrawler instead. I am running Elasticsearch 5.5.1 and have installed FSCrawler 2.3. I installed Java 8.0.1 and created an environment variable 'JAVA_HOME' pointing to the Java directory. Using Kibana, I created the following:
PUT _ingest/pipeline/docs
{
  "description": "documents",
  "processors": [
    {
      "attachment": {
        "field": "data",
        "indexed_chars": -1
      }
    }
  ]
}
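As a sanity check, a pipeline like this can be exercised with the ingest simulate API before crawling anything; the base64 value below is just a hypothetical stand-in for real file content (it decodes to "Sample text"):

POST _ingest/pipeline/docs/_simulate
{
  "docs": [
    {
      "_source": {
        "data": "U2FtcGxlIHRleHQ="
      }
    }
  ]
}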
PUT myindex
{
  "mappings": {
    "documents": {
      "properties": {
        "attachment.data": {
          "type": "text",
          "analyzer": "standard"
        }
      }
    }
  }
}
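The mapping that was actually created can then be checked with the standard mapping API:

GET myindex/_mapping/documents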
In my FSCrawler _settings file I have set the url to my documents folder, and in the elasticsearch section I have included "index" : "myindex".
I run the crawler with the PowerShell command .\fscrawler mydocs --loop 1
Below is the output of the command.
Here is my FSCrawler _settings.json file:
{
"name" : "docs",
"fs" : {
"url" : "w:\\Elasticsearch\\Docs",
"update_rate" : "15m",
"excludes" : [ "~*" ],
"json_support" : false,
"filename_as_id" : false,
"add_filesize" : true,
"remove_deleted" : true,
"add_as_inner_object" : false,
"store_source" : false,
"index_content" : true,
"attributes_support" : false,
"raw_metadata" : true,
"xml_support" : false,
"index_folders" : true,
"lang_detect" : false,
"continue_on_error" : false,
"pdf_ocr" : true
},
"elasticsearch" : {
"nodes" : [ {
"host" : "127.0.0.1",
"port" : 9200,
"scheme" : "HTTP"
} ],
"index" : "myindex",
"bulk_size" : 100,
"flush_interval" : "5s",
"username" : "elastic",
"password" : "changeme"
},
"rest" : {
"scheme" : "HTTP",
"host" : "127.0.0.1",
"port" : 8080,
"endpoint" : "fscrawler"
}
}
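Assuming the crawl completes, a minimal match_all search is enough to check whether any documents reached the index:

GET myindex/_search
{
  "query": {
    "match_all": {}
  }
}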
Answer 0 (score: 0)
It is better not to post screenshots; copy and paste the logs instead.
Then: