Elasticsearch 5.6.*
I'm looking for a way to have one of my indices (which grows quickly, by roughly 1 million documents per day) automatically manage its own storage constraints.
For example: I define a maximum document count or maximum index size as a variable 'n'. I would write a scheduler that checks whether 'n' has been exceeded. If it has, I want to delete the oldest 'x' documents (based on time).
I have a couple of questions here:
Obviously, I don't want to delete too much or too little. How do I know what 'x' is? Can I simply tell Elasticsearch, "hey, delete the oldest 5GB worth of documents"? My goal is simply to free up a fixed amount of storage. Is that possible?
Secondly, I'd like to know what the best practice is here. Obviously I don't want to reinvent the wheel; if something existing (e.g. Curator, which I only heard about recently) does the job, I'd be happy to use it.
Answer 0: (score: 1)
The best practice in your case is to use time-based indices (daily, weekly, or monthly), whichever makes sense for the volume of data you have and the retention you need. You can also use the Rollover API to decide when a new index needs to be created (based on time, document count, or index size).
Deleting an entire index is much easier than deleting documents matching certain criteria within an index. If you choose the latter, the documents are marked as deleted, but the space is not freed until the underlying segments are merged. Whereas if you delete an entire time-based index, the space is guaranteed to be freed.
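As a sketch of the rollover approach (index and alias names here are hypothetical; note that in 5.x the supported rollover conditions are max_age and max_docs, with a size-based condition arriving in a later release):

```shell
# Hypothetical setup: a write alias "logs_write" points at the current index.
# A scheduler (e.g. cron) would POST this request body periodically; Elasticsearch
# creates a new index and swaps the alias when any condition is met.
rolloverBody='{
  "conditions": {
    "max_age": "1d",
    "max_docs": 1000000
  }
}'
# curl -s -X POST "http://localhost:9200/logs_write/_rollover" \
#      -H 'Content-Type: application/json' -d "$rolloverBody"
echo "$rolloverBody"
```

Combined with time-based (or rollover-managed) indices, the retention job then reduces to deleting whole indices, as described above.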
Answer 1: (score: 1)
I came up with a fairly simple bash-script solution for cleaning up time-based indices in Elasticsearch, which I thought I'd share in case anyone is interested. Curator seems to be the standard answer for doing this, but I really didn't want to install and manage a Python application with all of its dependencies. You can't get much simpler than a bash script executed via cron, and it has no dependencies outside of core Linux.
#!/bin/bash

# Make sure expected arguments were provided
if [ $# -lt 3 ]; then
    echo "Invalid number of arguments!"
    echo "This script is used to clean time based indices from Elasticsearch. The indices must have a"
    echo "trailing date in a format that can be represented by the UNIX date command such as '%Y-%m-%d'."
    echo ""
    echo "Usage: $(basename "$0") host_url index_prefix num_days_to_keep [date_format]"
    echo "The date_format argument is optional and defaults to '%Y-%m-%d'"
    echo "Example: $(basename "$0") http://localhost:9200 cflogs- 7"
    echo "Example: $(basename "$0") http://localhost:9200 elasticsearch_metrics- 31 %Y.%m.%d"
    exit 1
fi

elasticsearchUrl=$1
indexNamePrefix=$2
numDaysDataToKeep=$3
dateFormat=%Y-%m-%d
if [ $# -ge 4 ]; then
    dateFormat=$4
fi

# Get the current date in a 'seconds since epoch' format
curDateInSecondsSinceEpoch=$(date +%s)

# Subtract numDaysDataToKeep from the current epoch value to get the last day to keep
targetDateInSecondsSinceEpoch=$((curDateInSecondsSinceEpoch - numDaysDataToKeep * 86400))

while : ; do
    # Subtract one day from the target date epoch
    targetDateInSecondsSinceEpoch=$((targetDateInSecondsSinceEpoch - 86400))

    # Convert targetDateInSecondsSinceEpoch into the configured date format
    targetDateString=$(date --date="@$targetDateInSecondsSinceEpoch" +"$dateFormat")

    # Format the index name using the prefix and the calculated date string
    indexName="$indexNamePrefix$targetDateString"

    # First check if an index with this date pattern exists.
    # Curl options:
    #   -I             issue a HEAD request and fetch only the headers; there is no body to wait for
    #   -s             silent mode; don't show a progress meter or error messages
    #   -w "%{http_code}"  print only the HTTP status code after a completed transfer
    #   -o /dev/null   discard the response output (does not apply to the -w output)
    httpCode=$(curl -o /dev/null -s -w "%{http_code}" -I "$elasticsearchUrl/$indexName")
    if [ "$httpCode" -ne 200 ]; then
        echo "Index $indexName does not exist. Stopping processing."
        break
    fi

    # Send the command to Elasticsearch to delete the index. Save the HTTP return code in a variable.
    httpCode=$(curl -o /dev/null -s -w "%{http_code}" -X DELETE "$elasticsearchUrl/$indexName")
    if [ "$httpCode" -eq 200 ]; then
        echo "Successfully deleted index $indexName."
    else
        echo "FAILURE! Delete command failed with return code $httpCode. Continuing processing with next day."
        continue
    fi

    # Verify the index no longer exists. Should return 404 when the index isn't found.
    httpCode=$(curl -o /dev/null -s -w "%{http_code}" -I "$elasticsearchUrl/$indexName")
    if [ "$httpCode" -eq 200 ]; then
        echo "FAILURE! Delete command responded successfully, but index still exists. Continuing processing with next day."
        continue
    fi
done
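To schedule the script via cron (the install path and schedule below are hypothetical), and to sanity-check the epoch arithmetic it relies on (GNU date assumed):

```shell
# Hypothetical crontab entry: run nightly at 01:30, keeping 7 days of cflogs- indices.
#   30 1 * * * /usr/local/bin/clean-es-indices.sh http://localhost:9200 cflogs- 7

# Standalone check of the date math the script uses: subtracting N * 86400 seconds
# from "now" and formatting the result yields the dated index suffix to delete.
numDaysDataToKeep=7
curDateInSecondsSinceEpoch=$(date +%s)
targetDateInSecondsSinceEpoch=$((curDateInSecondsSinceEpoch - numDaysDataToKeep * 86400))
targetDateString=$(date --date="@$targetDateInSecondsSinceEpoch" +%Y-%m-%d)
echo "$targetDateString"
```

One caveat of this design: the loop walks backwards one day at a time and stops at the first missing index, so a gap in the index sequence (e.g. a day with no data) ends the cleanup early.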
Answer 2: (score: 0)
I answered the same question at https://discuss.elastic.co/t/elasticsearch-efficiently-cleaning-up-the-indices-to-save-space/137019
If an index is continuously growing, then deleting documents is not best practice. It sounds like you have time-series data. If so, what you want are time-series indices, or better yet, rollover indices.
5GB is also a rather small amount to purge, since a single Elasticsearch shard can healthily grow to 20GB-50GB in size. Are you storage constrained? How many nodes do you have?
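To answer the sizing question for yourself, the _cat API (available in 5.6) reports per-index document counts and on-disk size; the host URL below is a placeholder:

```shell
# Hypothetical host; lists indices with doc counts and store size, largest first.
esUrl="http://localhost:9200"
catUrl="$esUrl/_cat/indices?v&h=index,docs.count,store.size&s=store.size:desc"
# curl -s "$catUrl"
echo "$catUrl"
```

Comparing store.size per daily index against your total disk budget tells you directly how many days of retention you can afford, without guessing at an 'x' document count.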