When I read the documentation, I see the following note:
"For S3 High Density (S3 HD) services created after late 2017, the 200 million documents per partition limit has been removed, but each index is still limited to 1 million documents."
I want to confirm whether S3 HD indexes still have the 1 million document limit, or whether that limit has also been lifted recently?
Answer 0 (score: 0)
I ran into the same problem and solved it by converting the data into batches.
Adjust the code below to your requirements.
// Azure Search accepts at most 1,000 documents per indexing request,
// so split the items into batches of 1,000 (a larger batch such as the
// original 31,500 would be rejected by the service).
const int BatchSize = 1000;
for (int i = 0; i < result.Items.Count; i += BatchSize)
{
    var searchItems = result.Items.Skip(i).Take(BatchSize);
    var actionList = new List<IndexAction<AzureSearchItem>>();
    foreach (var item in searchItems)
    {
        actionList.Add(IndexAction.MergeOrUpload(AzureHelper.FormatSearchItem(item)));
    }
    PostBulkAssortmentDocuments(actionList.AsEnumerable());
}

public virtual void PostBulkAssortmentDocuments(IEnumerable<IndexAction<AzureSearchItem>> actions)
{
    if (!actions.Any())
        return;

    var batch = IndexBatch.New(actions);
    try
    {
        var data = GetIndexClient(IndexName).Documents.Index(batch);

        // Inspect the per-document results: success/failure counts, any error
        // messages, and the distinct keys of the documents that were indexed.
        var passResultCount = data.Results.Count(x => x.Succeeded);
        var failResultCount = data.Results.Count(x => !x.Succeeded);
        var messageResult = data.Results.Where(x => !string.IsNullOrEmpty(x.ErrorMessage));
        var keyResult = data.Results.Where(x => !string.IsNullOrEmpty(x.Key)).Select(x => x.Key).ToList();
        var uniqueKeys = keyResult.Distinct().ToList();
        string json = Newtonsoft.Json.JsonConvert.SerializeObject(data);
    }
    catch (IndexBatchException e)
    {
        // Sometimes when your Search service is under load, indexing will fail for
        // some of the documents in the batch. Depending on your application, you can
        // take compensating actions like delaying and retrying. For this simple demo,
        // we just log the failed document keys and continue.
        Console.WriteLine(
            "Failed to index some of the documents: {0}",
            String.Join(", ", e.IndexingResults.Where(r => !r.Succeeded).Select(r => r.Key)));
        this.WriteToFile("Error - PostBulkAssortmentDocuments -" + e.Message);
    }
}
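If you want to retry the failed documents instead of only logging them, the Microsoft.Azure.Search SDK used above provides IndexBatchException.FindFailedActionsToRetry, which builds a new batch containing only the actions that failed. A minimal sketch of an alternative catch block, assuming the document key field is named Id (a placeholder; use the key field of your own index schema):

catch (IndexBatchException e)
{
    // Keep only the actions whose documents failed to index.
    // The lambda must return the document's key; "Id" is an assumed field name.
    var retryBatch = e.FindFailedActionsToRetry(batch, (AzureSearchItem doc) => doc.Id);

    // Back off briefly so a throttled service can recover, then retry once.
    System.Threading.Thread.Sleep(TimeSpan.FromSeconds(5));
    GetIndexClient(IndexName).Documents.Index(retryBatch);
}

A production version would wrap the retry in a loop with exponential backoff rather than a single fixed delay.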
Answer 1 (score: 0)
The documentation is still accurate: the 1 million document per index cap in the S3 HD service tier remains in effect today.