I'm indexing my query like this:
client.Index(new PercolatedQuery
{
    Id = "std_query",
    Query = new QueryContainer(new MatchQuery
    {
        Field = Infer.Field<LogEntryModel>(entry => entry.Message),
        Query = "just a text"
    })
}, d => d.Index(EsIndex));

client.Refresh(EsIndex);
Now, how do I use ES's percolator feature to match incoming documents against this query? Saying that the NEST documentation is lacking in this area would be a huge understatement. I tried the client.Percolate call, but it is now deprecated and the guidance is to use the Search API instead, without explaining how to use it for percolation...

I'm using ES v5 and the same version of the NEST library.
Answer (score: 9)
There are plans to improve the documentation for 5.x once it's GA; I know the documentation could be clearer in a number of places, and any help in this regard would be most welcome :)

The documentation for the Percolate query is generated from the integration test for it. Pulling all of the parts out here. First, let's define the POCO models
public class LogEntryModel
{
    public string Message { get; set; }
    public DateTimeOffset Timestamp { get; set; }
}

public class PercolatedQuery
{
    public string Id { get; set; }
    public QueryContainer Query { get; set; }
}
We'll map everything with fluent mapping rather than with mapping attributes; fluent mapping is the most powerful approach and can express everything that can be mapped in Elasticsearch.
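To make the contrast concrete, here is a rough sketch of what attribute-based mapping of the log entry could look like, using NEST's mapping attributes; the class name is hypothetical and this is not part of the original walkthrough, which sticks to the fluent API.

// a minimal sketch of attribute mapping (hypothetical class, assuming NEST 5.x attribute names)
[ElasticsearchType(Name = "log_entry")]
public class AttributeMappedLogEntry
{
    // mapped as a text field
    [Text(Name = "message")]
    public string Message { get; set; }

    // mapped as a date field
    [Date(Name = "timestamp")]
    public DateTimeOffset Timestamp { get; set; }
}

Attributes are picked up by .AutoMap(), but the fluent API can express everything, which is why it's used here.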
Now, create the connection settings and the client with which to work with Elasticsearch.
var pool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
var logIndex = "log_entries";

var connectionSettings = new ConnectionSettings(pool)
    // infer mapping for logs
    .InferMappingFor<LogEntryModel>(m => m
        .IndexName(logIndex)
        .TypeName("log_entry")
    )
    // infer mapping for percolated queries
    .InferMappingFor<PercolatedQuery>(m => m
        .IndexName(logIndex)
        .TypeName("percolated_query")
    );

var client = new ElasticClient(connectionSettings);
We can specify the index name and type name to infer for our POCOs; that is, when NEST makes a request with LogEntryModel or PercolatedQuery as the generic type parameter (for example, T in .Search<T>()), it will use the inferred index name and type name if none are specified on the request.
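As a quick aside, a hedged sketch of how that resolution plays out (purely illustrative, not a step in the walkthrough; "some_other_log_index" is a made-up name):

// resolves to the inferred index "log_entries" and type "log_entry"
var inferredSearch = client.Search<LogEntryModel>(s => s
    .Query(q => q.MatchAll())
);

// an index set explicitly on the request takes precedence over the inferred one
var explicitSearch = client.Search<LogEntryModel>(s => s
    .Index("some_other_log_index")
    .Query(q => q.MatchAll())
);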
Now, delete the index so that we can start from scratch
// delete the index if it already exists
if (client.IndexExists(logIndex).Exists)
    client.DeleteIndex(logIndex);
and create the index
client.CreateIndex(logIndex, c => c
    .Settings(s => s
        .NumberOfShards(1)
        .NumberOfReplicas(0)
    )
    .Mappings(m => m
        .Map<LogEntryModel>(mm => mm
            .AutoMap()
        )
        .Map<PercolatedQuery>(mm => mm
            .AutoMap()
            .Properties(p => p
                // map the query field as a percolator type
                .Percolator(pp => pp
                    .Name(n => n.Query)
                )
            )
        )
    )
);
The Query property on PercolatedQuery is mapped as a percolator type; this is new in Elasticsearch 5.0. The mapping request looks like
{
  "settings": {
    "index.number_of_replicas": 0,
    "index.number_of_shards": 1
  },
  "mappings": {
    "log_entry": {
      "properties": {
        "message": {
          "fields": {
            "keyword": {
              "type": "keyword"
            }
          },
          "type": "text"
        },
        "timestamp": {
          "type": "date"
        }
      }
    },
    "percolated_query": {
      "properties": {
        "id": {
          "fields": {
            "keyword": {
              "type": "keyword"
            }
          },
          "type": "text"
        },
        "query": {
          "type": "percolator"
        }
      }
    }
  }
}
Now we're ready to index our query
client.Index(new PercolatedQuery
{
    Id = "std_query",
    Query = new MatchQuery
    {
        Field = Infer.Field<LogEntryModel>(entry => entry.Message),
        Query = "just a text"
    }
}, d => d.Index(logIndex).Refresh(Refresh.WaitFor));
With the query indexed, let's percolate a document
var logEntry = new LogEntryModel
{
    Timestamp = DateTimeOffset.UtcNow,
    Message = "some log message text"
};

// run percolator on the logEntry instance
var searchResponse = client.Search<PercolatedQuery>(s => s
    .Query(q => q
        .Percolate(p => p
            // field that contains the query
            .Field(f => f.Query)
            // details about the document to run the stored query against.
            // NOTE: This does not index the document, only runs percolation
            .DocumentType<LogEntryModel>()
            .Document(logEntry)
        )
    )
);

// outputs 1
Console.WriteLine(searchResponse.Documents.Count());
The percolated query with id "std_query" is returned in searchResponse.Documents. The JSON response looks like
{
  "took" : 117,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 0.2876821,
    "hits" : [
      {
        "_index" : "log_entries",
        "_type" : "percolated_query",
        "_id" : "std_query",
        "_score" : 0.2876821,
        "_source" : {
          "id" : "std_query",
          "query" : {
            "match" : {
              "message" : {
                "query" : "just a text"
              }
            }
          }
        }
      }
    ]
  }
}
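Because the matching stored queries come back as ordinary hits, they deserialize into PercolatedQuery instances on the response; a small sketch of reading out the matched query ids (only the types defined above are used):

// each matching stored query is available as a PercolatedQuery document
foreach (var matchedQuery in searchResponse.Documents)
{
    // prints "std_query" for the match above
    Console.WriteLine(matchedQuery.Id);
}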
That was an example of percolating a document instance. It's also possible to run percolation against a document that has already been indexed
var searchResponse = client.Search<PercolatedQuery>(s => s
    .Query(q => q
        .Percolate(p => p
            // field that contains the query
            .Field(f => f.Query)
            // percolate an already indexed log entry
            .DocumentType<LogEntryModel>()
            .Id("log entry id")
            .Index<LogEntryModel>()
            .Type<LogEntryModel>()
        )
    )
);
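Note that in this form Elasticsearch fetches the document from the index at search time, so a log entry with that id has to exist there first; a hedged sketch of indexing one beforehand ("log entry id" is just the placeholder id reused from the query above):

// index a log entry up front so the percolate query above has a document to fetch;
// Refresh.WaitFor makes it visible to search, as with the stored query earlier
client.Index(new LogEntryModel
{
    Timestamp = DateTimeOffset.UtcNow,
    Message = "just a text from an already indexed log entry"
}, i => i
    .Id("log entry id")
    .Index(logIndex)
    .Refresh(Refresh.WaitFor)
);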