I am using searchScroll to pull more than 100k documents from one index, add one field to all 100K documents, and then insert those documents into another, new index.

I set the page size for the SearchScroll API with searchSourceBuilder.size(100), and I also tried increasing it to searchSourceBuilder.size(1000). In both cases I get the error below after processing 18,100 documents (with searchSourceBuilder.size(100)) and 21,098 documents (with searchSourceBuilder.size(1000)):

search_context_missing_exception","reason":"No search context found for id

The error is thrown at this line: searchResponse = SearchEngineClient.getInstance().searchScroll(scrollRequest);

Please find my full stack trace below:
Exception in thread "main" ElasticsearchStatusException[Elasticsearch exception [type=search_phase_execution_exception, reason=all shards failed]]; nested: ElasticsearchException[Elasticsearch exception [type=search_context_missing_exception, reason=No search context found for id [388]]];
    at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:177)
    at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:573)
    at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:549)
    at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:456)
    at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:429)
    at org.elasticsearch.client.RestHighLevelClient.searchScroll(RestHighLevelClient.java:387)
    at com.es.utility.DocumentIndex.main(DocumentIndex.java:101)
    Suppressed: org.elasticsearch.client.ResponseException: method [GET], host [http://localhost:9200], URI [/_search/scroll], status line [HTTP/1.1 404 Not Found]
{"error":{"root_cause":[{"type":"search_context_missing_exception","reason":"No search context found for id [390]"},{"type":"search_context_missing_exception","reason":"No search context found for id [389]"},{"type":"search_context_missing_exception","reason":"No search context found for id [392]"},{"type":"search_context_missing_exception","reason":"No search context found for id [391]"},{"type":"search_context_missing_exception","reason":"No search context found for id [388]"}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":-1,"index":null,"reason":{"type":"search_context_missing_exception","reason":"No search context found for id [390]"}},{"shard":-1,"index":null,"reason":{"type":"search_context_missing_exception","reason":"No search context found for id [389]"}},{"shard":-1,"index":null,"reason":{"type":"search_context_missing_exception","reason":"No search context found for id [392]"}},{"shard":-1,"index":null,"reason":{"type":"search_context_missing_exception","reason":"No search context found for id [391]"}},{"shard":-1,"index":null,"reason":{"type":"search_context_missing_exception","reason":"No search context found for id [388]"}}],"caused_by":{"type":"search_context_missing_exception","reason":"No search context found for id [388]"}},"status":404}
        at org.elasticsearch.client.RestClient$1.completed(RestClient.java:357)
        at org.elasticsearch.client.RestClient$1.completed(RestClient.java:346)
        at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:119)
        at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:177)
        at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:436)
        at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:326)
        at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
        at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
        at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
        at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
        at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
        at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
        at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:588)
        at java.lang.Thread.run(Unknown Source)
Caused by: ElasticsearchException[Elasticsearch exception [type=search_context_missing_exception, reason=No search context found for id [388]]]
    at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:490)
    at org.elasticsearch.ElasticsearchException.fromXContent(ElasticsearchException.java:406)
    at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:435)
    at org.elasticsearch.ElasticsearchException.failureFromXContent(ElasticsearchException.java:594)
    at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:169)
    ... 6 more
Please find my Java code:
public class DocumentIndex {

    private final static String INDEX = "documents";
    private final static String ATTACHMENT = "document_attachment";
    private final static String TYPE = "doc";
    private static final Logger logger = Logger.getLogger(Thread.currentThread().getStackTrace()[0].getClassName());

    public static void main(String args[]) throws IOException {

        RestHighLevelClient restHighLevelClient = null;
        Document doc = new Document();
        logger.info("Started Indexing the Document.....");

        try {
            restHighLevelClient = new RestHighLevelClient(RestClient.builder(new HttpHost("localhost", 9200, "http"),
                    new HttpHost("localhost", 9201, "http")));
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }

        //Fetching Id, FilePath & FileName from Document Index.
        SearchRequest searchRequest = new SearchRequest(INDEX);
        searchRequest.types(TYPE);
        final Scroll scroll = new Scroll(TimeValue.timeValueMinutes(1L)); //part of Scroll API
        searchRequest.scroll(scroll); //part of Scroll API
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        QueryBuilder qb = QueryBuilders.matchAllQuery();
        searchSourceBuilder.query(qb);
        searchSourceBuilder.size(100);
        searchRequest.source(searchSourceBuilder);
        SearchResponse searchResponse = SearchEngineClient.getInstance().search(searchRequest);
        String scrollId = searchResponse.getScrollId(); //part of Scroll API
        SearchHit[] searchHits = searchResponse.getHits().getHits();
        long totalHits = searchResponse.getHits().totalHits;
        logger.info("Total Hits --->" + totalHits);

        //part of Scroll API -- Starts
        while (searchHits != null && searchHits.length > 0) {

            SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId);
            scrollRequest.scroll(scroll);
            searchResponse = SearchEngineClient.getInstance().searchScroll(scrollRequest);
            scrollId = searchResponse.getScrollId();
            searchHits = searchResponse.getHits().getHits();

            File all_files_path = new File("d:\\All_Files_Path.txt");
            File available_files = new File("d:\\Available_Files.txt");
            File missing_files = new File("d:\\Missing_Files.txt");

            int totalFilePath = 1;
            int totalAvailableFile = 1;
            int missingFilecount = 1;

            Map<String, Object> jsonMap;
            for (SearchHit hit : searchHits) {

                String encodedfile = null;
                File file = null;

                Map<String, Object> sourceAsMap = hit.getSourceAsMap();
                if (sourceAsMap != null) {
                    doc.setId((int) sourceAsMap.get("id"));
                    doc.setApp_language(String.valueOf(sourceAsMap.get("app_language")));
                }

                String filepath = doc.getPath().concat(doc.getFilename());

                logger.info("ID---> " + doc.getId() + "File Path --->" + filepath);

                try (PrintWriter out = new PrintWriter(new FileOutputStream(all_files_path, true))) {
                    out.println("FilePath Count ---" + totalFilePath + ":::::::ID---> " + doc.getId() + "File Path --->" + filepath);
                }

                file = new File(filepath);
                if (file.exists() && !file.isDirectory()) {
                    try {
                        try (PrintWriter out = new PrintWriter(new FileOutputStream(available_files, true))) {
                            out.println("Available File Count --->" + totalAvailableFile + ":::::::ID---> " + doc.getId() + "File Path --->" + filepath);
                            totalAvailableFile++;
                        }
                        FileInputStream fileInputStreamReader = new FileInputStream(file);
                        byte[] bytes = new byte[(int) file.length()];
                        fileInputStreamReader.read(bytes);
                        encodedfile = new String(Base64.getEncoder().encodeToString(bytes));
                        fileInputStreamReader.close();
                    } catch (FileNotFoundException e) {
                        e.printStackTrace();
                    }
                } else {
                    System.out.println("Else block");
                    PrintWriter out = new PrintWriter(new FileOutputStream(missing_files, true));
                    out.println("Available File Count --->" + missingFilecount + ":::::::ID---> " + doc.getId() + "File Path --->" + filepath);
                    out.close();
                    missingFilecount++;
                }

                jsonMap = new HashMap<>();
                jsonMap.put("id", doc.getId());
                jsonMap.put("app_language", doc.getApp_language());
                jsonMap.put("fileContent", encodedfile);

                String id = Long.toString(doc.getId());

                IndexRequest request = new IndexRequest(ATTACHMENT, "doc", id)
                        .source(jsonMap)
                        .setPipeline(ATTACHMENT);

                PrintStream printStream = new PrintStream(new File("d:\\exception.txt"));
                try {
                    IndexResponse response = SearchEngineClient.getInstance2().index(request);
                } catch (ElasticsearchException e) {
                    if (e.status() == RestStatus.CONFLICT) {
                    }
                    e.printStackTrace(printStream);
                }
                totalFilePath++;
            }
        }

        ClearScrollRequest clearScrollRequest = new ClearScrollRequest();
        clearScrollRequest.addScrollId(scrollId);
        ClearScrollResponse clearScrollResponse = restHighLevelClient.clearScroll(clearScrollRequest);
        boolean succeeded = clearScrollResponse.isSucceeded();
        //part of Scroll API -- Ends

        logger.info("Indexing done.....");
    }
}
I am using ES version 6.2.3.
Answer 0 (score: 0)
You are getting this error because the search context has expired before all the results were fetched and processed. To fix it, keep the search context alive for longer; see Keeping the search context alive.

Increase the scroll keep-alive time value:
final Scroll scroll = new Scroll(TimeValue.timeValueMinutes(new_value));
Increase new_value to whatever suits your requirements.
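As a minimal sketch of that change (assuming the same SearchEngineClient helper, the INDEX constant, the restHighLevelClient created in the question, and the imports already used there), the larger keep-alive is passed both to the initial search and to every subsequent scroll request, so each round trip renews the context:

// The keep-alive only has to outlive the processing of ONE batch of hits,
// not the whole 100k export. 5 minutes is an assumed value here; tune it to
// how long one batch of size(100) actually takes (file reads, Base64, indexing).
final Scroll scroll = new Scroll(TimeValue.timeValueMinutes(5L));

SearchRequest searchRequest = new SearchRequest(INDEX);
searchRequest.scroll(scroll);                      // initial keep-alive
searchRequest.source(new SearchSourceBuilder()
        .query(QueryBuilders.matchAllQuery())
        .size(100));

SearchResponse searchResponse = SearchEngineClient.getInstance().search(searchRequest);
String scrollId = searchResponse.getScrollId();
SearchHit[] searchHits = searchResponse.getHits().getHits();

while (searchHits != null && searchHits.length > 0) {
    // ... process the current batch of hits here ...

    SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId);
    scrollRequest.scroll(scroll);                  // renews the keep-alive on every round trip
    searchResponse = SearchEngineClient.getInstance().searchScroll(scrollRequest);
    scrollId = searchResponse.getScrollId();
    searchHits = searchResponse.getHits().getHits();
}

// Release the server-side scroll context once the export is finished.
ClearScrollRequest clearScrollRequest = new ClearScrollRequest();
clearScrollRequest.addScrollId(scrollId);
restHighLevelClient.clearScroll(clearScrollRequest);

The keep-alive set on each SearchScrollRequest restarts the timer, so it only needs to cover the time spent processing one page of results before the next scroll call. If a batch takes longer than the keep-alive (for example while reading and Base64-encoding large files), the server discards the context and the next searchScroll call fails with search_context_missing_exception, exactly as in the stack trace above.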