ElasticsearchItemReader keeps reading the same records

Asked: 2019-07-10 18:46:44

Tags: spring elasticsearch spring-batch spring-data-elasticsearch

I am a real Spring beginner and I have to develop an application with spring-batch. The application must read from an Elasticsearch index and write all the records to a file.

When I run the program there are no errors: the application reads the records and writes them to the file correctly. The problem is that the application never stops; it keeps reading, processing and writing data without end. In the screenshot below you can see the same records being processed several times.

[Screenshot: log output showing the same records being processed repeatedly]

I think there must be something wrong in my code or in my design, so I have attached the most important parts of my code below.

I developed the following ElasticsearchItemReader:

public class ElasticsearchItemReader<T> extends AbstractPaginatedDataItemReader<T> implements InitializingBean {

    private final Logger logger;

    private final ElasticsearchOperations elasticsearchOperations;

    private final SearchQuery query;

    private final Class<? extends T> targetType;

    public ElasticsearchItemReader(ElasticsearchOperations elasticsearchOperations, SearchQuery query, Class<? extends T> targetType) {
        setName(getShortName(getClass()));
        logger = getLogger(getClass());
        this.elasticsearchOperations = elasticsearchOperations;
        this.query = query;
        this.targetType = targetType;
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        state(elasticsearchOperations != null, "An ElasticsearchOperations implementation is required.");
        state(query != null, "A query is required.");
        state(targetType != null, "A target type to convert the input into is required.");
    }

    @Override
    @SuppressWarnings("unchecked")
    protected Iterator<T> doPageRead() {

        logger.debug("executing query {}", query.getQuery());

        return (Iterator<T>) elasticsearchOperations.queryForList(query, targetType).iterator();
    }
}

I also wrote the following ReadWriterConfig:

@Configuration
public class ReadWriterConfig {

    @Bean
    public ElasticsearchItemReader<AnotherElement> elasticsearchItemReader() {
        return new ElasticsearchItemReader<>(elasticsearchOperations(), query(), AnotherElement.class);
    }

    @Bean
    public SearchQuery query() {
        NativeSearchQueryBuilder builder = new NativeSearchQueryBuilder()
                .withQuery(matchAllQuery());

        return builder.build();
    }

    @Bean
    public ElasticsearchOperations elasticsearchOperations() {
        Client client = null;
        try {
            Settings settings = Settings.builder()
                    .build();

            client = new PreBuiltTransportClient(settings)
                    .addTransportAddress(new TransportAddress(InetAddress.getByName("localhost"), 9300));
            return new ElasticsearchTemplate(client);
        } catch (UnknownHostException e) {
            e.printStackTrace();
            return null;
        }
    }
}

Then I wrote the batch configuration, where I wire up the reader, writer and processor:

@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    // tag::readerwriterprocessor[]
    @Bean
    public ElasticsearchItemReader<AnotherElement> reader() {
        return new ReadWriterConfig().elasticsearchItemReader();
    }

    @Bean
    public PersonItemProcessor processor() {
        return new PersonItemProcessor();
    }

    @Bean
    public FlatFileItemWriter itemWriter() {
        return new FlatFileItemWriterBuilder<AnotherElement>()
                .name("itemWriter")
                .resource(new FileSystemResource("target/output.txt"))
                .lineAggregator(new PassThroughLineAggregator<>())
                .build();
    }
    // end::readerwriterprocessor[]

    // tag::jobstep[]
    @Bean
    public Job importUserJob(JobCompletionNotificationListener listener, Step stepA) {
        return jobBuilderFactory.get("importUserJob")
                .flow(stepA)
                .end()
                .build();
    }

    @Bean
    public Step stepA(FlatFileItemWriter<AnotherElement> writer) {
        return stepBuilderFactory.get("stepA")
                .<AnotherElement, AnotherElement> chunk(10)
                .reader(reader())
                .processor(processor())
                .writer(itemWriter())
                .build();
    }
    // end::jobstep[]
}

Here are some of the sites I used as references while writing this code:

https://github.com/spring-projects/spring-batch-extensions/blob/master/spring-batch-elasticsearch/README.md

https://spring.io/guides/gs/batch-processing/

2 Answers:

Answer 0 (score: 0)

You need to make sure your item reader returns null at some point, to signal that there is no more data to process and that the job should end.
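That contract — the framework keeps calling read() until it gets null — can be illustrated with a plain-Java sketch. SimpleReader below is a simplified stand-in for Spring Batch's ItemReader, not the real interface:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ReaderContract {

    // Simplified stand-in for Spring Batch's ItemReader: read() returns one
    // item per call and null once the data set is exhausted.
    interface SimpleReader<T> {
        T read();
    }

    static SimpleReader<String> listReader(List<String> items) {
        Iterator<String> it = items.iterator();
        return () -> it.hasNext() ? it.next() : null; // null signals "no more data"
    }

    // Mimics the framework's read loop: keep reading until null comes back.
    static List<String> drain(SimpleReader<String> reader) {
        List<String> result = new ArrayList<>();
        String item;
        while ((item = reader.read()) != null) {
            result.add(item);
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> read = drain(listReader(Arrays.asList("a", "b", "c")));
        // The loop above terminated only because read() eventually returned null.
        System.out.println(read);
    }
}
```

A reader that never returns null, like the one in the question, keeps this loop running forever.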

As requested in the comments, here is an example of how to import the reader:

@Configuration
@org.springframework.context.annotation.Import(ReadWriterConfig.class)
@EnableBatchProcessing
public class BatchConfiguration {

   // other bean definitions

   @Bean
   public Step stepA(ElasticsearchItemReader<AnotherElement> reader, FlatFileItemWriter<AnotherElement> writer) {
      return stepBuilderFactory.get("stepA")
        .<AnotherElement, AnotherElement> chunk(10)
        .reader(reader)
        .processor(processor())
        .writer(writer)
        .build();
   }
}

Hope this helps.

Answer 1 (score: 0)

Your reader is supposed to return an Iterator for each call to doPageRead(), so that it iterates over one page of the data set. Since you do not split the result of the Elasticsearch query into pages but query the whole collection in one go, you return an iterator over the entire result set on the first call to doPageRead(). On the next call you then return an iterator over exactly the same result set again.

So you have to keep track of whether you have already returned the iterator, for example:

public class ElasticsearchItemReader<T> extends AbstractPaginatedDataItemReader<T> implements InitializingBean {

    // leaving out irrelevant parts

    boolean doPageReadCalled = false;

    @Override
    @SuppressWarnings("unchecked")
    protected Iterator<T> doPageRead() {

        if(doPageReadCalled) {
            return null;
        }

        doPageReadCalled = true;

        return (Iterator<T>)elasticsearchOperations.queryForList(query, targetType).iterator();
    }
}

On the first call you set the flag to true and return the iterator; on the next call you see that you have already returned the data and return null.
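The effect of that flag can be checked with a small plain-Java simulation. OnePageReader below is a stripped-down stand-in for the fixed reader, with no Spring or Elasticsearch dependencies:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class OnePageReaderCheck {

    // Mirrors the one-shot doPageRead above: the first call returns the full
    // result set, every later call returns null.
    static class OnePageReader {
        private final List<String> results;
        private boolean doPageReadCalled = false;

        OnePageReader(List<String> results) {
            this.results = results;
        }

        Iterator<String> doPageRead() {
            if (doPageReadCalled) {
                return null; // "no more pages", so the step can finish
            }
            doPageReadCalled = true;
            return results.iterator();
        }
    }

    // Counts how many non-null pages the reader hands out before stopping.
    static int countPages(OnePageReader reader) {
        int pages = 0;
        while (reader.doPageRead() != null) {
            pages++;
        }
        return pages;
    }

    public static void main(String[] args) {
        OnePageReader reader = new OnePageReader(Arrays.asList("a", "b"));
        // Exactly one page comes back, then null, so the loop terminates.
        System.out.println(countPages(reader));
    }
}
```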

This is a very basic solution. Depending on how much data you get from Elasticsearch, it might be better to query with the scroll API and return one page per call until everything has been processed.
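That scroll-style approach can be sketched in plain Java as well. PagedSource below is a hypothetical stand-in for a scrolling Elasticsearch cursor, not spring-data-elasticsearch's actual API; the point is only the shape of a doPageRead() that returns a fresh page each call and null when exhausted:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ScrollingReaderSketch {

    // Hypothetical stand-in for a scrolling data source: each call returns
    // the next page, and an empty page once everything has been consumed.
    static class PagedSource {
        private final List<Integer> data;
        private final int pageSize;
        private int cursor = 0;

        PagedSource(List<Integer> data, int pageSize) {
            this.data = data;
            this.pageSize = pageSize;
        }

        List<Integer> nextPage() {
            int end = Math.min(cursor + pageSize, data.size());
            List<Integer> page = new ArrayList<>(data.subList(cursor, end));
            cursor = end;
            return page;
        }
    }

    // doPageRead-style method: an iterator over one page, or null when the
    // source is exhausted, which lets the step finish.
    static Iterator<Integer> doPageRead(PagedSource source) {
        List<Integer> page = source.nextPage();
        return page.isEmpty() ? null : page.iterator();
    }

    static List<Integer> readAll(PagedSource source) {
        List<Integer> all = new ArrayList<>();
        Iterator<Integer> it;
        while ((it = doPageRead(source)) != null) {
            it.forEachRemaining(all::add);
        }
        return all;
    }

    public static void main(String[] args) {
        PagedSource source = new PagedSource(Arrays.asList(1, 2, 3, 4, 5), 2);
        // Pages of size 2 are read until an empty page ends the loop.
        System.out.println(readAll(source));
    }
}
```

Unlike the one-shot flag, this keeps memory bounded by the page size, which is what scrolling buys you on large indices.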