I set up a simple read job in Spring Batch using Java configuration, and I am trying to write a simple listener. The listener should display the time (in seconds) it took to read a given number of records.
The bean looks as follows:
@Bean
public SimpleItemReaderListener listener(){
    SimpleItemReaderListener listener = new SimpleItemReaderListener<>();
    listener.setLogInterval(50000);
    return listener;
}
Depending on the configured log interval, a message is displayed that looks like this:
14:42:11,445 INFO main SimpleItemReaderListener:45 - Read records [0] to [50.000] in average 1,30 seconds
14:42:14,453 INFO main SimpleItemReaderListener:45 - Read records [50.000] to [100.000] in average 2,47 seconds
14:42:15,489 INFO main SimpleItemReaderListener:45 - Read records [100.000] to [150.000] in average 1,03 seconds
14:42:16,448 INFO main SimpleItemReaderListener:45 - Read records [150.000] to [200.000] in average 0,44 seconds
Exactly what I want, perfect. However, when I change the chunk size in my batchConfiguration from 100.000 to, say, 1.000, the logging changes and I have no idea what causes the change...
14:51:24,893 INFO main SimpleItemReaderListener:45 - Read records [0] to [50.000] in average 0,90 seconds
14:51:50,657 INFO main SimpleItemReaderListener:45 - Read records [50.000] to [100.000] in average 0,57 seconds
14:52:16,392 INFO main SimpleItemReaderListener:45 - Read records [100.000] to [150.000] in average 0,59 seconds
14:52:42,125 INFO main SimpleItemReaderListener:45 - Read records [150.000] to [200.000] in average 0,61 seconds
Since I was under the impression that the beforeRead and afterRead methods of an ItemReadListener are executed for each individual item, I expected the reported time per 50.000 records to line up more closely with the timestamps shown by the slf4j logging (e.g. about 26 seconds per 50.000 records).
Which part of my listener causes this unwanted behavior when I change the chunk size?
My implementation of ItemReadListener is as follows:
public class SimpleItemReaderListener<Item> implements ItemReadListener<Item>{

    private static final Logger LOG = LoggerFactory.getLogger(SimpleItemReaderListener.class);
    private static final double NANO_TO_SECOND_DIVIDER_NUMBER = 1_000_000_000.0;
    private static final String PATTERN = ",###";

    private int startCount = 0;
    private int logInterval = 50000;
    private int currentCount;
    private int totalCount;
    private long timeElapsed;
    private long startTime;
    private DecimalFormat decimalFormat = new DecimalFormat(PATTERN);

    @Override
    public void beforeRead() {
        startTime = System.nanoTime();
    }

    @Override
    public void afterRead(Item item) {
        updateTimeElapsed();
        if (currentCount == logInterval) {
            displayMessage();
            updateStartCount();
            resetCount();
        } else {
            increaseCount();
        }
    }

    private void updateTimeElapsed() {
        timeElapsed += System.nanoTime() - startTime;
    }

    private void displayMessage() {
        LOG.info(String.format("Read records [%s] to [%s] in average %.2f seconds",
                decimalFormat.format(startCount),
                decimalFormat.format(totalCount),
                timeElapsed / NANO_TO_SECOND_DIVIDER_NUMBER));
    }

    private void updateStartCount() {
        startCount += currentCount;
    }

    private void resetCount() {
        currentCount = 0;
        timeElapsed = 0;
    }

    private void increaseCount() {
        currentCount++;
        totalCount++;
    }

    @Override
    public void onReadError(Exception arg0) {
        // NO-OP
    }

    public void setLogInterval(int logInterval){
        this.logInterval = logInterval;
    }
}
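As an aside, the counting in afterRead() can be checked in isolation. Below is a minimal, hypothetical sketch that copies only the counter logic of the listener above (the class name CountingSketch and the displays field are my own; the timing code is stripped out). It shows that the read which triggers the log message is itself neither counted nor timed, so each logged block really spans logInterval + 1 reads:

```java
// Hypothetical, simplified copy of the listener's counter logic (timing removed)
// to show how afterRead() groups reads into logged blocks.
public class CountingSketch {

    static int logInterval = 10; // small interval for demonstration
    static int currentCount = 0;
    static int totalCount = 0;
    static int displays = 0;     // stands in for displayMessage()

    static void afterRead() {
        if (currentCount == logInterval) {
            displays++;       // displayMessage()
            currentCount = 0; // resetCount(); note: the triggering read is not counted
        } else {
            currentCount++;
            totalCount++;
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 110; i++) {
            afterRead();
        }
        // 110 reads fall into 10 blocks of 11 reads each,
        // and one read per block goes uncounted
        System.out.println(displays + " displays, totalCount=" + totalCount);
        // prints: 10 displays, totalCount=100
    }
}
```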
The complete batch configuration class:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Bean
    public Job importUserJob() {
        return jobBuilderFactory.get("importUserJob")
                .flow(validateInput())
                .end()
                .build();
    }

    @Bean
    public Step validateInput() {
        return stepBuilderFactory.get("validateInput")
                .chunk(1000)
                .reader(reader())
                .listener(listener())
                .writer(writer())
                .build();
    }

    @Bean
    public HeaderTokenizer tokenizeHeader(){
        HeaderTokenizer tokenizer = new HeaderTokenizer();
        //optional setting, custom delimiter is set to ','
        //tokenizer.setDelimiter(",");
        return tokenizer;
    }

    @Bean
    public SimpleItemReaderListener listener(){
        SimpleItemReaderListener listener = new SimpleItemReaderListener<>();
        //optional setting, custom logging is set to 1000, increase for less verbose logging
        listener.setLogInterval(50000);
        return listener;
    }

    @Bean
    public FlatFileItemReader reader() {
        FlatFileItemReader reader = new FlatFileItemReader();
        reader.setLinesToSkip(1);
        reader.setSkippedLinesCallback(tokenizeHeader());
        reader.setResource(new ClassPathResource("majestic_million.csv"));
        reader.setLineMapper(new DefaultLineMapper() {{
            setLineTokenizer(tokenizeHeader());
            setFieldSetMapper(new PassThroughFieldSetMapper());
        }});
        return reader;
    }

    @Bean
    public DummyItemWriter writer(){
        DummyItemWriter writer = new DummyItemWriter();
        return writer;
    }
}
Alternatively, use the Spring Boot sample from http://projects.spring.io/spring-batch/ and add the SimpleItemReaderListener bean.
Answer 0 (score: 1)
When the chunk size is smaller, your application spends more time outside the reader. Your timing code only measures the time spent inside the reader, but the logging framework prints timestamps, which reflect the total elapsed time.
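This effect can be sketched with a small, self-contained simulation (the class TimingSketch, its sleep-based chunk cost, and its numbers are illustrative assumptions, not taken from the original post). Only the time between beforeRead() and afterRead() is accumulated, while the per-chunk work between reads adds to the wall clock; shrinking the chunk size multiplies that per-chunk work but leaves the accumulated reader time roughly unchanged:

```java
// Illustrative simulation (not Spring Batch itself): accumulate only the
// per-item "reader" time, the way the listener does, while each chunk
// boundary adds extra work that the listener's measurement never sees.
public class TimingSketch {

    // Returns { accumulated reader nanos, wall-clock nanos } for one run.
    static long[] simulate(int chunkSize, int totalItems) throws InterruptedException {
        long readerNanos = 0;
        long wallStart = System.nanoTime();
        for (int i = 0; i < totalItems; i++) {
            long start = System.nanoTime();           // beforeRead()
            // the read itself is cheap
            readerNanos += System.nanoTime() - start; // afterRead()
            if ((i + 1) % chunkSize == 0) {
                Thread.sleep(1); // per-chunk processing/writing, outside the reader
            }
        }
        return new long[] { readerNanos, System.nanoTime() - wallStart };
    }

    public static void main(String[] args) throws InterruptedException {
        long[] bigChunks = simulate(100, 1000);  // 10 chunk boundaries
        long[] smallChunks = simulate(10, 1000); // 100 chunk boundaries
        System.out.printf("chunk=100: reader %.4f s, wall %.4f s%n",
                bigChunks[0] / 1e9, bigChunks[1] / 1e9);
        System.out.printf("chunk=10:  reader %.4f s, wall %.4f s%n",
                smallChunks[0] / 1e9, smallChunks[1] / 1e9);
    }
}
```

Running this shows roughly the same reader time for both runs, but a noticeably larger wall-clock time for the smaller chunk size, which matches the pattern in the two logs above.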