Running a Spring Batch job indefinitely

Time: 2020-02-14 07:55:43

Tags: spring spring-batch

I am trying to run a Spring Batch job indefinitely. The main motivation is to keep Spring Batch from sitting idle.

I am using the following code to run the job indefinitely:

@Autowired
private JobLauncher jobLauncher;

@Autowired
private Job job;

private JobExecution execution = null;

@Scheduled(cron = "0 */2 * * * ?")
public void perform() throws JobExecutionAlreadyRunningException, JobRestartException,
        JobInstanceAlreadyCompleteException, JobParametersInvalidException {
    System.out.println("=== STATUS STARTED ====");

    // Do nothing if the previous execution is still running
    if (execution != null && execution.isRunning()) {
        System.out.println("Job is running. Please wait.");
        return;
    }

    // Unique parameters so every launch creates a new job instance
    JobParameters jobParameters = new JobParametersBuilder()
            .addString("JobId", String.valueOf(System.currentTimeMillis()))
            .addDate("date", new Date())
            .addLong("time", System.currentTimeMillis())
            .toJobParameters();

    execution = jobLauncher.run(job, jobParameters);

    // If the job has already finished, launch it again immediately
    if (!execution.getStatus().isRunning()) {
        perform();
    }

    System.out.println("STATUS :: " + execution.getStatus());
}

First, we check whether the job is already running. If it is not running, the same method is called again, so the job now runs indefinitely.

My question is: is this approach good or bad? Is there a better solution?

I have another question: if no data is available, I want to break out of this infinite loop. How can I break it?
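
One possible way to do this, sketched only as an illustration: guard the launch with a data-availability check, so the scheduler simply stops re-launching when there is nothing left to process. hasNewData() below is a hypothetical helper that is not in the original code; it would have to be implemented against the real data source (this sketch checks the input.csv file via java.nio.file and java.util.stream, but a row count on the customer table would work just as well).

// Sketch only: stop re-launching when there is nothing to process.
// hasNewData() is a hypothetical helper, not part of the original code.
@Scheduled(cron = "0 */2 * * * ?")
public void perform() throws Exception {
    if (execution != null && execution.isRunning()) {
        return; // previous execution still in progress
    }
    if (!hasNewData()) {
        return; // no data: skip this launch, which effectively breaks the loop
    }
    JobParameters jobParameters = new JobParametersBuilder()
            .addLong("time", System.currentTimeMillis())
            .toJobParameters();
    execution = jobLauncher.run(job, jobParameters);
}

// Hypothetical implementation: "data available" means the input file exists
// and contains at least one row besides the header line.
private boolean hasNewData() throws IOException {
    Path input = Paths.get("path\\input.csv");
    if (!Files.exists(input)) {
        return false;
    }
    try (Stream<String> lines = Files.lines(input)) {
        return lines.skip(1).findAny().isPresent();
    }
}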

For reference, here is the batch configuration code:

@Configuration
public class JobConfiguration {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    private DataSource dataSource;

    private Resource outputResource = new FileSystemResource("path\\output.csv");
    private Resource inputResource = new FileSystemResource("path\\input.csv");

    @Bean
    public ColumnRangePartitioner partitioner() {
        ColumnRangePartitioner columnRangePartitioner = new ColumnRangePartitioner();
        columnRangePartitioner.setColumn("id");
        columnRangePartitioner.setDataSource(dataSource);
        columnRangePartitioner.setTable("customer");
        return columnRangePartitioner;
    }

    @Bean
    @StepScope
    public FlatFileItemReader<Customer> pagingItemReader(
            @Value("#{stepExecutionContext['minValue']}") Long minValue,
            @Value("#{stepExecutionContext['maxValue']}") Long maxValue) {
        System.out.println("reading " + minValue + " to " + maxValue);
        // Create reader instance
        FlatFileItemReader<Customer> reader = new FlatFileItemReader<>();

        // Set input file location
        reader.setResource(inputResource);

        // Set number of lines to skip. Use it if the file has header rows.
        reader.setLinesToSkip(1);

        // Configure how each line will be parsed and mapped to different values
        reader.setLineMapper(new DefaultLineMapper() {
            {
                // 3 columns in each row
                setLineTokenizer(new DelimitedLineTokenizer() {
                    {
                        setNames(new String[] { "id", "firstName", "lastName" });
                    }
                });
                // Map each parsed line into the Customer class
                setFieldSetMapper(new BeanWrapperFieldSetMapper<Customer>() {
                    {
                        setTargetType(Customer.class);
                    }
                });
            }
        });

        return reader;
    }

    @Bean
    @StepScope
    public FlatFileItemWriter<Customer> customerItemWriter() {
        // Create writer instance
        FlatFileItemWriter<Customer> writer = new FlatFileItemWriter<>();

        // Set output file location
        writer.setResource(outputResource);

        // All job repetitions should "append" to same output file
        writer.setAppendAllowed(true);

        // Name field values sequence based on object properties
        writer.setLineAggregator(new DelimitedLineAggregator<Customer>() {
            {
                setDelimiter(",");
                setFieldExtractor(new BeanWrapperFieldExtractor<Customer>() {
                    {
                        setNames(new String[] { "id", "firstName", "lastName" });
                    }
                });
            }
        });
        return writer;
    }

    // Master
    @Bean
    public Step step1() {
        return stepBuilderFactory.get("step1").partitioner(slaveStep().getName(), partitioner()).step(slaveStep()).gridSize(12).taskExecutor(new SimpleAsyncTaskExecutor()).build();
    }

    // slave step
    @Bean
    public Step slaveStep() {
        return stepBuilderFactory.get("slaveStep").<Customer, Customer>chunk(1000).reader(pagingItemReader(null, null)).writer(customerItemWriter()).build();
    }

    @Bean
    public Job job() {
        return jobBuilderFactory.get("job").start(step1()).build();
    }
}
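
As a side note, for the @Scheduled launcher above to fire and for JobBuilderFactory/StepBuilderFactory to be injectable, scheduling and batch processing have to be enabled somewhere in the application. A minimal sketch, assuming a Spring Boot application (the class name is made up):

@SpringBootApplication
@EnableScheduling      // required for @Scheduled(cron = "...") methods to fire
@EnableBatchProcessing // provides the JobBuilderFactory / StepBuilderFactory used above
public class BatchApplication {

    public static void main(String[] args) {
        SpringApplication.run(BatchApplication.class, args);
    }
}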

Another approach I tried for running the steps continuously:

<job id="notificationBatchJobProcess"
     xmlns="http://www.springframework.org/schema/batch"
     job-repository="jobRepository">

    <step id="startLogStep" next="execute">
        <tasklet ref="ref1" />
    </step>

    <step id="execute">
        <tasklet ref="ref2" />
        <next on="COMPLETED" to="endLogStep" />
    </step>

    <step id="endLogStep">
        <tasklet ref="ref3" />
        <next on="COMPLETED" to="startLogStep" />
    </step>
</job>

What I am trying to achieve with the configuration above: once the endLogStep tasklet completes, startLogStep is called again, and the process keeps running indefinitely until an exception occurs.
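
For comparison, a roughly equivalent loop in the Java-config style used earlier could look like the sketch below. It assumes startLogStep, executeStep and endLogStep are Step beans wrapping the ref1/ref2/ref3 tasklets (those bean methods are placeholders, not from the original code). Note that a step that has already completed may also need allowStartIfComplete(true) before Spring Batch will run it again within the same execution.

// Sketch only: Java-config flow with the same loop as the XML above.
@Bean
public Job notificationBatchJob(Step startLogStep, Step executeStep, Step endLogStep) {
    return jobBuilderFactory.get("notificationBatchJobProcess")
            .start(startLogStep)
            .on("COMPLETED").to(executeStep)
            .from(executeStep).on("COMPLETED").to(endLogStep)
            .from(endLogStep).on("COMPLETED").to(startLogStep) // loop back to the first step
            .end()
            .build();
}

// Example of one of the placeholder steps; ref1 is assumed to be the
// existing tasklet bean, injected here by name.
@Bean
public Step startLogStep(@Qualifier("ref1") Tasklet ref1) {
    return stepBuilderFactory.get("startLogStep")
            .tasklet(ref1)
            .allowStartIfComplete(true) // allow the step to run again when the flow loops back
            .build();
}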

Is this the right way to run these jobs?

0 Answers