Spring Batch: assembling a job rather than configuring it (extensible job configuration)

Date: 2016-02-24 08:56:42

Tags: spring file annotations spring-batch

Background

I am designing a file reading layer that can read delimited files and load them into a List. I have decided to use Spring Batch because it provides a lot of scalability options which I can leverage for different sets of files depending on their size.

Requirements

  1. I want to design a generic Job API which can be used to read any delimited file.
  2. There should be a single Job structure used for parsing every delimited file. For example, if the system needs to read 5 files, there will be 5 jobs (one per file). The only way the 5 jobs will differ from each other is that they will use a different FieldSetMapper, column names, directory path, and additional scaling parameters such as commit-interval and throttle-limit.
  3. The user of this API should not need to configure a Spring Batch job, step, chunking, partitioning, etc. on his own when a new file type is introduced into the system.
  4. All that the user needs to do is to provide the FieldSetMapper to be used by the job along with the commit-interval, the throttle-limit, and the directory where each type of file is placed.
  5. There will be one predefined directory per file type. Each directory can contain multiple files of the same type and format. A MultiResourcePartitioner will be used to look inside a directory. The number of partitions = number of files in the directory.
  6. My requirement is to build a Spring Batch infrastructure that gives me a unique job which I can launch once I have the bits and pieces that will make up the job.

    My solution:

    I created an abstract configuration class which will be extended by concrete configuration classes (there will be 1 concrete class per file to be read).

        @Configuration
        @EnableBatchProcessing
        public abstract class AbstractFileLoader<T> {
    
        private static final String FILE_PATTERN = "*.dat";
    
        @Autowired
        JobBuilderFactory jobs;
    
        @Autowired
        ResourcePatternResolver resourcePatternResolver;
    
        public final Job createJob(Step s1, JobExecutionListener listener) {
            return jobs.get(this.getClass().getSimpleName())
                    .incrementer(new RunIdIncrementer()).listener(listener)
                    .start(s1).build();
        }
    
        public abstract Job loaderJob(Step s1, JobExecutionListener listener);
    
        public abstract FieldSetMapper<T> getFieldSetMapper();
    
        public abstract String getFilesPath();
    
        public abstract String[] getColumnNames();
    
        public abstract int getChunkSize();
    
        public abstract int getThrottleLimit();
    
        @Bean
        @StepScope
        @Value("#{stepExecutionContext['fileName']}")
        public FlatFileItemReader<T> reader(String file) {
            FlatFileItemReader<T> reader = new FlatFileItemReader<T>();
            String path = file.substring(file.indexOf(":") + 1, file.length());
            FileSystemResource resource = new FileSystemResource(path);
            reader.setResource(resource);
            DefaultLineMapper<T> lineMapper = new DefaultLineMapper<T>();
            lineMapper.setFieldSetMapper(getFieldSetMapper());
            DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer(",");
            tokenizer.setNames(getColumnNames());
            lineMapper.setLineTokenizer(tokenizer);
            reader.setLineMapper(lineMapper);
            reader.setLinesToSkip(1);
            return reader;
        }
    
        @Bean
        public ItemProcessor<T, T> processor() {
            // TODO add transformations here
            return null;
        }
    
        @Bean
        @JobScope
        public ListItemWriter<T> writer() {
            ListItemWriter<T> writer = new ListItemWriter<T>();
            return writer;
        }
    
        @Bean
        @JobScope
        public Step readStep(StepBuilderFactory stepBuilderFactory,
                ItemReader<T> reader, ItemWriter<T> writer,
                ItemProcessor<T, T> processor, TaskExecutor taskExecutor) {
    
            final Step readerStep = stepBuilderFactory
                    .get(this.getClass().getSimpleName() + " ReadStep:slave")
                    .<T, T> chunk(getChunkSize()).reader(reader)
                    .processor(processor).writer(writer).taskExecutor(taskExecutor)
                    .throttleLimit(getThrottleLimit()).build();
    
            final Step partitionedStep = stepBuilderFactory
                    .get(this.getClass().getSimpleName() + " ReadStep:master")
                    .partitioner(readerStep)
                    .partitioner(
                            this.getClass().getSimpleName() + " ReadStep:slave",
                            partitioner()).taskExecutor(taskExecutor).build();
    
            return partitionedStep;
    
        }
    
        /*
         * @Bean public TaskExecutor taskExecutor() { return new
         * SimpleAsyncTaskExecutor(); }
         */
    
        @Bean
        @JobScope
        public Partitioner partitioner() {
            MultiResourcePartitioner partitioner = new MultiResourcePartitioner();
            Resource[] resources;
            try {
                resources = resourcePatternResolver.getResources("file:"
                        + getFilesPath() + FILE_PATTERN);
            } catch (IOException e) {
                throw new RuntimeException(
                        "I/O problems when resolving the input file pattern.", e);
            }
            partitioner.setResources(resources);
            return partitioner;
        }
    
        @Bean
        @JobScope
        public JobExecutionListener listener(ListItemWriter<T> writer) {
            return new JobCompletionNotificationListener<T>(writer);
        }
    
        /*
         * Use this if you want the writer to have job scope (JIRA BATCH-2269). Also
         * change the return type of writer to ListItemWriter for this to work.
         */
        @Bean
        public TaskExecutor taskExecutor() {
            return new SimpleAsyncTaskExecutor() {
                @Override
                protected void doExecute(final Runnable task) {
                    // gets the jobExecution of the configuration thread
                    final JobExecution jobExecution = JobSynchronizationManager
                            .getContext().getJobExecution();
                    super.doExecute(new Runnable() {
                        public void run() {
                            JobSynchronizationManager.register(jobExecution);
    
                            try {
                                task.run();
                            } finally {
                                JobSynchronizationManager.close();
                            }
                        }
                    });
                }
            };
        }
    
    }
    

    Let's say, for the sake of discussion, that I have to read invoice data. I can therefore extend the above class to create an InvoiceLoader:

    @Configuration
    public class InvoiceLoader extends AbstractFileLoader<Invoice>{
    
        private class InvoiceFieldSetMapper implements FieldSetMapper<Invoice> {
    
            public Invoice mapFieldSet(FieldSet f) {
                Invoice invoice = new Invoice();
            invoice.setNo(f.readString("INVOICE_NO"));
                return invoice;
            }
        }
    
        @Override
        public FieldSetMapper<Invoice> getFieldSetMapper() {
            return new InvoiceFieldSetMapper();
        }
    
        @Override
        public String getFilesPath() {
            return "I:/CK/invoices/partitions/";
        }
    
        @Override
        public String[] getColumnNames() {
            return new String[] { "INVOICE_NO", "DATE"};
        }
    
    
        @Override
        @Bean(name="invoiceJob")
        public Job loaderJob(Step s1,
                JobExecutionListener listener) {
            return createJob(s1, listener);
        }
    
        @Override
        public int getChunkSize() {
            return 25254;
        }
    
        @Override
        public int getThrottleLimit() {
            return 8;
        }
    
    }
    

    Let's say I also have a class called InventoryLoader which extends AbstractFileLoader.
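    A sketch of what that InventoryLoader might look like, mirroring InvoiceLoader above (the Inventory entity, its column names, directory, and tuning values here are illustrative assumptions, not part of the original post):

```java
@Configuration
public class InventoryLoader extends AbstractFileLoader<Inventory> {

    private class InventoryFieldSetMapper implements FieldSetMapper<Inventory> {

        // Maps one tokenized line to an Inventory instance (field names assumed).
        public Inventory mapFieldSet(FieldSet f) {
            Inventory inventory = new Inventory();
            inventory.setSku(f.readString("SKU"));
            return inventory;
        }
    }

    @Override
    public FieldSetMapper<Inventory> getFieldSetMapper() {
        return new InventoryFieldSetMapper();
    }

    @Override
    public String getFilesPath() {
        return "I:/CK/inventory/partitions/";
    }

    @Override
    public String[] getColumnNames() {
        return new String[] { "SKU", "QUANTITY" };
    }

    @Override
    @Bean(name = "inventoryJob")
    public Job loaderJob(Step s1, JobExecutionListener listener) {
        return createJob(s1, listener);
    }

    @Override
    public int getChunkSize() {
        return 1000;
    }

    @Override
    public int getThrottleLimit() {
        return 4;
    }
}
```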

    On application startup, I can load both these annotation configurations as follows:

    AbstractApplicationContext context1 = new   AnnotationConfigApplicationContext(InvoiceLoader.class, InventoryLoader.class);
    

    Elsewhere in my application, two different threads can launch the jobs as follows:

    Thread 1:

        JobLauncher jobLauncher1 = context1.getBean(JobLauncher.class);
        Job job1 = context1.getBean("invoiceJob", Job.class);
        JobExecution jobExecution = jobLauncher1.run(job1, jobParams1);
    

    Thread 2:

        JobLauncher jobLauncher1 = context1.getBean(JobLauncher.class);
        Job job1 = context1.getBean("inventoryJob", Job.class);
        JobExecution jobExecution = jobLauncher1.run(job1, jobParams1);
    

    The advantage of this approach is that every time there is a new file to be read, all the developer/user has to do is subclass AbstractFileLoader and implement the required abstract methods, without having to get into the details of how to assemble the job.
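    The design above is essentially the template-method pattern: the abstract class owns the fixed workflow, and subclasses supply only the varying configuration. A minimal plain-Java sketch of the same idea, with no Spring involved (all names here are hypothetical, for illustration only):

```java
import java.util.ArrayList;
import java.util.List;

// The abstract class owns the parsing workflow; subclasses only supply
// the per-file-type pieces (analogous to getColumnNames()/getFieldSetMapper()).
abstract class AbstractDelimitedLoader<T> {

    protected abstract String[] columnNames();

    protected abstract T map(String[] tokens);

    // Fixed workflow: skip the header (like setLinesToSkip(1)), split each
    // line on commas, and delegate mapping to the subclass.
    public final List<T> load(List<String> lines) {
        List<T> out = new ArrayList<>();
        for (String line : lines.subList(1, lines.size())) {
            out.add(map(line.split(",")));
        }
        return out;
    }
}

// A concrete loader that keeps only the first column of each record.
class InvoiceSketch extends AbstractDelimitedLoader<String> {
    @Override
    protected String[] columnNames() {
        return new String[] { "INVOICE_NO", "DATE" };
    }

    @Override
    protected String map(String[] tokens) {
        return tokens[0];
    }
}

public class Main {
    public static void main(String[] args) {
        List<String> lines = List.of("INVOICE_NO,DATE", "A-1,2016-02-24", "A-2,2016-02-25");
        List<String> nos = new InvoiceSketch().load(lines);
        System.out.println(nos); // prints [A-1, A-2]
    }
}
```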

    Questions:

    1. I am new to Spring Batch, so I might have overlooked some of the not-so-obvious problems with this approach, such as shared internal objects in Spring Batch that might cause the two jobs running together to fail, or obvious problems such as the scoping of the beans.
    2. Is there a better way to achieve my objective?
    3. The fileName attribute of @Value("#{stepExecutionContext['fileName']}") is always assigned the value I:/CK/invoices/partitions/, which is the value returned by the getFilesPath method of InvoiceLoader, even though the getFilesPath method of InventoryLoader returns a different value.

1 Answer:

Answer 0 (score: 1)

One option is to pass them as job parameters. For instance:

@Bean
Job job() {
    jobs.get("myJob").start(step1(null)).build()
}

@Bean
@JobScope
Step step1(@Value('#{jobParameters["commitInterval"]}') commitInterval) {
    steps.get('step1')
            .chunk((int) commitInterval)
            .reader(new IterableItemReader(iterable: [1, 2, 3, 4], name: 'foo'))
            .writer(writer(null))
            .build()
}

@Bean
@JobScope
ItemWriter writer(@Value('#{jobParameters["writerClass"]}') writerClass) {
    applicationContext.classLoader.loadClass(writerClass).newInstance()
}

With MyWriter:

class MyWriter implements ItemWriter<Integer> {

    @Override
    void write(List<? extends Integer> items) throws Exception {
        println "Write $items"
    }
}

Then running it with:

def jobExecution = launcher.run(ctx.getBean(Job), new JobParameters([
        commitInterval: new JobParameter(3),
        writerClass: new JobParameter('MyWriter'), ]))

The output is:

INFO: Executing step: [step1]
Write [1, 2, 3]
Write [4]
Feb 24, 2016 2:30:22 PM org.springframework.batch.core.launch.support.SimpleJobLauncher$1 run
INFO: Job: [SimpleJob: [name=myJob]] completed with the following parameters: [{commitInterval=3, writerClass=MyWriter}] and the following status: [COMPLETED]
Status is: COMPLETED, job execution id 0
  #1 step1 COMPLETED

Full example here.