Second step not inserting data

Time: 2015-11-17 07:22:20

Tags: spring spring-batch

In the code below, I have defined two steps for a job, where each step reads data from a different csv. The data from the first step is inserted into the database, but the second step is not inserting any data into the database. Can you help point out the error?


Here is what I see in the console: there is a parse error on the first csv because line 13834 and the lines after it contain no records. However, since the records from the first csv were successfully inserted into the database, I guess this parse error can be ignored. I want to know whether the reader, writer, step, and job have been correctly defined for the second csv.

Configuration:

@Configuration
@EnableBatchProcessing
public class MacroSimulatorConfiguration {

@Autowired
private JobBuilderFactory jobs;

@Autowired
private StepBuilderFactory steps;

@Bean
public ItemReader<Consumption> reader() {
    FlatFileItemReader<Consumption> reader = new FlatFileItemReader<Consumption>();
    reader.setResource(new ClassPathResource("datacons.csv"));
    reader.setLinesToSkip(1);
    reader.setLineMapper(new DefaultLineMapper<Consumption>() {
        {
            setLineTokenizer(new DelimitedLineTokenizer() {
                {
                    setNames(new String[] { "tradeCommodity", "hou", "region", "dir", "purchValue", "value" });
                }
            });
            setFieldSetMapper(new BeanWrapperFieldSetMapper<Consumption>() {
                {
                    setTargetType(Consumption.class);
                }
            });
        }
    });
    return reader;
}

@Bean
public ItemReader<Gdp> reader1() {
    FlatFileItemReader<Gdp> reader1 = new FlatFileItemReader<Gdp>();
    reader1.setResource(new ClassPathResource("datagdp.csv"));
    reader1.setLinesToSkip(1);
    reader1.setLineMapper(new DefaultLineMapper<Gdp>() {
        {
            setLineTokenizer(new DelimitedLineTokenizer() {
                {
                    setNames(new String[] { "region", "gdpExpend", "value" });
                }
            });
            setFieldSetMapper(new BeanWrapperFieldSetMapper<Gdp>() {
                {
                    setTargetType(Gdp.class);
                }
            });
        }
    });
    return reader1;
}

@Bean
public ItemWriter<Consumption> writer(DataSource dataSource) {
    JdbcBatchItemWriter<Consumption> writer = new JdbcBatchItemWriter<Consumption>();
    writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<Consumption>());
    writer.setSql("INSERT INTO INPUT_CONSUMPTION (TRAD_COMM, HOU, SUB_REGION, INCOME_GROUP, CITIZEN_STATUS, REGION, DIR, PURCHVALUE, VAL) "
            + "VALUES (:tradeCommodity, :hou, :subRegion, :incomeGroup, :citizenStatus, :region, :dir, :purchValue, :value)");
    writer.setDataSource(dataSource);
    return writer;
}

@Bean
public ItemWriter<Gdp> writer1(DataSource dataSource) {
    JdbcBatchItemWriter<Gdp> writer1 = new JdbcBatchItemWriter<Gdp>();
    writer1.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<Gdp>());
    writer1.setSql("INSERT INTO input_gdp (REGION, GDPEXPEND, VAL) " + "VALUES (:region, :gdpExpend, :value)");
    writer1.setDataSource(dataSource);
    return writer1;
}

@Bean
public Job importJob(Step s1, Step s2) {
    return jobs.get("importJob").incrementer(new RunIdIncrementer()).start(s1).next(s2).build();
}

@Bean(name = "s1")
public Step step1(ItemReader<Consumption> reader, ItemWriter<Consumption> writer) {
    return steps.get("step1").<Consumption, Consumption>chunk(100).reader(reader).writer(writer).build();
}

@Bean(name = "s2")
public Step step2(ItemReader<Gdp> reader1, ItemWriter<Gdp> writer1) {
    return steps.get("step2").<Gdp, Gdp>chunk(1).reader(reader1).writer(writer1).build();
}

1 answer:

Answer 0 (score: 2)

From what I can see in your console output, your step2 is not being executed at all.

This is normal Spring Batch behavior: if a "non-skippable" error is encountered in a step, the step terminates with status FAILED, and so does the job, unless you explicitly override the default behavior and declare a transition to the next step.
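As a sketch of that override (assuming Spring Batch's flow API and reusing the s1/s2 step beans from the question's configuration), the job definition can declare the transition explicitly:

```java
// Hypothetical sketch: route the job to s2 even when s1 fails.
// Uses Spring Batch's conditional flow API (.on(...) / .to(...) / .from(...)).
@Bean
public Job importJob(Step s1, Step s2) {
    return jobs.get("importJob")
            .incrementer(new RunIdIncrementer())
            .start(s1)
            .on("FAILED").to(s2)      // run s2 even if s1 ends with status FAILED
            .from(s1).on("*").to(s2)  // and on any other exit status as well
            .end()
            .build();
}
```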

You may also be wondering why records were still inserted into the database. This is because Spring Batch commits records according to the commit-interval (the chunk size) you defined: every complete chunk read before the faulty record is committed, so all records up to the last full chunk boundary made it into the database before the error.
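The commit behavior itself is easy to see outside Spring Batch. The following self-contained simulation (plain Java, no Spring involved; the class and method names are made up for illustration) "commits" records in fixed-size chunks and aborts when it hits a bad record, mirroring why everything up to the last full chunk boundary gets persisted:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkCommitDemo {

    /**
     * Simulates chunk-oriented processing: records are read one by one,
     * buffered, and "committed" whenever the buffer reaches chunkSize.
     * If a record index equals failAt (1-based), reading aborts, so the
     * partially filled buffer is never committed.
     * Returns the number of committed records.
     */
    public static int commitUpTo(int totalRecords, int failAt, int chunkSize) {
        List<Integer> buffer = new ArrayList<>();
        int committed = 0;
        for (int record = 1; record <= totalRecords; record++) {
            if (record == failAt) {
                return committed; // step fails; current chunk is rolled back
            }
            buffer.add(record);
            if (buffer.size() == chunkSize) {
                committed += buffer.size(); // chunk boundary: transaction commits
                buffer.clear();
            }
        }
        committed += buffer.size(); // final partial chunk (no-failure case)
        return committed;
    }

    public static void main(String[] args) {
        // A parse error at record 13834 with chunk size 100 leaves
        // 13800 records committed (138 full chunks).
        System.out.println(commitUpTo(20000, 13834, 100));
    }
}
```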

So, here are 3 solutions:

  • Fix the parse errors in the file beforehand.
  • Add a skippable exception class (FlatFileParseException, or simply java.lang.Exception). This tells Spring Batch to ignore the error and continue reading the file.
  • Explicitly declare a transition with .on("FAILED") between the first and the second step to start the second step even if the first one fails. The first file will only be read up to the first error, and then the second file will be read.
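For the second option, a fault-tolerant step can be sketched like this (assuming Spring Batch's faultTolerant() step builder API; the skip limit value is an arbitrary illustration, and the bean wiring mirrors the question's step1):

```java
// Hypothetical sketch: make step1 skip csv lines that fail to parse
// instead of failing the whole step.
@Bean(name = "s1")
public Step step1(ItemReader<Consumption> reader, ItemWriter<Consumption> writer) {
    return steps.get("step1")
            .<Consumption, Consumption>chunk(100)
            .reader(reader)
            .writer(writer)
            .faultTolerant()
            .skip(FlatFileParseException.class) // skip unparseable csv lines
            .skipLimit(1000)                    // arbitrary upper bound on skips
            .build();
}
```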