My project requires me to process data held in fixed-length files. Each data file contains one header line and many detail lines. The header line carries summary/consolidated information about the detail lines, such as the reporting period, the reporting employer, and total amounts. The detail lines carry per-employee information, such as the employee contribution and the contribution period. The job has to process many such data files received from different employers.
So I created a job with one step, using the following reader, writer, and other custom classes:

a. A MultiResourceItemReader reads all the files in a folder.
b. A FlatFileItemReader reads each individual file; it is the delegate of the MultiResourceItemReader.
c. I skip the first line and handle it with a LineCallbackHandler.
d. I am able to parse the header line and convert it into a Report object.
e. I use a DefaultLineMapper and a BeanWrapperFieldSetMapper to parse the detail lines and convert them into MemberRecord objects.
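To illustrate how the header line is tokenized, here is a minimal, self-contained sketch. The HeaderTokenizerDemo class and the sample field values are made up for illustration only; the field names and column ranges are the same ones used by the headerTokenizer bean in my configuration below.

import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.batch.item.file.transform.FixedLengthTokenizer;
import org.springframework.batch.item.file.transform.Range;

public class HeaderTokenizerDemo {

    public static void main(String[] args) {
        // Same field names and column ranges as the headerTokenizer bean in the job XML.
        FixedLengthTokenizer tokenizer = new FixedLengthTokenizer();
        tokenizer.setNames(new String[] { "organizationCode", "planCode", "beginDate", "endDate",
                "totalEmployerContribution", "totaEmployeeContribution", "reportingType" });
        tokenizer.setColumns(new Range[] { new Range(1, 9), new Range(10, 17), new Range(18, 25),
                new Range(26, 33), new Range(34, 48), new Range(49, 63), new Range(64, 67) });

        // A made-up 67-character header line: organization code (9), plan code (8),
        // begin/end dates as MMddyyyy (8 + 8), employer/employee contribution totals
        // (15 + 15) and reporting type (4).
        String headerLine = "000001234PLAN00010101201203312012000001234500.00000000543210.00QTRL";

        FieldSet fs = tokenizer.tokenize(headerLine);
        System.out.println(fs.readString("planCode"));      // PLAN0001
        System.out.println(fs.readString("beginDate"));     // 01012012
        System.out.println(fs.readString("reportingType")); // QTRL
    }
}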
I need help implementing the following using Spring Batch.
I would also appreciate advice on how best to implement this functionality with Spring Batch.
Job XML file:
<bean id="erLoadFolderReader" class="org.springframework.batch.item.file.MultiResourceItemReader" scope="step">
<property name="resources" value="#{jobParameters['FILE_NAME']}" />
<property name="delegate" ref="erLoadFileReader" />
<property name="saveState" value="false" />
</bean>
<bean id ="memberRecordHeaderLineHandler" class="com.htcinc.rs.batch.infrastructure.erLoadJob.MemberRecordHeaderLineHandler" />
<bean id="erLoadFileReader" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step">
<property name="saveState" value="false" />
<property name="resource" value="#{jobParameters['FILE_NAME']}" />
<property name="linesToSkip" value="1" />
<property name="skippedLinesCallback">
<bean class="com.htcinc.rs.batch.infrastructure.erLoadJob.MemberRecordHeaderLineHandler">
<property name="wcReportService" ref="wcReportService" />
<property name="names" value="empNo,planCode,startDate,endDate,totalEmprContrb,totalEmplContrb,reportType" />
<property name="headerTokenizer">
<bean class="org.springframework.batch.item.file.transform.FixedLengthTokenizer">
<property name="names" value="organizationCode,planCode,beginDate,endDate,totalEmployerContribution,totaEmployeeContribution,reportingType"></property>
<property name="columns" value="1-9,10-17,18-25,26-33,34-48,49-63,64-67" />
</bean>
</property>
</bean>
</property>
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="lineTokenizer">
<bean class="org.springframework.batch.item.file.transform.FixedLengthTokenizer">
<property name="names" value="ssn,firstName,lastName,middleName,birthDateText,genderCode,addressStartDateText,addrLine1,addrLine2,addrLine3,city,state,zip,zipPlus,wagesText,employerContributionText,employeeContributionText,recordType,startDateText,endDateText,serviceCreditDaysText,serviceCreditHoursText,jobClassCode,positionChangeDateText,hireDateText,terminationDateText,notes" />
<property name="columns" value="1-9,10-29,30-59,60-79,80-87,88-88,89-96,97-126,127-146,147-166,167-181,182-183,184-188,189-192,193-205,206-214,215-223,224-227,228-235,236-243,244-246,247-251,252-255,256-263,264-271,272-279,280-479" />
</bean>
</property>
<property name="fieldSetMapper">
<bean
class="org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper">
<property name="targetType"
value="com.htcinc.rs.domain.batch.MemberRecord" />
</bean>
</property>
</bean>
</property>
</bean>
<bean id="memberRecordItemWriter" class="com.htcinc.rs.batch.infrastructure.erLoadJob.MemberRecordItemWriter" />
<bean id="memberRecordItemProcessor" class="com.htcinc.rs.batch.infrastructure.erLoadJob.MemberRecordItemProcessor" />
<batch:job id="erLoadJob">
<batch:step id="erLoadJob_step1">
<batch:tasklet>
<batch:chunk reader="erLoadFolderReader" writer="memberRecordItemWriter" processor="memberRecordItemProcessor" commit-interval="1" />
</batch:tasklet>
<batch:listeners>
<batch:listener ref="memberRecordHeaderLineHandler"/>
</batch:listeners>
</batch:step>
</batch:job>
</beans>
MemberRecordHeaderLineHandler.java file:
private WCReportServiceDefaultImpl wcReportService;
private WCReport wcReport;
private JobExecution jobExecution;
private LineTokenizer headerTokenizer;
private String names;

public WCReportServiceDefaultImpl getWcReportService() {
    return wcReportService;
}

public void setWcReportService(WCReportServiceDefaultImpl wcReportService) {
    this.wcReportService = wcReportService;
}

public LineTokenizer getHeaderTokenizer() {
    return headerTokenizer;
}

public void setHeaderTokenizer(LineTokenizer headerTokenizer) {
    this.headerTokenizer = headerTokenizer;
}

public String getNames() {
    return names;
}

public void setNames(String names) {
    this.names = names;
}

@Override
public void handleLine(String headerLine) {
    FieldSet fs = getHeaderTokenizer().tokenize(headerLine);
    String datePattern = "MMddyyyy";
    Date defaultDate = Utility.getDefaultDate();
    try {
        wcReport = wcReportService.getWCReport(Integer.toString(fs.readInt("organizationCode")),
                fs.readString("planCode"),
                fs.readDate("beginDate", datePattern, defaultDate),
                fs.readDate("endDate", datePattern, defaultDate),
                fs.readString("reportingType"));
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    if (jobExecution != null && wcReport != null) {
        ExecutionContext jobContext = jobExecution.getExecutionContext();
        jobContext.put("WCREPORT_OBJECT", wcReport);
    }
}

@Override
public void beforeStep(StepExecution stepExecution) {
    this.jobExecution = stepExecution.getJobExecution();
}
MemberRecordItemWriter.java file:
private int iteration = 0;
private JobExecution jobExecution;

@Override
public void write(List<? extends MemberRecord> records) throws Exception {
    System.out.println("Iteration-" + iteration++);
    Object wcReport = jobExecution.getExecutionContext().get("WCREPORT_OBJECT");
    for (MemberRecord mr : records) {
        //System.out.println(header);
        System.out.println(mr.getLastName());
    }
}

@BeforeStep
public void beforeStep(StepExecution stepExecution) {
    this.jobExecution = stepExecution.getJobExecution();
}
Thanks,
Vijay
Answer 0 (score: 1)
When handling files arriving from FTP, you need to combine Spring Integration with Spring Batch to build an event-driven system.
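Here is a minimal sketch of the usual pattern, under these assumptions: spring-batch-integration and spring-integration-file are on the classpath, and the FileToJobLaunchRequestTransformer class name and the received.at parameter are illustrative (only erLoadJob and FILE_NAME come from your configuration). A file inbound channel adapter polls the incoming directory, each new file is transformed into a JobLaunchRequest, and a JobLaunchingGateway (or JobLaunchingMessageHandler) launches the job.

import java.io.File;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.integration.launch.JobLaunchRequest;
import org.springframework.integration.annotation.Transformer;

public class FileToJobLaunchRequestTransformer {

    private Job job;                                // inject the erLoadJob bean
    private String fileParameterName = "FILE_NAME"; // same jobParameters key the readers use

    public void setJob(Job job) {
        this.job = job;
    }

    public void setFileParameterName(String fileParameterName) {
        this.fileParameterName = fileParameterName;
    }

    @Transformer
    public JobLaunchRequest toRequest(File file) {
        JobParameters parameters = new JobParametersBuilder()
                .addString(fileParameterName, file.getAbsolutePath())
                // a changing parameter, so a re-delivered file creates a new job instance
                .addLong("received.at", System.currentTimeMillis())
                .toJobParameters();
        return new JobLaunchRequest(job, parameters);
    }
}

Wire this bean between an <int-file:inbound-channel-adapter> (or an <int-ftp:inbound-channel-adapter> if you read straight from the FTP server) and the job-launching endpoint; the arrival of each file then becomes the event that starts erLoadJob, with no scheduler needed.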