I have deployed a Spring Cloud Data Flow server on Pivotal Cloud Foundry. On the server runs a pipeline of three Spring Batch tasks, wrapped in a composed task.
When I launch an execution of this composed task, the composed-task-runner starts the first batch job execution. That first batch job connects to two different datasources: a shared metadata datasource for the Spring metadata schemas (SCDF, SCT and SB), and a business datasource for the business data. Both databases are MySQL. The execution of the first task completes fine, but when the composed-task-runner then tries to retrieve the task execution status from the task repository (the metadata datasource), it throws the following exception and stops the whole pipeline:
org.springframework.dao.DeadlockLoserDataAccessException:
PreparedStatementCallback;
SQL [SELECT TASK_EXECUTION_ID, START_TIME, END_TIME, TASK_NAME, EXIT_CODE, EXIT_MESSAGE, ERROR_MESSAGE, LAST_UPDATED, EXTERNAL_EXECUTION_ID, PARENT_EXECUTION_ID from TASK_EXECUTION where TASK_EXECUTION_ID = ?];
(conn:56675) Deadlock found when trying to get lock;
try restarting transaction
Query is: SELECT TASK_EXECUTION_ID, START_TIME, END_TIME, TASK_NAME, EXIT_CODE, EXIT_MESSAGE, ERROR_MESSAGE, LAST_UPDATED, EXTERNAL_EXECUTION_ID, PARENT_EXECUTION_ID from TASK_EXECUTION where TASK_EXECUTION_ID = ?, parameters [2];
nested exception is
java.sql.SQLTransactionRollbackException: (conn:56675) Deadlock found when
trying to get lock; try restarting transaction
Query is: SELECT TASK_EXECUTION_ID, START_TIME, END_TIME, TASK_NAME,EXIT_CODE, EXIT_MESSAGE, ERROR_MESSAGE, LAST_UPDATED, EXTERNAL_EXECUTION_ID, PARENT_EXECUTION_ID from TASK_EXECUTION where TASK_EXECUTION_ID = ?, parameters [2]
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:263)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:649)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:684)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:716)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:726)
at org.springframework.jdbc.core.JdbcTemplate.queryForObject(JdbcTemplate.java:800)
at org.springframework.cloud.task.repository.dao.JdbcTaskExecutionDao.getTaskExecution(JdbcTaskExecutionDao.java:262)
at org.springframework.cloud.task.repository.support.SimpleTaskExplorer.getTaskExecution(SimpleTaskExplorer.java:52)
at org.springframework.cloud.task.app.composedtaskrunner.TaskLauncherTasklet.waitForTaskToComplete(TaskLauncherTasklet.java:146)
at org.springframework.cloud.task.app.composedtaskrunner.TaskLauncherTasklet.execute(TaskLauncherTasklet.java:123)
at org.springframework.batch.core.step.tasklet.TaskletStep$ChunkTransactionCallback.doInTransaction(TaskletStep.java:406)
at org.springframework.batch.core.step.tasklet.TaskletStep$ChunkTransactionCallback.doInTransaction(TaskletStep.java:330)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:133)
at org.springframework.batch.core.s.
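For context, what the composed-task-runner is doing at this point is polling the shared TASK_EXECUTION table until the child task reports an end time. The snippet below is only a simplified sketch of that loop, inferred from the stack trace and the public TaskExplorer API; it is not the actual TaskLauncherTasklet source, and intervalTimeBetweenChecks stands in for the interval-time-between-checks property mentioned further down.

import org.springframework.cloud.task.repository.TaskExecution;
import org.springframework.cloud.task.repository.TaskExplorer;

// Simplified sketch (an assumption based on the stack trace, not the library source)
// of how the composed-task-runner waits for a child task to complete.
class ChildTaskWaitSketch {

    private final TaskExplorer taskExplorer;        // reads the shared TASK_EXECUTION table
    private final long intervalTimeBetweenChecks;   // interval-time-between-checks, in milliseconds

    ChildTaskWaitSketch(TaskExplorer taskExplorer, long intervalTimeBetweenChecks) {
        this.taskExplorer = taskExplorer;
        this.intervalTimeBetweenChecks = intervalTimeBetweenChecks;
    }

    void waitForChildTask(long childExecutionId) throws InterruptedException {
        TaskExecution execution = taskExplorer.getTaskExecution(childExecutionId);
        // Poll the metadata datasource until the child task records an end time.
        while (execution == null || execution.getEndTime() == null) {
            Thread.sleep(intervalTimeBetweenChecks);
            // This is the "SELECT ... FROM TASK_EXECUTION WHERE TASK_EXECUTION_ID = ?"
            // that hits the MySQL deadlock in the trace above.
            execution = taskExplorer.getTaskExecution(childExecutionId);
        }
    }
}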
My spring-cloud-task / spring-batch code that accesses the two datasources is as follows.
The BatchJobConfiguration class:
@Profile("!test")
@Configuration
@EnableBatchProcessing
public class BatchJobConfiguration {
@Autowired
private JobBuilderFactory jobBuilderFactory;
[...]
@Bean
public Step step01() {
return stepChargementFoliosBuilder().buildStepChargement();
}
@Bean
public Step step02() {
return stepChargementPretsBuilder().buildStepChargement();
}
@Bean
public Step step03() {
return stepChargementGarantiesBuilder().buildStepChargement();
}
@Bean
public Job job() {
return jobBuilderFactory.get("Spring Batch Job: chargement_donnees_SEM")
.incrementer(new JobParametersIncrementer() {
@Override
public JobParameters getNext(JobParameters parameters) {
return new JobParametersBuilder().addLong("time", System.currentTimeMillis()).toJobParameters();
}
})
.flow(step01())
.on("COMPLETED").to(step02())
.on("COMPLETED").to(step03())
.end()
.build();
}
@Primary
@Bean
public BatchConfigurer batchConfigurer(@Qualifier(JPAConfiguration.METADATA_DATASOURCE) DataSource datasource) {
return new DefaultBatchConfigurer(datasource);
}
The TaskConfiguration class:
@Profile("!test")
@Configuration
@EnableTask
public class TaskConfiguration {
@Bean
public TaskRepositoryInitializer taskRepositoryInitializer(@Qualifier(JPAConfiguration.METADATA_DATASOURCE) DataSource datasource) {
TaskRepositoryInitializer initializer = new TaskRepositoryInitializer();
initializer.setDataSource(datasource);
return initializer;
}
@Bean
public TaskConfigurer taskConfigurer(@Qualifier(JPAConfiguration.METADATA_DATASOURCE) DataSource datasource) {
return new DefaultTaskConfigurer(datasource);
}
Finally, here is the JPAConfiguration class:
@Profile("!test")
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories (
basePackages = "com.desjardins.parcourshabitation.chargerprets.repository",
entityManagerFactoryRef = JPAConfiguration.BUSINESS_ENTITYMANAGER,
transactionManagerRef = JPAConfiguration.BUSINESS_TRANSACTION_MANAGER
)
public class JPAConfiguration {
public static final String METADATA_DATASOURCE = "metadataDatasource";
public static final String BUSINESS_DATASOURCE = "businessDatasource";
public static final String BUSINESS_ENTITYMANAGER = "businessEntityManager";
public static final String BUSINESS_TRANSACTION_MANAGER = "businessTransactionManager";
@Primary
@Bean(name=METADATA_DATASOURCE)
public DataSource scdfDatasource() {
return new DatasourceBuilder("scdf-mysql").buildDatasource();
}
@Bean(name=BUSINESS_DATASOURCE)
public DataSource pretsDatasource() {
return new DatasourceBuilder("sem-mysql").buildDatasource();
}
@Bean(name=BUSINESS_ENTITYMANAGER)
public LocalContainerEntityManagerFactoryBean businessEntityManager(EntityManagerFactoryBuilder builder, @Qualifier(BUSINESS_DATASOURCE) DataSource dataSource) {
return builder
.dataSource(dataSource)
.packages("com.desjardins.parcourshabitation.chargerprets.domaine")
.build();
}
@Bean(name = BUSINESS_TRANSACTION_MANAGER)
public PlatformTransactionManager businessTransactionManager(@Qualifier(BUSINESS_ENTITYMANAGER) EntityManagerFactory entityManagerFactory) {
return new JpaTransactionManager(entityManagerFactory);
}
Versions used:
I have tried launching the composed task with different values for the interval-time-between-checks property, but that was not conclusive.
I have uploaded a GitHub repository with a minimal version of the code and instructions in the README on how to reproduce the issue: https://github.com/JLauzonG/deadlock-bug-stackoverflow
Any clue on how to solve this?
Answer (score: 0):
In each subtask's manifest, I had declared the second database instance so that it would be bound at deployment time. However, when SCDF deploys these tasks, the services defined in the manifest are ignored. Once SCDF had initially deployed them on PCF, I had to bind the second database to each subtask manually (for example with cf bind-service). If I bind both database instances to the server's environment variables, the CTR inherits them and it works; otherwise it fails, and in my opinion that is not an option.