I am using Hazelcast 2.0.1 to update data frequently (roughly every 2 minutes) by first deleting entries and then reloading them from the database. At some point, however, one of the threads locks a key, which blocks the delete operation and throws an exception (java.util.ConcurrentModificationException: Another thread holds a lock for the key: abc@gmail.com). Please help me update my map in Hazelcast.
My code is below.
DeltaParallelizer
def customerDetails = dataOperations.getDistributedStore(DataStructures.customer_project.name()).keySet()
ExecutorService service = Hazelcast.getExecutorService()
try {
    customerDetails?.each { customerEmail ->
        log.info String.format('Creating delta task for customer: %s', customerEmail)
        def dTask = new DistributedTask(new EagerDeltaTask(customerEmail))
        service.submit(dTask)
    }
    customerDetails?.each { customerEmail ->
        log.info String.format('Creating customer aggregation task for %s', customerEmail)
        def task = new DistributedTask(new EagerCustomerAggregationTask(customerEmail))
        service.submit(task)
    }
}
catch (Exception e) {
    e.printStackTrace()
}
EagerDeltaTask
class EagerDeltaTask implements Callable, Serializable {
    private final def emailId

    EagerDeltaTask(email) {
        emailId = email
    }

    @Override
    public Object call() throws Exception {
        log.info(String.format('Eagerly computing delta for %s', emailId))
        def dataOperations = new DataOperator()
        def tx = Hazelcast.getTransaction()
        tx.begin()
        try {
            deleteAll(dataOperations)
            loadAll(dataOperations)
            tx.commit()
        }
        catch (Exception e) {
            tx.rollback()
            log.error(String.format('Delta computation failed while loading data for customer: %s', emailId), e)
        }
        return null
    }

    private void deleteAll(dataOperations) {
        log.info String.format('Deleting entries for customer %s', emailId)
        def projects = dataOperations.getDistributedStore(DataStructures.customer_project.name()).get(emailId)
        projects?.each { project ->
            log.info String.format('Deleting entries for project %s', project[DataConstants.PROJECT_NUM.name()])
            def srs = dataOperations.srs(project[DataConstants.PROJECT_NUM.name()])?.collect { it[DataConstants.SR_NUM.name()] }
            def activitiesStore = dataOperations.getDistributedStore(DataStructures.sr_activities.name())
            srs?.each { sr ->
                activitiesStore.remove(sr)
            }
            dataOperations.getDistributedStore(DataStructures.project_sr_aggregation.name()).remove(project[DataConstants.PROJECT_NUM.name()])
        }
        dataOperations.getDistributedStore(DataStructures.customer_project.name()).remove(emailId)
    }

    private void loadAll(dataOperations) {
        log.info(String.format('Loading entries for customer %s', emailId))
        def projects = dataOperations.projects(emailId)
        projects?.each { project ->
            log.info String.format('Loading entries for project %s', project[DataConstants.PROJECT_NUM.name()])
            def srs = dataOperations.srs(project[DataConstants.PROJECT_NUM.name()])
            srs?.each { sr ->
                dataOperations.activities(sr[DataConstants.SR_NUM.name()])
            }
        }
    }
}
DataOperator
class DataOperator {
    def getDistributedStore(String name) {
        Hazelcast.getMap(name)
    }
}
The exception is thrown inside deleteAll, in the srs loop, so only some of the map contents get deleted; new data is then loaded only for the maps whose contents were deleted, while the remaining maps still hold stale data. As a result, I am not getting updated data into the Hazelcast maps. Please suggest how I can get updated data into the Hazelcast maps. Can the Hazelcast.getTransaction client also be used for this purpose?
Note: a customer can have multiple project_nums, one project_num can be shared by multiple customers, and one project_num can have multiple SR_NUMs.
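The exception in the question is Hazelcast's per-key locking at work: while another thread holds the lock on a key, an operation that would modify that key is rejected. The following plain-Java sketch (not Hazelcast's actual implementation; `PerKeyLockedMap` and its classes are illustrative stand-ins) shows the same failure mode:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentModificationException;
import java.util.concurrent.locks.ReentrantLock;

// Minimal illustration of per-key locking: a remove attempted while another
// thread holds the key's lock fails, roughly matching the Hazelcast 2.x
// exception message from the question.
class PerKeyLockedMap<K, V> {
    private final Map<K, V> data = new ConcurrentHashMap<>();
    private final Map<K, ReentrantLock> locks = new ConcurrentHashMap<>();

    ReentrantLock lockFor(K key) {
        return locks.computeIfAbsent(key, k -> new ReentrantLock());
    }

    void put(K key, V value) { data.put(key, value); }

    V remove(K key) {
        ReentrantLock lock = lockFor(key);
        // Refuse the remove if some other thread currently holds the key's lock.
        if (lock.isLocked() && !lock.isHeldByCurrentThread()) {
            throw new ConcurrentModificationException(
                "Another thread holds a lock for the key: " + key);
        }
        return data.remove(key);
    }
}

public class LockDemo {
    public static void main(String[] args) throws InterruptedException {
        PerKeyLockedMap<String, String> map = new PerKeyLockedMap<>();
        map.put("abc@gmail.com", "customer-data");

        // Another thread acquires the per-key lock and holds it for a while.
        ReentrantLock lock = map.lockFor("abc@gmail.com");
        Thread holder = new Thread(() -> {
            lock.lock();
            try { Thread.sleep(500); } catch (InterruptedException ignored) {}
            lock.unlock();
        });
        holder.start();
        Thread.sleep(100); // give the holder time to acquire the lock

        try {
            map.remove("abc@gmail.com");
            System.out.println("removed");
        } catch (ConcurrentModificationException e) {
            System.out.println("blocked: " + e.getMessage());
        }
        holder.join();
    }
}
```

This is why a delete-then-reload cycle racing against readers that lock keys can abort partway through, leaving the maps half updated.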
Answer 0 (score: 3)
I solved my problem using a Hazelcast eviction policy: I set <time-to-live-seconds>300</time-to-live-seconds>, which clears the map contents every 5 minutes, and whenever the UI requests one of those maps, its contents are reloaded through the loader.
Below is one of the Hazelcast map configurations:
...
<map name="customer_project">
    <map-store enabled="true">
        <class-name>com.abc.arena.datagrid.loader.CustomerProjectData</class-name>
    </map-store>
    <time-to-live-seconds>300</time-to-live-seconds>
</map>
...
The CustomerProjectData loader class simply loads data from the database into the map, so I no longer need the DeltaParallelizer or EagerDeltaTask classes.
Different approaches are welcome too :)
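Conceptually, the time-to-live plus loader combination replaces explicit delete-then-reload with expiry and read-through loading. A minimal plain-Java sketch of that idea (this is not Hazelcast's API; the `loader` function stands in for the DB-backed CustomerProjectData MapStore):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Read-through cache with per-entry time-to-live: entries older than
// ttlMillis are treated as absent and reloaded via the loader, mirroring
// the effect of <time-to-live-seconds> plus a map-store in Hazelcast.
class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long loadedAt;
        Entry(V value, long loadedAt) { this.value = value; this.loadedAt = loadedAt; }
    }

    private final Map<K, Entry<V>> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader;   // stands in for the DB-backed loader
    private final long ttlMillis;

    TtlCache(Function<K, V> loader, long ttlMillis) {
        this.loader = loader;
        this.ttlMillis = ttlMillis;
    }

    V get(K key) {
        Entry<V> e = store.get(key);
        if (e == null || System.currentTimeMillis() - e.loadedAt > ttlMillis) {
            V fresh = loader.apply(key);   // reload from the database
            store.put(key, new Entry<>(fresh, System.currentTimeMillis()));
            return fresh;
        }
        return e.value;
    }
}
```

Stale data is never partially deleted here; each entry either serves its cached value or is atomically replaced on the next read after expiry, which is why this avoids the half-updated state the question describes.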