I implemented an LRU cache using ConcurrentLinkedHashMap. On the same map, I also purge events once it reaches a certain limit, as shown below. I have a MAX_SIZE variable that should correspond to roughly 3.7 GB, and once my map reaches that limit I purge events from it.
Here is my code:
import java.util.Iterator;
import java.util.concurrent.ConcurrentMap;
import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
import com.googlecode.concurrentlinkedhashmap.EvictionListener;
// does this really equal to 3.7 GB? can anyone explain this?
public static final int MAX_SIZE = 20000000; //equates to ~3.7GB with assumption that each event is 200 bytes AVG
public static EvictionListener<String, DataObject> listener = new EvictionListener<String, DataObject>() {
    public void onEviction(String key, DataObject value) {
        deleteEvents();
    }
};

public static final ConcurrentMap<String, DataObject> holder = new ConcurrentLinkedHashMap.Builder<String, DataObject>()
        .maximumWeightedCapacity(MAX_SIZE).listener(listener).build();
private static void deleteEvents() {
    // NOTE: the original expression MAX_SIZE - (MAX_SIZE * (20 / 100)) used
    // integer division, so 20 / 100 evaluated to 0 and the threshold was
    // never below MAX_SIZE. Multiply before dividing for the intended 80%:
    int capacity = MAX_SIZE - (MAX_SIZE * 20) / 100;
    if (holder.size() >= capacity) {
        int numEventsToEvict = (MAX_SIZE * 20) / 100;
        int counter = 0;
        Iterator<String> iter = holder.keySet().iterator();
        while (iter.hasNext() && counter < numEventsToEvict) {
            String address = iter.next();
            holder.remove(address);
            System.out.println("Purging Elements: " + address);
            counter++;
        }
    }
}
// this method is called every 30 seconds from a single background thread
// to send data to our queue
public void submit() {
    if (holder.isEmpty()) {
        return;
    }
    // some other code here
    int sizeOfMsg = 0;
    Iterator<String> iter = holder.keySet().iterator();
    int allowedBytes = MAX_ALLOWED_SIZE - ALLOWED_BUFFER;
    while (iter.hasNext() && sizeOfMsg < allowedBytes) {
        String key = iter.next();
        DataObject temp = holder.get(key);
        // some code here
        holder.remove(key);
        // some code here to send data to queue
    }
}
// this holder map is used in below method to add the events into it.
// below method is being called from some other place.
public void addToHolderRequest(String key, DataObject stream) {
    holder.put(key, stream);
}
Here is the Maven dependency I am using:
<dependency>
    <groupId>com.googlecode.concurrentlinkedhashmap</groupId>
    <artifactId>concurrentlinkedhashmap-lru</artifactId>
    <version>1.4</version>
</dependency>
I am not sure whether this is the right approach. If events are 200 bytes on average, does MAX_SIZE really correspond to 3.7 GB? Is there a better way to do this? I also have a background thread that calls the deleteEvents() method every 30 seconds, and the same background thread also calls the submit method, which pulls data from the holder map and sends it to a queue.

So the idea is: add events to the holder map through addToHolderRequest; every 30 seconds the background thread calls the submit method, which sends data to our queue by iterating over this map; and once submit finishes, the same background thread calls deleteEvents() to purge elements. I am running this code in production and it does not seem to purge events properly; the size of my holder map keeps growing. My min/max heap is set to 6 GB.
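On the "is there a better way" question: if the goal is to cap the cache by estimated bytes rather than by entry count, the eviction policy can own that logic instead of a separate periodic deleteEvents() pass. A minimal JDK-only sketch of the idea (the class name, constructor parameters, and the bytes-per-entry estimate are illustrative assumptions, not the poster's API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch: a LinkedHashMap in access order gives LRU semantics, and
// removeEldestEntry() evicts automatically whenever the estimated payload
// exceeds the byte budget -- no manual cleanup method needed.
class ByteBudgetLru<K, V> extends LinkedHashMap<K, V> {
    private final long maxBytes;       // total budget, e.g. ~3.7 GB
    private final long bytesPerEntry;  // assumed average entry size

    ByteBudgetLru(long maxBytes, long bytesPerEntry) {
        super(16, 0.75f, true);        // true = access order (LRU)
        this.maxBytes = maxBytes;
        this.bytesPerEntry = bytesPerEntry;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once the budget is exceeded.
        return (long) size() * bytesPerEntry > maxBytes;
    }
}
```

LinkedHashMap is not thread-safe, so a real version would need to be wrapped in Collections.synchronizedMap(...) (or stay with ConcurrentLinkedHashMap, which supports weighted capacity via its builder); the point here is only that byte-based eviction can live inside the map rather than in a periodic cleanup method.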
Answer 0 (score: 2):