I implemented an LRU cache in Java, and it works correctly. I use two data structures: a HashMap for fast lookup of existing elements, and a DoubleLinkedList to maintain node order. However, I'm unsure how to add efficient concurrency to my implementation. I started with locking, but I want reads to be fast and not synchronized against writes, and I'm stuck here because it seems I can't achieve that.
Can you tell me how to make my LRU implementation concurrent while avoiding an unreasonable lock over the whole cache? Here is my code:
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LRUCacheImpl implements LRUCache {

    private final Map<Integer, Node> cacheMap = new ConcurrentHashMap<>();
    private final DoubleLinkedList nodeList;
    private final int allowedCapacity;

    public LRUCacheImpl(int allowedCapacity) {
        this.allowedCapacity = allowedCapacity;
        nodeList = new DoubleLinkedListImpl(allowedCapacity);
    }

    @Override
    public Node getElement(int value) {
        Node toReturn = cacheMap.get(value);
        if (toReturn != null) {
            nodeList.moveExistingToHead(toReturn);
            toReturn = new Node(toReturn.getValue());
        } else {
            synchronized (this) {
                if (allowedCapacity == nodeList.getCurrentSize()) {
                    cacheMap.remove(nodeList.getTail().getValue());
                }
                toReturn = new Node(value);
                nodeList.addNewAsHead(toReturn);
                cacheMap.put(value, toReturn);
            }
        }
        return new Node(toReturn.getValue());
    }

    List<Node> getCacheState() {
        return nodeList.getAllElements();
    }
}
import java.util.LinkedList;
import java.util.List;

public class DoubleLinkedListImpl implements DoubleLinkedList {

    private Node head;
    private Node tail;
    private int currentSize;
    private final int allowedCapacity;

    public DoubleLinkedListImpl(int allowedCapacity) {
        this.currentSize = 0;
        this.allowedCapacity = allowedCapacity;
    }

    @Override
    public synchronized int getCurrentSize() {
        return currentSize;
    }

    @Override
    public synchronized Node getTail() {
        return tail;
    }

    @Override
    public void moveExistingToHead(Node element) {
        if (element != null && element != head) {
            synchronized (this) {
                if (element != null && element != head) {
                    Node prev = element.getPrev();
                    Node next = element.getNext();
                    prev.setNext(next);
                    if (next != null) {
                        next.setPrev(prev);
                    } else {
                        tail = prev;
                    }
                    attachAsHead(element);
                }
            }
        }
    }

    @Override
    public synchronized void addNewAsHead(Node element) {
        if (currentSize == 0) {
            head = tail = element;
            currentSize = 1;
        } else if (currentSize < allowedCapacity) {
            currentSize++;
            attachAsHead(element);
        } else {
            evictTail();
            attachAsHead(element);
        }
    }

    private synchronized void attachAsHead(Node element) {
        element.setPrev(null);
        element.setNext(head);
        head.setPrev(element);
        head = element;
    }

    @Override
    public synchronized List<Node> getAllElements() {
        List<Node> nodes = new LinkedList<>();
        Node tmp = head;
        while (tmp != null) {
            nodes.add(new Node(tmp.getValue()));
            tmp = tmp.getNext();
        }
        return nodes;
    }

    private synchronized void evictTail() {
        tail = tail.getPrev();
        tail.setNext(null);
        currentSize--;
    }
}
public class Node {
    private int value;
    private Node prev;
    private Node next;
    // getters and setters omitted
}
Answer 0 (score: 0)
Following @BenManes' link, I see that in the classic cache implementation, where we use a HashMap and a doubly linked list, it isn't possible to meet the concurrency requirement; in that scheme only a synchronized version can be implemented. For now I have a new version of my algorithm that stores WeakReferences to the nodes in the ConcurrentHashMap (@Marko Topolnik: are you sure you want to use AtomicReference? I still can't make your approach work). IMHO this lets me avoid synchronizing reads with writes when getting an existing element, because removing the tail from the list (eviction) will automatically remove the node from the hash map. The list methods still need coarse-grained synchronization. The only weakness of this solution is that we have no control over the GC, so it is possible to get stale data from the hash map...
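To make the WeakReference idea above concrete, here is a minimal, self-contained sketch (not the full cache; the names and the `int[]` stand-in for a list node are hypothetical). It shows the key behavioral point: the map entry itself is not removed when the referent is collected, so a reader must treat a cleared reference as a miss. The clearing is simulated explicitly with `Reference.clear()` so the demo does not depend on the GC actually running:

```java
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WeakValueSketch {
    public static void main(String[] args) {
        Map<Integer, WeakReference<int[]>> map = new ConcurrentHashMap<>();

        int[] node = new int[] {42};               // stands in for a list node
        WeakReference<int[]> ref = new WeakReference<>(node);
        map.put(1, ref);                           // map holds the node only weakly

        // While the list still strongly references the node, reads see it.
        System.out.println(map.get(1).get()[0]);   // prints 42

        // Simulate eviction: the list drops the node and the GC clears the ref.
        ref.clear();

        // The entry is still present, but the referent is gone: a reader
        // must check for null and treat a cleared reference as a cache miss.
        WeakReference<int[]> found = map.get(1);
        System.out.println(found != null && found.get() != null ? "live" : "stale");
        // prints "stale"
    }
}
```

Note that in a real run the GC, not the list, clears the reference, and the timing is unpredictable, which is exactly the staleness weakness mentioned above; a production version would also need a ReferenceQueue to purge cleared entries from the map.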
As I understand it, to make the LRU cache concurrent we need to change the implementation as follows (several possibilities):
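The post trails off here, but for comparison, one well-known baseline (not from the original post) is to build the LRU on `LinkedHashMap` with `accessOrder=true` and wrap it with a single lock via `Collections.synchronizedMap`. This is correct but serializes every read, which is precisely what the question wants to avoid; the class name below is a hypothetical illustration:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class SynchronizedLruCache<K, V> {
    private final Map<K, V> map;

    public SynchronizedLruCache(int capacity) {
        // accessOrder=true orders entries least-recently-accessed first,
        // and removeEldestEntry evicts automatically on insert.
        this.map = Collections.synchronizedMap(new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        });
    }

    public V get(K key)             { return map.get(key); }
    public void put(K key, V value) { map.put(key, value); }

    public static void main(String[] args) {
        SynchronizedLruCache<Integer, String> cache = new SynchronizedLruCache<>(2);
        cache.put(1, "a");
        cache.put(2, "b");
        cache.get(1);          // touch 1, so 2 becomes the eldest entry
        cache.put(3, "c");     // evicts 2
        System.out.println(cache.get(2)); // prints null
        System.out.println(cache.get(1)); // prints a
    }
}
```

Note that with `accessOrder=true` even `get` mutates the map's internal order, so the lock really is needed on reads too. Truly concurrent caches such as Ben Manes' Caffeine avoid this by recording reads in lock-free buffers and replaying them against the eviction order asynchronously.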