I want to create an efficient LRU cache implementation. I found that the most convenient way is to use a LinkedHashMap, but unfortunately it is quite slow if many threads are using the cache. My implementation is here:
/**
* Class provides API for FixedSizeCache.
* Its inheritors represent classes
* with concrete strategies
* for choosing elements to delete
* in case of cache overflow. All inheritors
* must implement {@link #getSize(K, V)}.
*/
public abstract class FixedSizeCache <K, V> implements ICache <K, V> {
/**
* Current cache size.
*/
private int currentSize;
/**
* Maximum allowable cache size.
*/
private int maxSize;
/**
* Number of {@link #get(K)} queries for which appropriate {@code value} was found.
*/
private int keysFound;
/**
* Number of {@link #get(K)} queries for which appropriate {@code value} was not found.
*/
private int keysMissed;
/**
* Number of {@code key-value} associations that were deleted from the cache
* because of cache overflow.
*/
private int erasedCount;
/**
* The backing data structure. LinkedHashMap provides
* a convenient way to implement both cache types,
* LRU and FIFO: depending on its constructor parameters
* it behaves as either a FIFO or an LRU map.
*/
private LinkedHashMap <K, V> entries;
/**
* If the {@code type} parameter is {@code true}
* the LinkedHashMap uses access order (LRU);
* otherwise it uses insertion order (FIFO).
*/
public FixedSizeCache(int maxSize, boolean type) {
if (maxSize <= 0) {
throw new IllegalArgumentException("int maxSize parameter must be greater than 0");
}
this.maxSize = maxSize;
this.entries = new LinkedHashMap<K, V> (0, 0.75f, type);
}
/**
* Deletes {@code key-value} associations
* until the current cache size {@link #currentSize} is
* less than or equal to the maximum allowable
* cache size {@link #maxSize}.
*/
private void relaxSize() {
while (currentSize > maxSize) {
// The strategy for choosing entry with the lowest precedence
// depends on {@code type} variable that was used to create {@link #entries} variable.
// If it was created with constructor LinkedHashMap(int size,double loadfactor, boolean type)
// with {@code type} equaled to {@code true} then variable {@link #entries} represents
// LRU LinkedHashMap and iterator of its entrySet will return elements in order
// from least recently used to the most recently used.
// Otherwise, if {@code type} equaled to {@code false} then {@link #entries} represents
// FIFO LinkedHashMap and iterator will return its entrySet elements in FIFO order -
// from oldest in the cache to the most recently added.
Iterator<Map.Entry<K, V>> iterator = entries.entrySet().iterator();
// iterator().next() never returns null; it throws NoSuchElementException
// on an empty map, so check hasNext() instead of checking for null.
if (!iterator.hasNext()) {
throw new IllegalStateException("Implemented method int getSize(K key, V value) " +
"returns different results for the same arguments.");
}
Map.Entry<K, V> entryToDelete = iterator.next();
entries.remove(entryToDelete.getKey());
currentSize -= getAssociationSize(entryToDelete.getKey(), entryToDelete.getValue());
erasedCount++;
}
if (currentSize < 0) {
throw new IllegalStateException("Implemented method int getSize(K key, V value) " +
"returns different results for the same arguments.");
}
}
/**
* All inheritors must implement this method,
* which evaluates the weight of a key-value association.
* The sum of the weights of all key-value associations in the cache
* equals {@link #currentSize}.
* The developer must ensure that the
* implementation satisfies two conditions:
* <br>1) the method always returns non-negative integers;
* <br>2) for every two pairs {@code key-value} and {@code key_1-value_1},
* if {@code key.equals(key_1)} and {@code value.equals(value_1)} then
* {@code getSize(key, value)==getSize(key_1, value_1)}.
* <br>Otherwise the cache can work incorrectly.
*/
protected abstract int getSize(K key, V value);
/**
* Validates that the implementation of {@link #getSize(K, V)}
* does not return negative values.
*/
private int getAssociationSize(K key, V value) {
int entrySize = getSize(key, value);
if (entrySize < 0 ) {
throw new IllegalStateException("int getSize(K key, V value) method implementation is invalid. It returned negative value.");
}
return entrySize;
}
/**
* Returns the {@code value} corresponding to {@code key} or
* {@code null} if {@code key} is not present in the cache.
* Increases {@link #keysFound} if finds a corresponding {@code value}
* or increases {@link #keysMissed} otherwise.
*/
public synchronized final V get(K key) {
if (key == null) {
throw new NullPointerException("K key is null");
}
V value = entries.get(key);
if (value != null) {
keysFound++;
return value;
}
keysMissed++;
return value;
}
/**
* Removes the {@code key-value} association, if any, with the
* given {@code key}; returns the {@code value} with which it
* was associated, or {@code null}.
*/
public synchronized final V remove(K key) {
if (key == null) {
throw new NullPointerException("K key is null");
}
V value = entries.remove(key);
// if the value was present in the cache, decrease
// the current cache size
if (value != null) {
currentSize -= getAssociationSize(key, value);
}
return value;
}
/**
* Adds or replaces a {@code key-value} association.
* Returns the old {@code value} if the
* {@code key} was present; otherwise returns {@code null}.
* If after insertion of a {@code key-value} association
* to cache its size becomes greater than
* maximum allowable cache size then it calls {@link #relaxSize()} method which
* releases needed free space.
*/
public synchronized final V put(K key, V value) {
if (key == null || value == null) {
throw new NullPointerException("K key is null or V value is null");
}
currentSize += getAssociationSize(key, value);
value = entries.put(key, value);
// if the key was already present, subtract the replaced value's size
if (value != null) {
currentSize -= getAssociationSize(key, value);
}
// if cache size with new entry is greater
// than maximum allowable cache size
// then get some free space
if (currentSize > maxSize) {
relaxSize();
}
return value;
}
/**
* Returns current size of cache.
*/
public synchronized int currentSize() {
return currentSize;
}
/**
* Returns maximum allowable cache size.
*/
public synchronized int maxSize() {
return maxSize;
}
/**
* Returns number of {@code key-value} associations that were deleted
* because of cache overflow.
*/
public synchronized int erasedCount() {
return erasedCount;
}
/**
* Number of {@link #get(K)} queries for which appropriate {@code value} was found.
*/
public synchronized int keysFoundCount() {
return keysFound;
}
/**
* Number of {@link #get(K)} queries for which appropriate {@code value} was not found.
*/
public synchronized int keysMissedCount() {
return keysMissed;
}
/**
* Removes all {@code key-value} associations
* from the cache and resets {@link #currentSize},
* {@link #keysFound}, {@link #keysMissed} and {@link #erasedCount} to zero.
*/
public synchronized void clear() {
entries.clear();
currentSize = 0;
keysMissed = 0;
keysFound = 0;
erasedCount = 0;
}
/**
* Returns a copy of {@link #entries}
* that has the same content.
*/
public synchronized LinkedHashMap<K, V> getCopy() {
return new LinkedHashMap<K, V> (entries);
}
}
This implementation is slow (because of the synchronization) if we have many threads trying to call, say, the get() method. Is there a better way?
Answer 0 (score: 5)
I don't know whether this is useful, but if you can replace your LinkedHashMap with a ConcurrentHashMap then you'll improve your throughput - a ConcurrentHashMap uses sharding to permit multiple simultaneous readers and writers. It is also thread-safe, so you won't need to synchronize your readers and writers.
Barring that, replace the synchronized keyword with a ReadWriteLock. This will allow multiple simultaneous readers.
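A minimal sketch of the ReadWriteLock suggestion (the class name RwLockFifoCache is made up for illustration). One caveat worth noting: get() on an access-order (LRU) LinkedHashMap mutates the map internally, so a shared read lock is only safe when the map uses insertion order (FIFO):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockFifoCache<K, V> {
    private final Map<K, V> entries;
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public RwLockFifoCache(final int maxSize) {
        // accessOrder = false: FIFO eviction, so get() does not reorder entries
        // and is safe to call under a shared read lock
        this.entries = new LinkedHashMap<K, V>(16, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    public V get(K key) {
        lock.readLock().lock();   // many readers may hold this at once
        try {
            return entries.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    public V put(K key, V value) {
        lock.writeLock().lock();  // writers are exclusive
        try {
            return entries.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

For true LRU order the read lock no longer helps, since every get() is then a write to the map's internal linked list.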
Answer 1 (score: 4)
Try not to reimplement what is already available: Guava Caches. It has almost all the features you need: size-based eviction, concurrency, weighting. If it fits your needs, use it. If not, try implementing your own, but always evaluate first (in my opinion). Just an advice.
Answer 2 (score: 1)
You need to run a performance test like this one
Map<Object, Object> map = Collections.synchronizedMap(new LinkedHashMap<Object, Object>(16, 0.7f, true) {
    @Override
    protected boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
        return size() > 1000;
    }
});
Integer[] values = new Integer[10000];
for (int i = 0; i < values.length; i++)
    values[i] = i;
long start = System.nanoTime();
for (int i = 0; i < 1000; i++) {
    for (int j = 0; j < values.length; j++) {
        map.get(values[j]);
        map.get(values[j / 2]);
        map.get(values[j / 3]);
        map.get(values[j / 4]);
        map.put(values[j], values[j]);
    }
}
long time = System.nanoTime() - start;
long rate = (5 * values.length * 1000) * 1000000000L / time;
System.out.printf("Performed get/put operations at a rate of %,d per second%n", rate);
On my 2.5 GHz i5 laptop this prints
Performed get/put operations at a rate of 27,170,035 per second
How many operations per second do you need?
Answer 3 (score: 1)
As already mentioned, the main cause of trouble is updating the shared data structure in the LRU algorithm. To overcome this you can use partitioning, or use another eviction algorithm than LRU. There are modern algorithms that perform better than LRU. See the comparison on that topic on the cache2k benchmarks page.
The cache2k eviction implementations CLOCK and CLOCK-Pro have full read concurrency without locking.
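To make the CLOCK idea concrete, here is a minimal single-threaded sketch (the class name ClockCache is made up; this is an illustration of the algorithm, not cache2k's actual implementation). Each entry carries a reference bit; a hit only sets the bit, and eviction sweeps a clock hand past referenced entries, clearing their bits, until it finds an unreferenced victim:

```java
import java.util.HashMap;
import java.util.Map;

public class ClockCache<K, V> {
    private static final class Slot<K, V> {
        final K key;
        V value;
        boolean referenced;
        Slot(K key, V value) { this.key = key; this.value = value; }
    }

    private final Slot<K, V>[] slots;                       // circular buffer
    private final Map<K, Integer> index = new HashMap<>();  // key -> slot position
    private int hand;                                       // clock hand position

    @SuppressWarnings("unchecked")
    public ClockCache(int capacity) {
        slots = new Slot[capacity];
    }

    public V get(K key) {
        Integer i = index.get(key);
        if (i == null) return null;
        slots[i].referenced = true;  // a hit only sets a bit - no reordering,
        return slots[i].value;       // which is why reads need no lock in principle
    }

    public void put(K key, V value) {
        Integer i = index.get(key);
        if (i != null) {             // key already cached: update in place
            slots[i].value = value;
            slots[i].referenced = true;
            return;
        }
        // advance the hand, clearing reference bits, until an
        // unreferenced slot (or an empty one) is found
        while (slots[hand] != null && slots[hand].referenced) {
            slots[hand].referenced = false;
            hand = (hand + 1) % slots.length;
        }
        if (slots[hand] != null) {
            index.remove(slots[hand].key);  // evict the victim
        }
        slots[hand] = new Slot<>(key, value);
        index.put(key, hand);
        hand = (hand + 1) % slots.length;
    }
}
```

CLOCK approximates LRU with one bit per entry, which is exactly what makes lock-free reads feasible in implementations like cache2k's.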
Answer 4 (score: 0)
An LRU scheme inherently involves exclusive modification of a shared structure. So the contention is a given and there is not much you can do about it.
If you don't need strict LRU and can tolerate some inconsistency in the eviction policy, things look up quite a bit. Your entries (value wrappers) need some usage statistics, and you need some expiration policy based on those usage statistics.
Then you can keep an LRU-like structure based on a ConcurrentSkipListMap (i.e. you can think of it as a database index); when the cache is about to overflow, use that index and expire elements based on it. You will need some double-checking and so on, but it is doable.
Updating the index is not free, but it scales. Keep in mind that ConcurrentSkipListMap.size() is an expensive O(n) operation, so you should not rely on it. The implementation is not hard, but not trivial either, and unless you have enough contention (cores), synchronized(LHM) is probably the easiest approach.