So I think I have this genius idea to solve a very specific problem, but I can't shake one last potential thread-safety issue. I'm wondering whether anyone has an idea to solve it.
The problem:
A huge number of threads need to read from a rarely updated HashMap. The trouble is that even in ConcurrentHashMap, i.e. the thread-safe version, a read can still touch a mutex, because writes still lock bins (i.e. sections of the map).
The idea:
Keep two HashMaps hidden behind one facade: one that threads read from without any synchronization, and one that threads write to, synchronized of course, and every so often, flip the two.
The obvious caveat is that the map is only eventually consistent, but let's assume that's good enough for its intended purpose.
The problem that arises, though, is that this still leaves a race condition open, even when using something like an AtomicInteger, because right at the moment the flip happens I can't be sure that no reader has just entered... the race is between the readAllowed check in startRead() and the readAllowed.set(false) at the start of flip().
Obviously ConcurrentHashMap is a perfectly good class for solving this; I just want to see whether I can push the idea further.
Anyone have any ideas?
Here is the full code of the class. (Not fully debugged/tested, but you get the idea...)
package org.nectarframework.base.tools;
import java.util.Collection;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
/**
*
* This map is intended to be both thread safe, and have (mostly) non mutex'd
* reads.
*
* HOWEVER, if you insert something into this map, and immediately try to read
* the same key from the map, it probably won't give you the result you expect.
*
* The idea is that this map is in fact 2 maps, one that handles writes, the
* other reads, and every so often the two maps switch places.
*
* As a result, this map will be eventually consistent, and while writes are
* still synchronized, reads are not.
*
* This map can be very effective if handling a massive number of reads per unit
* time vs a small number of writes per unit time, especially in a massively
* multithreaded use case.
*
* This class isn't such a good idea because it's possible that between
* readAllowed.get() and readCounter.increment(), the flip() happens,
* potentially sending one or more threads on the Map that flip() is about to
* update. The solution would be an
* AtomicInteger.compareGreaterThanAndIncrement(), but that doesn't exist.
*
*
* @author schuttek
*
*/
public class DoubleBufferHashMap<K, V> implements Map<K, V> {
private Map<K, V> readMap = new HashMap<>();
private Map<K, V> writeMap = new HashMap<>();
private LinkedList<Triple<Operation, Object, V>> operationList = new LinkedList<>();
private AtomicBoolean readAllowed = new AtomicBoolean(true);
private AtomicInteger readCounter = new AtomicInteger(0);
private long lastFlipTime = System.currentTimeMillis();
private long flipTimer = 3000; // 3 seconds
private enum Operation {
Put, Delete;
}
@Override
public int size() {
startRead();
RuntimeException rethrow = null;
int n = 0;
try {
n = readMap.size();
} catch (RuntimeException t) {
rethrow = t;
}
endRead();
if (rethrow != null) {
throw rethrow;
}
return n;
}
@Override
public boolean isEmpty() {
startRead();
RuntimeException rethrow = null;
boolean b = false;
try {
b = readMap.isEmpty();
} catch (RuntimeException t) {
rethrow = t;
}
endRead();
if (rethrow != null) {
throw rethrow;
}
return b;
}
@Override
public boolean containsKey(Object key) {
startRead();
RuntimeException rethrow = null;
boolean b = false;
try {
b = readMap.containsKey(key);
} catch (RuntimeException t) {
rethrow = t;
}
endRead();
if (rethrow != null) {
throw rethrow;
}
return b;
}
@Override
public boolean containsValue(Object value) {
startRead();
RuntimeException rethrow = null;
boolean b = false;
try {
b = readMap.containsValue(value);
} catch (RuntimeException t) {
rethrow = t;
}
endRead();
if (rethrow != null) {
throw rethrow;
}
return b;
}
@Override
public V get(Object key) {
startRead();
RuntimeException rethrow = null;
V v = null;
try {
v = readMap.get(key);
} catch (RuntimeException t) {
rethrow = t;
}
endRead();
if (rethrow != null) {
throw rethrow;
}
return v;
}
@Override
public synchronized V put(K key, V value) {
operationList.add(new Triple<>(Operation.Put, key, value));
writeMap.put(key, value);
checkFlipTimer();
return value;
}
@Override
public synchronized V remove(Object key) {
// Not entirely sure if we should return the value from the read map or
// the write map...
operationList.add(new Triple<>(Operation.Delete, key, null));
V v = writeMap.remove(key);
// endRead() was called here by mistake: remove() never called startRead(),
// so decrementing readCounter would corrupt it. Check the flip timer instead.
checkFlipTimer();
return v;
}
@Override
public synchronized void putAll(Map<? extends K, ? extends V> m) {
for (K k : m.keySet()) {
V v = m.get(k);
operationList.add(new Triple<>(Operation.Put, k, v));
writeMap.put(k, v);
}
checkFlipTimer();
}
@Override
public synchronized void clear() {
writeMap.clear();
// Also drop pending operations, otherwise they would re-apply the
// cleared entries on the next flip. Note that entries already visible
// in the read map survive until they are overwritten; a complete fix
// would record a Clear operation in operationList.
operationList.clear();
checkFlipTimer();
}
@Override
public Set<K> keySet() {
startRead();
RuntimeException rethrow = null;
Set<K> sk = null;
try {
sk = readMap.keySet();
} catch (RuntimeException t) {
rethrow = t;
}
endRead();
if (rethrow != null) {
throw rethrow;
}
return sk;
}
@Override
public Collection<V> values() {
startRead();
RuntimeException rethrow = null;
Collection<V> cv = null;
try {
cv = readMap.values();
} catch (RuntimeException t) {
rethrow = t;
}
endRead();
if (rethrow != null) {
throw rethrow;
}
return cv;
}
@Override
public Set<java.util.Map.Entry<K, V>> entrySet() {
startRead();
RuntimeException rethrow = null;
Set<java.util.Map.Entry<K, V>> se = null;
try {
se = readMap.entrySet();
} catch (RuntimeException t) {
rethrow = t;
}
endRead();
if (rethrow != null) {
throw rethrow;
}
return se;
}
private void checkFlipTimer() {
long now = System.currentTimeMillis();
if (this.flipTimer > 0 && now > this.lastFlipTime + this.flipTimer) {
flip();
this.lastFlipTime = now;
}
}
/**
* Flips the two maps, and updates the map that was being read from to the
* latest state.
*/
@SuppressWarnings("unchecked")
private synchronized void flip() {
readAllowed.set(false);
while (readCounter.get() != 0) {
Thread.yield();
}
Map<K, V> temp = readMap;
readMap = writeMap;
writeMap = temp;
readAllowed.set(true);
this.notifyAll();
for (Triple<Operation, Object, V> t : operationList) {
switch (t.getLeft()) {
case Delete:
writeMap.remove(t.getMiddle());
break;
case Put:
writeMap.put((K) t.getMiddle(), t.getRight());
break;
}
}
// The replayed operations must be discarded, otherwise they accumulate
// and are re-applied on every subsequent flip.
operationList.clear();
}
private void startRead() {
if (!readAllowed.get()) {
synchronized (this) {
// Re-check under the monitor and loop, to guard against lost
// notifications and spurious wakeups.
while (!readAllowed.get()) {
try {
wait();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
}
// The race described in the class Javadoc is still here: flip() can set
// readAllowed to false between the check above and this increment.
readCounter.incrementAndGet();
}
private void endRead() {
readCounter.decrementAndGet();
}
}
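The Javadoc above wishes for an AtomicInteger.compareGreaterThanAndIncrement() that doesn't exist. It can be emulated with a plain CAS loop over a single AtomicInteger that encodes both the active-reader count and a "flip in progress" sentinel. This is a hedged sketch to illustrate the idea (GuardedCounter and its method names are made up, not part of the class above), not a drop-in patch:

```java
import java.util.concurrent.atomic.AtomicInteger;

// One AtomicInteger holds the active-reader count; a negative sentinel
// value means "flip in progress, no new readers may enter".
final class GuardedCounter {
    private static final int BLOCKED = -1;
    private final AtomicInteger state = new AtomicInteger(0);

    /** Atomically increments the reader count, but only if reads are allowed. */
    boolean tryStartRead() {
        while (true) {
            int current = state.get();
            if (current == BLOCKED) {
                return false; // a flip is in progress; caller should wait and retry
            }
            if (state.compareAndSet(current, current + 1)) {
                return true;  // registered as an active reader
            }
        }
    }

    void endRead() {
        state.decrementAndGet();
    }

    /** For flip(): succeeds only once no reader is active, then blocks new ones. */
    void blockReaders() {
        while (!state.compareAndSet(0, BLOCKED)) {
            Thread.yield(); // readers still active, or we lost a CAS race; retry
        }
    }

    void unblockReaders() {
        state.set(0);
    }
}
```

Because readers and the flipper CAS on the same variable, a reader can no longer slip in during the window: either its increment or the flipper's transition to the sentinel wins, never both.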
Answer 0 (score: 1)
I strongly suggest you learn how to use JMH; it's the first thing you should learn on the road to optimizing algorithms and data structures.
For example, if you knew how to use it, you could quickly find out that with only 10% writes, ConcurrentHashMap performs very close to a non-synchronized HashMap.
4 threads (10% writes):
Benchmark Mode Cnt Score Error Units
SO_Benchmark.concurrentMap thrpt 2 69,275 ops/s
SO_Benchmark.usualMap thrpt 2 78,490 ops/s
8 threads (10% writes):
Benchmark Mode Cnt Score Error Units
SO_Benchmark.concurrentMap thrpt 2 93,721 ops/s
SO_Benchmark.usualMap thrpt 2 100,725 ops/s
The smaller the share of writes, the closer ConcurrentHashMap's performance tends to get to plain HashMap's.
Now, I modified your startRead and endRead, making them non-functional but as simple as possible:
private void startRead() {
readCounter.incrementAndGet();
readAllowed.compareAndSet(false, true);
}
private void endRead() {
readCounter.decrementAndGet();
readAllowed.compareAndSet(true, false);
}
Let's look at the performance:
Benchmark Mode Cnt Score Error Units
SO_Benchmark.concurrentMap thrpt 10 98,275 ± 2,018 ops/s
SO_Benchmark.doubleBufferMap thrpt 10 80,224 ± 8,993 ops/s
SO_Benchmark.usualMap thrpt 10 106,224 ± 4,205 ops/s
These results show that even with just one atomic counter increment and one atomic boolean modification per operation, we can't get better performance than ConcurrentHashMap. (I tried 30, 10 and 5% writes, but DoubleBufferHashMap never came out ahead.)
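For completeness: the usual way to get reads that cost only a single volatile load, with immediate rather than eventual consistency, is copy-on-write publication of an immutable snapshot. This is a generic sketch of that pattern, not code from the question or the benchmark (CowMap is a made-up name):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal copy-on-write map sketch: a read is one volatile load with no
// lock or CAS; a write copies the whole map under a lock and publishes
// the new snapshot. Only sensible when writes are rare relative to reads.
final class CowMap<K, V> {
    private volatile Map<K, V> snapshot = new HashMap<>();

    V get(K key) {
        return snapshot.get(key); // unsynchronized read of the current snapshot
    }

    synchronized V put(K key, V value) {
        Map<K, V> next = new HashMap<>(snapshot); // full copy per write
        V old = next.put(key, value);
        snapshot = next; // safe publication via the volatile write
        return old;
    }

    synchronized V remove(K key) {
        Map<K, V> next = new HashMap<>(snapshot);
        V old = next.remove(key);
        snapshot = next;
        return old;
    }
}
```

Every write copies the entire map, so this only pays off for the read-heavy workload the question describes; java.util.concurrent.CopyOnWriteArrayList applies the same trick to lists.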