I'm trying to create a concurrent bit set in Java that allows size extension (as opposed to a fixed-length one, which is quite trivial). Here is the core part of the class (the other methods are not important right now):
import java.util.concurrent.atomic.AtomicIntegerArray;
import java.util.concurrent.atomic.AtomicReference;

public class ConcurrentBitSet {

    static final int CELL = 32;        // bits per array cell
    static final int MASK = CELL - 1;

    final AtomicReference<AtomicIntegerArray> aiaRef;

    public ConcurrentBitSet(int initialBitSize) {
        aiaRef = new AtomicReference<>(new AtomicIntegerArray(1 + ((initialBitSize - 1) / CELL)));
    }

    public boolean get(int bit) {
        int cell = bit / CELL;
        if (cell >= aiaRef.get().length()) {
            return false;              // beyond the current array: never set
        }
        int mask = 1 << (bit & MASK);
        return (aiaRef.get().get(cell) & mask) != 0;
    }

    public void set(int bit) {
        int cell = bit / CELL;
        int mask = 1 << (bit & MASK);
        while (true) {
            AtomicIntegerArray old = aiaRef.get();
            AtomicIntegerArray v = extend(old, cell);   // larger copy if the bit doesn't fit
            v.getAndAccumulate(cell, mask, (prev, m) -> prev | m);
            if (aiaRef.compareAndSet(old, v)) {         // publish (v may be the same array as old)
                break;
            }
        }
    }

    // Returns the old array unchanged if the cell fits, otherwise a larger copy of it.
    private AtomicIntegerArray extend(AtomicIntegerArray old, int cell) {
        AtomicIntegerArray v = old;
        if (cell >= v.length()) {
            v = new AtomicIntegerArray(cell + 1);
            for (int i = 0; i < old.length(); i++) {
                v.set(i, old.get(i));
            }
        }
        return v;
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < aiaRef.get().length(); i++) {
            for (int b = 0; b < CELL; b++) {
                sb.append(get(i * CELL + b) ? '1' : '0');
            }
        }
        return sb.toString();
    }
}
Unfortunately, it seems that there is a race condition here.
Here is sample test code, which manages to fail for a few bits every time: it should print all ones up to bit 300, but there are a few random zeroes in there, in different places each time. On one PC I'm getting just a few; on another there are 8-10 zeroes in a row at odd/even positions.
final ConcurrentBitSet cbs = new ConcurrentBitSet(10);
final CountDownLatch latch = new CountDownLatch(1);

// First thread: sets the even bits 0..298.
new Thread() {
    public void run() {
        try {
            latch.await();
            for (int i = 0; i < 300; i += 2) {
                cbs.set(i);
            }
        } catch (InterruptedException e) {
        }
    }
}.start();

// Second thread: sets the odd bits 1..299.
new Thread() {
    public void run() {
        try {
            latch.await();
            for (int i = 0; i < 300; i += 2) {
                cbs.set(i + 1);
            }
        } catch (InterruptedException e) {
        }
    }
}.start();

latch.countDown();    // release both threads at the same time
Thread.sleep(1000);
System.out.println(cbs.toString());
Example of what I'm getting
11111111111111111111111111111111111111111111111111111101111111111111111111111111011111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111101111111111111111111111111111111111111111111111111111111111111011111111111111111111111111111111111111111111111111100000000000000000000
11111111111111111111111111111111111111110111111111111111111111110101111111111111111111110101011111111111111111111111111111111111010101111111111111111111111111111111111101010111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111100000000000000000000
It is hard to debug (anything more complex tends to make the race condition disappear), but it looks like the problem occurs when two threads try to extend the array at the same time: the thread that fails the compareAndSet and has to retry the while loop ends up with corrupted data from aiaRef.get() on the next iteration - the part of the array it has already visited (as has the other thread, if both are extending it) ends up containing some zeroes.
Does anybody have an idea where the bug is?
Answer 0 (score: 4)
The problem is that aiaRef.compareAndSet() is only guarding against a concurrent replacement, because AtomicReference's only job is to protect the atomicity of its reference. If the referenced object is concurrently modified while we're rebuilding the array, compareAndSet() will succeed because it's comparing the same reference against itself, but it may have missed some modifications.
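To make the lost update concrete, here is a minimal single-threaded replay of one such interleaving, built from the same primitives; the class name, the chosen bit positions and the comments are only illustrative, not taken from the code above:

import java.util.concurrent.atomic.AtomicIntegerArray;
import java.util.concurrent.atomic.AtomicReference;

// Single-threaded replay of one problematic interleaving (illustration only).
public class LostUpdateReplay {
    public static void main(String[] args) {
        AtomicReference<AtomicIntegerArray> ref =
                new AtomicReference<>(new AtomicIntegerArray(1));

        // "Thread A" wants to set bit 40: cell 1 doesn't exist yet, so it copies the old array.
        AtomicIntegerArray old = ref.get();
        AtomicIntegerArray extended = new AtomicIntegerArray(2);
        for (int i = 0; i < old.length(); i++) {
            extended.set(i, old.get(i));   // the copy happens BEFORE B's write below
        }

        // "Thread B" sets bit 3 in the still-current array; its own compareAndSet(old, old)
        // would succeed trivially, because the reference is unchanged.
        old.getAndAccumulate(0, 1 << 3, (prev, m) -> prev | m);

        // "Thread A" finishes: it sets its own bit in the copy and swaps the copy in.
        // The reference still matches, so the CAS succeeds, and B's bit 3 is gone.
        extended.getAndAccumulate(1, 1 << (40 & 31), (prev, m) -> prev | m);
        ref.compareAndSet(old, extended);

        System.out.println((ref.get().get(0) & (1 << 3)) != 0);   // prints false: bit 3 was lost
    }
}

One possible way to close that window (just a sketch, not the only option) is to make the copy-and-swap exclusive with respect to the in-place bit writes, for example with a ReentrantReadWriteLock: setters take the read lock (they can still run concurrently, because AtomicIntegerArray keeps each per-cell update atomic), while the resize takes the write lock, so no bit can be written into the old array while it is being copied. Assuming that approach, a rough sketch might look like this:

import java.util.concurrent.atomic.AtomicIntegerArray;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch only: lock-guarded growth, lock-free per-bit updates. Names are illustrative.
public class GrowableBitSet {
    private static final int CELL = 32;
    private static final int MASK = CELL - 1;

    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private volatile AtomicIntegerArray cells;

    public GrowableBitSet(int initialBitSize) {
        cells = new AtomicIntegerArray(1 + (Math.max(initialBitSize, 1) - 1) / CELL);
    }

    public void set(int bit) {
        int cell = bit / CELL;
        ensureCapacity(cell);
        lock.readLock().lock();          // excludes a concurrent copy, not other setters
        try {
            cells.getAndAccumulate(cell, 1 << (bit & MASK), (prev, m) -> prev | m);
        } finally {
            lock.readLock().unlock();
        }
    }

    public boolean get(int bit) {
        AtomicIntegerArray a = cells;    // volatile read of the current array
        int cell = bit / CELL;
        return cell < a.length() && (a.get(cell) & (1 << (bit & MASK))) != 0;
    }

    private void ensureCapacity(int cell) {
        if (cell < cells.length()) {
            return;
        }
        lock.writeLock().lock();         // no setter holds the read lock at this point
        try {
            AtomicIntegerArray old = cells;
            if (cell >= old.length()) {
                AtomicIntegerArray bigger = new AtomicIntegerArray(cell + 1);
                for (int i = 0; i < old.length(); i++) {
                    bigger.set(i, old.get(i));
                }
                cells = bigger;          // publish the copy; nothing was written to 'old' meanwhile
            }
        } finally {
            lock.writeLock().unlock();
        }
    }
}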