I've developed a queue that allows a single consumer and a single producer to concurrently offer and poll elements, without synchronization or CAS operations on each offer/poll. Instead, a single atomic operation is needed only when the tail of the queue is empty. The queue is intended to reduce latency in situations where the queue buffers up and the consumer cannot catch up with the producer.

In this question I'd like the implementation reviewed (the code hasn't been looked at by anyone else, so it's hard to get a second opinion), and I'd also like to discuss a usage pattern that I believe should significantly reduce latency, as well as whether this architecture might run faster than the LMAX Disruptor.
/*
* Copyright 2014 Aran Hakki
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package concurrency.messaging;
// A non-blocking queue which allows concurrent offer and poll operations with minimal contention.
// Contention in offer and poll operations only occurs when offerArray, which acts as an incoming message buffer,
// becomes full and we must wait for it to be swapped with pollArray, which acts as an outgoing message buffer.
// The simplest analogy is two buckets: we fill one while simultaneously emptying another that already contains
// some liquid; at the point the first bucket becomes full, we swap it with the bucket being emptied.
// It's possible that this mechanism might be faster than the LMAX Disruptor; tests are needed to confirm.
public final class ConcurrentPollOfferArrayQueue<T> {

    private T[] offerArray;
    private T[] pollArray;

    private int offerIndex = 0;
    private int pollIndex = 0;

    private volatile boolean arrayFlipped = false;

    @SuppressWarnings("unchecked")
    public ConcurrentPollOfferArrayQueue(T[] _pollArray) {
        offerArray = (T[]) new Object[_pollArray.length];
        pollArray = _pollArray;
    }

    public void offer(T t) {
        if (offerIndex < offerArray.length) {
            offerArray[offerIndex] = t;
            offerIndex++;
        } else {
            // offerArray is full: spin until the polling thread swaps the arrays
            while (!arrayFlipped) {
            }
            arrayFlipped = false;
            offerIndex = 0;
            offer(t);
        }
    }

    public T poll() {
        if (pollIndex < pollArray.length) {
            T t = pollArray[pollIndex];
            pollArray[pollIndex] = null;
            pollIndex++;
            return t;
        } else {
            // pollArray is drained: swap the buffers and signal the offering thread
            pollIndex = 0;
            T[] pollArrayTmp = pollArray;
            pollArray = offerArray;
            offerArray = pollArrayTmp;
            arrayFlipped = true;
            return poll();
        }
    }
}
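To make the swap semantics concrete, here is a small single-threaded sketch (the queue class is inlined in condensed form so the snippet is self-contained; the `SwapDemo` wrapper is mine, not part of the original design). It also surfaces one behaviour worth reviewing: `poll()` returns `null` for empty slots, so a consumer cannot distinguish "queue empty" from "null element", and the first `capacity` polls on a fresh queue all return `null`:

```java
import java.util.ArrayList;
import java.util.List;

public class SwapDemo {

    // Condensed copy of the queue above so this sketch is self-contained.
    static final class ConcurrentPollOfferArrayQueue<T> {
        private T[] offerArray;
        private T[] pollArray;
        private int offerIndex = 0;
        private int pollIndex = 0;
        private volatile boolean arrayFlipped = false;

        @SuppressWarnings("unchecked")
        ConcurrentPollOfferArrayQueue(T[] pollArray) {
            this.offerArray = (T[]) new Object[pollArray.length];
            this.pollArray = pollArray;
        }

        void offer(T t) {
            if (offerIndex < offerArray.length) {
                offerArray[offerIndex++] = t;
            } else {
                while (!arrayFlipped) { } // wait for the poller to swap
                arrayFlipped = false;
                offerIndex = 0;
                offer(t);
            }
        }

        T poll() {
            if (pollIndex < pollArray.length) {
                T t = pollArray[pollIndex];
                pollArray[pollIndex++] = null;
                return t;
            }
            // pollArray drained: swap buffers and retry
            pollIndex = 0;
            T[] tmp = pollArray;
            pollArray = offerArray;
            offerArray = tmp;
            arrayFlipped = true;
            return poll();
        }
    }

    public static void main(String[] args) {
        ConcurrentPollOfferArrayQueue<Integer> q =
                new ConcurrentPollOfferArrayQueue<>(new Integer[4]);
        for (int i = 1; i <= 4; i++) q.offer(i); // fills offerArray

        List<Integer> polled = new ArrayList<>();
        for (int i = 0; i < 8; i++) polled.add(q.poll());
        // The first 4 polls drain the initially empty pollArray (all null);
        // the 5th poll triggers the swap and the offered items appear.
        System.out.println(polled); // [null, null, null, null, 1, 2, 3, 4]
    }
}
```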
By using many of these queues, instead of having multiple producers and consumers all reference the same queue, I think latency can be significantly reduced.

Consider producers A, B and C all referencing a single queue Q, with consumers E, F and G referencing the same queue. (Polling also mutates the queue's memory, so consumers count as writers here.) This results in the following set of relations, and therefore a lot of contention:

A writesTo Q
B writesTo Q
C writesTo Q
E writesTo Q
F writesTo Q
G writesTo Q
Using the queue I've developed, there can be one queue between each producer and a single consumer aggregation thread; that thread takes elements from the tail of each producer queue and places them at the head of each consumer queue. This should significantly reduce contention, because each piece of memory only ever has a single writer. The relations are now:

A writesTo headOf(AQ)
B writesTo headOf(BQ)
C writesTo headOf(CQ)
ConsumerAggregationThread writesTo tailOf(AQ)
ConsumerAggregationThread writesTo tailOf(BQ)
ConsumerAggregationThread writesTo tailOf(CQ)
ConsumerAggregationThread writesTo headOf(EQ)
ConsumerAggregationThread writesTo headOf(FQ)
ConsumerAggregationThread writesTo headOf(GQ)
E writesTo tailOf(EQ)
F writesTo tailOf(FQ)
G writesTo tailOf(GQ)

The relations above preserve the single-writer principle.
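The fan-in/fan-out topology described above can be sketched as follows. This is only an illustration of the thread/queue wiring, assuming `java.util.concurrent.ArrayBlockingQueue` as a stand-in for the custom queue; the class and method names here are mine, not part of the original design:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AggregationTopologyDemo {

    // Wires up 3 producers -> 3 producer queues -> 1 aggregation thread -> 1 consumer queue,
    // and returns how many distinct messages the consumer received.
    static int run() throws InterruptedException {
        final int perProducer = 100;
        final int producerCount = 3;

        // One queue per producer (AQ, BQ, CQ in the relations above).
        List<BlockingQueue<Integer>> producerQueues = new ArrayList<>();
        for (int i = 0; i < producerCount; i++) {
            producerQueues.add(new ArrayBlockingQueue<>(16));
        }
        // A single consumer queue, for brevity (EQ in the relations above).
        BlockingQueue<Integer> consumerQueue = new ArrayBlockingQueue<>(16);

        List<Thread> producers = new ArrayList<>();
        for (int i = 0; i < producerCount; i++) {
            BlockingQueue<Integer> q = producerQueues.get(i);
            int base = i * perProducer; // distinct value ranges per producer
            Thread t = new Thread(() -> {
                for (int j = 0; j < perProducer; j++) {
                    try {
                        q.put(base + j);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            });
            t.start();
            producers.add(t);
        }

        // The aggregation thread is the only reader of the producer queues
        // and the only writer to the consumer queue.
        Thread aggregator = new Thread(() -> {
            int forwarded = 0;
            while (forwarded < producerCount * perProducer) {
                for (BlockingQueue<Integer> q : producerQueues) {
                    Integer v = q.poll();
                    if (v != null) {
                        try {
                            consumerQueue.put(v);
                        } catch (InterruptedException e) {
                            return;
                        }
                        forwarded++;
                    }
                }
            }
        });
        aggregator.start();

        Set<Integer> seen = new HashSet<>();
        for (int i = 0; i < producerCount * perProducer; i++) {
            seen.add(consumerQueue.take());
        }
        for (Thread t : producers) t.join();
        aggregator.join();
        return seen.size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 300
    }
}
```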
I'd love to hear your thoughts.
Answer 0 (score: 0)
What do you think of this implementation? I've changed it to be circular, so that the polling thread triggers the queue swap when pollQueue is empty.
/*
* Copyright 2014 Aran Hakki
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
 * A non-blocking queue which allows concurrent offer and poll operations with minimal contention.
 * Contention in offer and poll operations only occurs when pollQueue is empty and must be swapped with offerQueue.
 * This implementation does not make use of any low-level Java memory optimizations, e.g. the Unsafe class or direct byte buffers,
 * so it's possible it could run much faster.
 * If re-engineered to use lower-level features, it's possible that this approach might be faster than the LMAX Disruptor.
 * I'm currently observing an average latency of approx 6000 ns.
 */
package concurrency.messaging;
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.atomic.AtomicInteger;
public class ConcurrentPollOfferQueue<T> {

    private class ThreadSafeSizeQueue<T> {

        private Queue<T> queue = new LinkedList<T>();
        private volatile AtomicInteger size = new AtomicInteger(0);

        public int size() {
            return size.get();
        }

        public void offer(T value) {
            queue.offer(value);
            size.incrementAndGet();
        }

        public T poll() {
            T value = queue.poll();
            if (value != null) {
                size.decrementAndGet();
            }
            return value;
        }
    }

    private volatile ThreadSafeSizeQueue<T> offerQueue;
    private volatile ThreadSafeSizeQueue<T> pollQueue;

    private int capacity;

    public ConcurrentPollOfferQueue(int capacity) {
        this.capacity = capacity;
        offerQueue = new ThreadSafeSizeQueue<T>();
        pollQueue = new ThreadSafeSizeQueue<T>();
    }

    public void offer(T value) {
        while (offerQueue.size() == capacity) { /* wait for the consumer to finish consuming pollQueue */ }
        offerQueue.offer(value);
    }

    public T poll() {
        T polled;
        while ((polled = pollQueue.poll()) == null) {
            if (pollQueue.size() == 0) {
                ThreadSafeSizeQueue<T> tmpQueue = offerQueue;
                offerQueue = pollQueue;
                pollQueue = tmpQueue;
            }
        }
        return polled;
    }

    public boolean isEmpty() {
        return pollQueue.size() == 0;
    }
}