I have one producer and many consumers.
Can the same result be achieved with a simpler algorithm? Nesting synchronized blocks with a reentrant lock seems a bit unnatural. Do you notice any race conditions?
Update: a second solution I found uses three collections: one to cache the producer's results, a second one, a blocking queue, and a third one, a list, to track the tasks in progress. Again, somewhat complicated.
My version of the code:
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.locks.ReentrantLock;

public class Main1 {

    static class Token {
        private int order;
        private String value;

        Token() {
        }

        Token(int o, String v) {
            order = o;
            value = v;
        }

        int getOrder() {
            return order;
        }

        String getValue() {
            return value;
        }
    }

    private final static BlockingQueue<Token> queue = new ArrayBlockingQueue<Token>(10);
    private final static ConcurrentMap<String, Object> locks = new ConcurrentHashMap<String, Object>();
    private final static ReentrantLock reentrantLock = new ReentrantLock();
    private final static Token STOP_TOKEN = new Token();
    private final static List<String> lockList = Collections.synchronizedList(new ArrayList<String>());

    public static void main(String[] args) {
        ExecutorService producerExecutor = Executors.newSingleThreadExecutor();
        producerExecutor.submit(new Runnable() {
            public void run() {
                Random random = new Random();
                try {
                    for (int i = 1; i <= 100; i++) {
                        Token token = new Token(i, String.valueOf(random.nextInt(1)));
                        queue.put(token);
                    }
                    queue.put(STOP_TOKEN);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });

        ExecutorService consumerExecutor = Executors.newFixedThreadPool(10);
        for (int i = 1; i <= 10; i++) {
            // creating too many Runnables would be inefficient because of this complex, non-thread-safe object
            final Object dependecy = new Object(); // new ComplexDependecy()
            consumerExecutor.submit(new Runnable() {
                public void run() {
                    while (true) {
                        try {
                            // not in order
                            Token token = queue.take();
                            if (token == STOP_TOKEN) {
                                queue.add(STOP_TOKEN);
                                return;
                            }
                            System.out.println("Task start" + Thread.currentThread().getId() + " order " + token.getOrder());
                            Random random = new Random();
                            Thread.sleep(random.nextInt(200)); // doLongRunningTask(dependecy)
                            lockList.remove(token.getValue());
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                }
            });
        }
    }
}
Answer 0 (score: 6)
You can pre-create a set of Runnables which will pick incoming tasks (tokens) and place them into queues according to their order value.
As was pointed out in the comments, it's not guaranteed that tokens with different values will always execute in parallel (all in all, you are bound at least by the number of physical cores in your box). However it is guaranteed that tokens with the same order will be executed in their order of arrival.
Sample code:
/**
 * Executor which ensures incoming tasks are executed in queues according to provided key (see {@link Task#getOrder()}).
 */
public class TasksOrderingExecutor {

    public interface Task extends Runnable {
        /**
         * @return ordering value which will be used to sequence tasks with the same value.<br>
         * Tasks with different ordering values <i>may</i> be executed in parallel, but not guaranteed to.
         */
        String getOrder();
    }

    private static class Worker implements Runnable {

        private final LinkedBlockingQueue<Task> tasks = new LinkedBlockingQueue<>();

        private volatile boolean stopped;

        void schedule(Task task) {
            tasks.add(task);
        }

        void stop() {
            stopped = true;
        }

        @Override
        public void run() {
            while (!stopped) {
                try {
                    // note: a take() blocked on an empty queue will not observe
                    // 'stopped' until another task arrives
                    Task task = tasks.take();
                    task.run();
                } catch (InterruptedException ie) {
                    // perhaps, handle somehow
                }
            }
        }
    }

    private final Worker[] workers;
    private final ExecutorService executorService;

    /**
     * @param queuesNr nr of concurrent task queues
     */
    public TasksOrderingExecutor(int queuesNr) {
        Preconditions.checkArgument(queuesNr >= 1, "queuesNr >= 1"); // Guava; replace with an explicit check if Guava is unavailable
        executorService = new ThreadPoolExecutor(queuesNr, queuesNr, 0, TimeUnit.SECONDS, new SynchronousQueue<>());
        workers = new Worker[queuesNr];
        for (int i = 0; i < queuesNr; i++) {
            Worker worker = new Worker();
            executorService.submit(worker);
            workers[i] = worker;
        }
    }

    public void submit(Task task) {
        Worker worker = getWorker(task);
        worker.schedule(task);
    }

    public void stop() {
        for (Worker w : workers) w.stop();
        executorService.shutdown();
    }

    private Worker getWorker(Task task) {
        // Math.floorMod: hashCode() can be negative, which would make a plain % produce a negative index
        return workers[Math.floorMod(task.getOrder().hashCode(), workers.length)];
    }
}
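One detail worth noting in getWorker: String.hashCode() can be negative, so indexing the workers array with a plain % can produce a negative index and throw ArrayIndexOutOfBoundsException; Math.floorMod keeps the index in range. A small self-contained illustration (the helper name is an assumption, mirroring getWorker's computation):

```java
public class SafeIndexDemo {
    // Hypothetical helper mirroring getWorker's index computation.
    // Math.floorMod keeps the result in [0, buckets) even for negative hashes,
    // whereas hashCode() % buckets can be negative.
    static int safeIndex(String key, int buckets) {
        return Math.floorMod(key.hashCode(), buckets);
    }

    public static void main(String[] args) {
        for (String key : new String[] {"order-1", "order-2", "order-3"}) {
            System.out.println(key + " -> hash " + key.hashCode()
                    + ", worker index " + safeIndex(key, 4));
        }
    }
}
```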
Answer 1 (score: 6)
Judging by the nature of your code, the only way to guarantee that tokens with the same value are processed serially is to wait for the STOP_TOKEN to arrive.
You'd need a single-producer single-consumer setup, with the consumer collecting and sorting the tokens by their value (into a Multimap, say).
Only then will you know which tokens can be processed serially and which in parallel.
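That collect-and-sort step can be sketched without any framework (the Token shape and the method names below are assumptions, not code from the answer): a single consumer buckets tokens by value; each bucket preserves arrival order, so it can later be handed to a worker as one serial unit.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TokenGrouping {
    static class Token {
        final int order;
        final String value;

        Token(int order, String value) {
            this.order = order;
            this.value = value;
        }
    }

    // Bucket tokens by value; each resulting list preserves arrival order
    // and can be processed serially by a single worker.
    static Map<String, List<Token>> groupByValue(List<Token> arrived) {
        Map<String, List<Token>> byValue = new LinkedHashMap<>();
        for (Token t : arrived) {
            byValue.computeIfAbsent(t.value, k -> new ArrayList<>()).add(t);
        }
        return byValue;
    }

    public static void main(String[] args) {
        List<Token> arrived = Arrays.asList(
                new Token(1, "a"), new Token(2, "b"), new Token(3, "a"));
        // Two buckets: "a" with two tokens (orders 1 and 3), "b" with one.
        Map<String, List<Token>> groups = groupByValue(arrived);
        System.out.println(groups.get("a").size() + " tokens for value a");
    }
}
```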
In any case, I suggest you have a look at the LMAX Disruptor, which offers a very efficient way to share data between threads.
It doesn't suffer from synchronization overhead as Executors do, since it is lock-free (which may give you a nice performance benefit, depending on the nature of your data processing).
// single thread for processing as there will be only one consumer
Disruptor<InEvent> inboundDisruptor = new Disruptor<>(InEvent::new, 32, Executors.newSingleThreadExecutor());

// outbound disruptor that uses 3 threads for event processing
Disruptor<OutEvent> outboundDisruptor = new Disruptor<>(OutEvent::new, 32, Executors.newFixedThreadPool(3));

inboundDisruptor.handleEventsWith(new InEventHandler(outboundDisruptor));

// setup 3 event handlers, doing round robin consuming, effectively processing OutEvents in 3 threads
outboundDisruptor.handleEventsWith(new OutEventHandler(0, 3, new Object()));
outboundDisruptor.handleEventsWith(new OutEventHandler(1, 3, new Object()));
outboundDisruptor.handleEventsWith(new OutEventHandler(2, 3, new Object()));

inboundDisruptor.start();
outboundDisruptor.start();

// publisher code
for (int i = 0; i < 10; i++) {
    inboundDisruptor.publishEvent(InEventTranslator.INSTANCE, new Token());
}
The event handler on the inbound disruptor simply collects the incoming tokens. When the STOP token is received, it publishes the batches of tokens to the outbound disruptor for further processing:
public class InEventHandler implements EventHandler<InEvent> {

    private ListMultimap<String, Token> tokensByValue = ArrayListMultimap.create();
    private Disruptor<OutEvent> outboundDisruptor;

    public InEventHandler(Disruptor<OutEvent> outboundDisruptor) {
        this.outboundDisruptor = outboundDisruptor;
    }

    @Override
    public void onEvent(InEvent event, long sequence, boolean endOfBatch) throws Exception {
        if (event.token == STOP_TOKEN) {
            // publish indexed tokens to outbound disruptor for parallel processing
            tokensByValue.asMap().entrySet().stream().forEach(entry ->
                    outboundDisruptor.publishEvent(OutEventTranslator.INSTANCE, entry.getValue()));
        } else {
            tokensByValue.put(event.token.value, event.token);
        }
    }
}
The outbound event handlers process tokens of the same value sequentially:
public class OutEventHandler implements EventHandler<OutEvent> {

    private final long order;
    private final long allHandlersCount;
    private Object yourComplexDependency;

    public OutEventHandler(long order, long allHandlersCount, Object yourComplexDependency) {
        this.order = order;
        this.allHandlersCount = allHandlersCount;
        this.yourComplexDependency = yourComplexDependency;
    }

    @Override
    public void onEvent(OutEvent event, long sequence, boolean endOfBatch) throws Exception {
        if (sequence % allHandlersCount != order) {
            // round robin, do not consume every event to allow parallel processing
            return;
        }
        for (Token token : event.tokensToProcessSerially) {
            // do processing of the token using your complex class
        }
    }
}
The rest of the required infrastructure (its purpose is described in the Disruptor docs):
public class InEventTranslator implements EventTranslatorOneArg<InEvent, Token> {

    public static final InEventTranslator INSTANCE = new InEventTranslator();

    @Override
    public void translateTo(InEvent event, long sequence, Token arg0) {
        event.token = arg0;
    }
}

public class OutEventTranslator implements EventTranslatorOneArg<OutEvent, Collection<Token>> {

    public static final OutEventTranslator INSTANCE = new OutEventTranslator();

    @Override
    public void translateTo(OutEvent event, long sequence, Collection<Token> tokens) {
        event.tokensToProcessSerially = tokens;
    }
}

public class InEvent {
    // Note that no synchronization is used here,
    // even though the field is used among multiple threads.
    // Memory barriers used by the Disruptor guarantee the changes are visible.
    public Token token;
}

public class OutEvent {
    // ... again, no locks.
    public Collection<Token> tokensToProcessSerially;
}

public class Token {
    String value;
}
Answer 2 (score: 5)
If you have lots of different tokens, then the simplest solution is to create a number of single-thread executors (about 2x your number of cores), and then distribute each task to an executor determined by the hash of its token.
That way all tasks with the same token will go to the same executor and execute sequentially, because each executor only has one thread.
If you have some unstated requirements about scheduling fairness, then it is easy enough to avoid any significant imbalance by having the producer thread queue up its requests (or block) before distributing them, until there are, say, fewer than 10 requests per executor outstanding.
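A minimal sketch of this scheme, assuming String tokens and Runnable tasks (the pool size is arbitrary, and Math.floorMod guards against negative hash codes):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class HashedExecutors {
    private final ExecutorService[] executors;

    HashedExecutors(int n) {
        executors = new ExecutorService[n];
        for (int i = 0; i < n; i++) {
            executors[i] = Executors.newSingleThreadExecutor();
        }
    }

    // Same token -> same single-thread executor -> tasks for that token
    // run one at a time, in submission order.
    void submit(String token, Runnable task) {
        executors[Math.floorMod(token.hashCode(), executors.length)].submit(task);
    }

    void shutdownAndWait() throws InterruptedException {
        for (ExecutorService e : executors) e.shutdown();
        for (ExecutorService e : executors) e.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        HashedExecutors he = new HashedExecutors(4);
        for (int i = 0; i < 5; i++) {
            final int n = i;
            he.submit("token-x", () -> System.out.println("token-x task " + n));
        }
        he.shutdownAndWait(); // tasks 0..4 print in submission order
    }
}
```

Tasks with different tokens may still collide on the same executor (hash collisions), but that only costs parallelism, never ordering.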
Answer 3 (score: 4)
The following solution uses only a single Map, shared by the producer and consumers, to process each order number sequentially while different order numbers are processed in parallel. Here is the code:
public class Main {
    private static final int NUMBER_OF_CONSUMER_THREADS = 10;
    private static volatile int sync = 0;

    public static void main(String[] args) {
        final ConcurrentHashMap<String, Controller> queues = new ConcurrentHashMap<String, Controller>();
        final CountDownLatch latch = new CountDownLatch(NUMBER_OF_CONSUMER_THREADS);
        final AtomicBoolean done = new AtomicBoolean(false);

        // Create a Producer
        new Thread() {
            {
                this.setDaemon(true);
                this.setName("Producer");
                this.start();
            }

            public void run() {
                Random rand = new Random();
                for (int i = 0; i < 1000; i++) {
                    int order = rand.nextInt(20);
                    String key = String.valueOf(order);
                    String value = String.valueOf(rand.nextInt());
                    Controller controller = queues.get(key);
                    if (controller == null) {
                        controller = new Controller();
                        queues.put(key, controller);
                    }
                    controller.add(new Token(order, value));
                    Main.sync++;
                }
                done.set(true);
            }
        };

        while (queues.size() < 10) {
            try {
                // Allow the producer to generate several entries that need to
                // be processed.
                Thread.sleep(5000);
            } catch (InterruptedException e1) {
                // TODO Auto-generated catch block
                e1.printStackTrace();
            }
        }
        // System.out.println(queues);

        // Create the Consumers
        ExecutorService consumers = Executors.newFixedThreadPool(NUMBER_OF_CONSUMER_THREADS);
        for (int i = 0; i < NUMBER_OF_CONSUMER_THREADS; i++) {
            consumers.submit(new Runnable() {
                private Random rand = new Random();

                public void run() {
                    String name = Thread.currentThread().getName();
                    try {
                        boolean one_last_time = false;
                        while (true) {
                            for (Map.Entry<String, Controller> entry : queues.entrySet()) {
                                Controller controller = entry.getValue();
                                if (controller.lock(this)) {
                                    ConcurrentLinkedQueue<Token> list = controller.getList();
                                    Token token;
                                    while ((token = list.poll()) != null) {
                                        try {
                                            System.out.println(name + " processing order: " + token.getOrder()
                                                    + " value: " + token.getValue());
                                            Thread.sleep(rand.nextInt(200));
                                        } catch (InterruptedException e) {
                                        }
                                    }
                                    int last = Main.sync;
                                    queues.remove(entry.getKey());
                                    while (done.get() == false && last == Main.sync) {
                                        // yield until the producer has added at least another entry
                                        Thread.yield();
                                    }
                                    // Purge any new entries added
                                    while ((token = list.poll()) != null) {
                                        try {
                                            System.out.println(name + " processing order: " + token.getOrder()
                                                    + " value: " + token.getValue());
                                            Thread.sleep(200);
                                        } catch (InterruptedException e) {
                                        }
                                    }
                                    controller.unlock(this);
                                }
                            }

                            if (one_last_time) {
                                return;
                            }
                            if (done.get()) {
                                one_last_time = true;
                            }
                        }
                    } finally {
                        latch.countDown();
                    }
                }
            });
        }

        try {
            latch.await();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        consumers.shutdown();
        System.out.println("Exiting.. remaining number of entries: " + queues.size());
    }
}
Note that the Main class contains a queues Map instance. The map key is the order id that you would like consumers to process sequentially. The value is the Controller class which will contain all of the orders associated with that order id.
The producer generates the orders and adds each order (token) to its associated Controller. The consumers iterate over the queues map values and call the Controller's lock method to determine whether they can process orders for that particular order id. If lock returns false they check the next Controller instance. If lock returns true, they process all of the orders and then check the next Controller.
Updated: I added a sync integer that is used to guarantee that when a Controller instance is removed from the queues map, all of its entries will have been consumed. There was a logic error in the consumer code where the unlock method was being called too soon.
The Token class is similar to the one you posted here.
class Token {
    private int order;
    private String value;

    Token(int order, String value) {
        this.order = order;
        this.value = value;
    }

    int getOrder() {
        return order;
    }

    String getValue() {
        return value;
    }

    @Override
    public String toString() {
        return "Token [order=" + order + ", value=" + value + "]\n";
    }
}
The Controller class that follows is used to ensure that only a single thread within the thread pool processes the orders. The lock/unlock methods are used to determine which of the threads is allowed to process the orders.
class Controller {
    private ConcurrentLinkedQueue<Token> tokens = new ConcurrentLinkedQueue<Token>();
    private ReentrantLock lock = new ReentrantLock();
    private Runnable current = null;

    void add(Token token) {
        tokens.add(token);
    }

    public ConcurrentLinkedQueue<Token> getList() {
        return tokens;
    }

    public void unlock(Runnable runnable) {
        lock.lock();
        try {
            if (current == runnable) {
                current = null;
            }
        } finally {
            lock.unlock();
        }
    }

    public boolean lock(Runnable runnable) {
        lock.lock();
        try {
            if (current == null) {
                current = runnable;
            }
        } finally {
            lock.unlock();
        }
        return current == runnable;
    }

    @Override
    public String toString() {
        return "Controller [tokens=" + tokens + "]";
    }
}
Some additional information about the implementation: it uses a CountDownLatch to ensure all produced orders are processed before the process exits, and the done variable works just like your STOP_TOKEN variable.
The implementation does contain an issue that you would need to resolve: it does not clean up the order ids from the map after all orders have been processed. This causes threads in the thread pool to be assigned to Controller instances that contain no orders, wasting CPU cycles that could be used to perform other tasks.
Answer 4 (score: 4)
Is what you need that tokens with the same value must not be processed concurrently? Your code is too messy to understand what you mean (it does not compile, and has lots of unused variables, locks and maps that are created but never used). It looks like you are greatly overthinking this. All you need is one queue and one map. Something like this, I imagine:
class Consumer implements Runnable {

    ConcurrentHashMap<String, Token> inProcess;
    BlockingQueue<Token> queue;

    public void run() {
        try {
            while (true) {
                Token token = queue.take();
                // atomically claim the value; if another consumer holds it, requeue and retry
                if (inProcess.putIfAbsent(token.getValue(), token) != null) {
                    queue.put(token);
                    continue;
                }
                processToken(token);
                inProcess.remove(token.getValue());
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
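The sketch above hinges on putIfAbsent being an atomic claim: it returns null only for the first claimant, and the value must be removed afterwards so other consumers can claim it. A tiny self-contained illustration of the claim/re-queue/release cycle (the key "42" is arbitrary):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ClaimDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Object> inProcess = new ConcurrentHashMap<>();
        Object claim = new Object();

        // First claim succeeds: putIfAbsent returns null.
        System.out.println(inProcess.putIfAbsent("42", claim) == null);      // true

        // Second claim fails: the existing claim is returned instead,
        // which is the signal to requeue the token and try again later.
        System.out.println(inProcess.putIfAbsent("42", new Object()) == claim); // true

        // After processing, release the value so another consumer may claim it.
        inProcess.remove("42");
        System.out.println(inProcess.putIfAbsent("42", new Object()) == null);  // true
    }
}
```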
Answer 5 (score: 3)
Tokens with the same value need to be processed sequentially.
The way to guarantee that any two things happen in sequence is to do them in the same thread.
I'd have a collection of worker threads, and a Map. Any time I get a token that I haven't seen before, I'd pick a thread at random, and enter the token and the thread into the map. From then on, I'd use that same thread to execute tasks associated with that token.
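A hedged sketch of that token-to-thread pinning (the thread count is arbitrary, and round-robin stands in for the answer's random pick so the sketch stays deterministic; either works, since the map is what makes the assignment stick):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class TokenPinning {
    private final ExecutorService[] threads;
    private final ConcurrentHashMap<String, ExecutorService> assignment = new ConcurrentHashMap<>();
    private final AtomicInteger next = new AtomicInteger();

    TokenPinning(int n) {
        threads = new ExecutorService[n];
        for (int i = 0; i < n; i++) {
            threads[i] = Executors.newSingleThreadExecutor();
        }
    }

    // First sighting of a token picks the next thread; the map then pins
    // the token to that thread for all subsequent tasks.
    ExecutorService threadFor(String token) {
        return assignment.computeIfAbsent(
                token, t -> threads[next.getAndIncrement() % threads.length]);
    }

    void submit(String token, Runnable task) {
        threadFor(token).submit(task);
    }

    void shutdown() {
        for (ExecutorService e : threads) e.shutdown();
    }

    public static void main(String[] args) {
        TokenPinning tp = new TokenPinning(4);
        // The same token always maps to the same single-thread executor.
        System.out.println(tp.threadFor("abc") == tp.threadFor("abc")); // true
        tp.shutdown();
    }
}
```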
Creating new Runnables would be very expensive
Runnable is an interface. Creating new objects that implement Runnable is not going to be significantly more expensive than creating any other kind of object.
Answer 6 (score: 3)
I'm not entirely sure I've understood the question, but I'll venture an algorithm.
The actors are:
- a queue of tasks
- a pool of free executors
- a set of in-process tokens
- a controller
Then:
- Initially all executors are available and the set is empty
- The controller picks an available executor and goes through the queue looking for a task with a token that is not in the in-process set
- When it finds one, it adds the token to the in-process set, assigns the executor to process the task, and
- the executor removes the token from the set when it has finished processing, and adds itself back to the pool
Answer 7 (score: 3)
One way of doing this is to have one executor for sequential processing and one for parallel processing, plus a single-threaded manager service that decides which of the two a token needs to be submitted to for processing.

// Queue shared by the two threads; holds the tokens produced by the producer
BlockingQueue<Token> tokenList = new ArrayBlockingQueue<Token>(10);
private void startProcess() {
    ExecutorService producer = Executors.newSingleThreadExecutor();
    final ExecutorService consumerForSequence = Executors.newSingleThreadExecutor();
    final ExecutorService consumerForParallel = Executors.newFixedThreadPool(10);
    ExecutorService manager = Executors.newSingleThreadExecutor();

    producer.submit(new Producer(tokenList));

    manager.submit(new Runnable() {
        public void run() {
            try {
                while (true) {
                    Token t = tokenList.take();
                    System.out.println("consumed- " + t.orderid + " element");
                    if (t.orderid % 7 == 0) { // any condition to check for sequence processing
                        consumerForSequence.submit(new ConsumerForSequenceProcess(t));
                    } else {
                        consumerForParallel.submit(new ConsumerForParallelProcess(t));
                    }
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    });
}
Answer 8 (score: 2)
I think there is a more fundamental design issue hidden behind this task, but OK. I can't figure out from your problem description whether you want in-order execution, or merely want the operations on a task described by a single token to be atomic/transactional. What I propose below feels more like a "quick fix" to this problem than a real solution.
For the real "ordered execution" case I propose a solution based on queue proxies which order the output:
- Define an implementation of Queue which provides a factory method generating proxy queues, which are represented to the producer side by this single queue object; the factory method should also register these proxy queue objects. Adding an element to the input queue should add it directly to one of the output queues if it matches one of the elements in one of the output queues. Otherwise add it to any (the shortest) output queue. (Implement the check efficiently.) Alternatively (slightly better): don't do this when the element is added, but whenever any of the output queues runs empty.
- Give each of your runnable consumers a field storing an individual Queue interface (instead of all accessing a single object). Initialize this field by the factory method defined above.
For the transaction case I think it's easier to span more threads than you have cores (use statistics to calculate this), and implement the blocking mechanism at a lower (object) level.
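A rough, heavily simplified sketch of the proxy-queue idea (element type, matching by token value, and the shortest-queue tie-breaker are all assumptions; the routing map is never pruned here, whereas a real implementation would drop a mapping when its output queue runs empty, per the variant above):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ProxyQueues {
    private final List<BlockingQueue<String>> outputs = new ArrayList<>();
    // Remembers which output queue currently owns each token value.
    private final Map<String, BlockingQueue<String>> owner = new HashMap<>();

    // Factory method: create and register one proxy output queue,
    // to be handed to a single consumer.
    synchronized BlockingQueue<String> createProxy() {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        outputs.add(q);
        return q;
    }

    // Producer side: route an element to the queue that already holds
    // its value, otherwise to the shortest queue.
    synchronized void put(String value) {
        BlockingQueue<String> q = owner.get(value);
        if (q == null) {
            q = outputs.get(0);
            for (BlockingQueue<String> cand : outputs) {
                if (cand.size() < q.size()) q = cand;
            }
            owner.put(value, q);
        }
        q.add(value);
    }

    public static void main(String[] args) {
        ProxyQueues pq = new ProxyQueues();
        BlockingQueue<String> q0 = pq.createProxy();
        BlockingQueue<String> q1 = pq.createProxy();
        pq.put("a");
        pq.put("b");
        pq.put("a"); // routed to the queue already holding "a"
        System.out.println(q0.size() + " " + q1.size()); // → 2 1
    }
}
```

Since each consumer drains exactly one output queue, equal-valued elements end up on one consumer and are therefore processed in order.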