I want a ThreadPoolExecutor where I can set a corePoolSize and a maximumPoolSize, and what happens is that the queue hands tasks off to the thread pool immediately, creating new threads until it reaches the maximumPoolSize, and only then starts adding to the queue.
Is there such a thing? If not, is there a good reason it doesn't have such a strategy?
What I essentially want is that tasks are submitted for execution, and when the pool reaches the point where it would actually get the "worst" performance from having too many threads (the point set by maximumPoolSize), it stops adding new threads, works with that pool, and starts queueing; then, if the queue is full, it rejects.
And when load comes back down, it can start dismantling the unused threads back down to the corePoolSize.
In my applications this makes more sense than the "three general strategies" listed in http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/ThreadPoolExecutor.html
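For context (this sketch is mine, not part of the original question): with the default policy, a pool configured with corePoolSize < maximumPoolSize and an unbounded LinkedBlockingQueue never grows past corePoolSize, because the queue never reports itself full. The class name and pool parameters below are only illustrative.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DefaultPolicyDemo
{
    public static void main(String[] args)
    {
        final ThreadPoolExecutor executor =
            new ThreadPoolExecutor(2,        // corePoolSize
                4,                           // maximumPoolSize
                5000, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>()); // unbounded: offer() always succeeds

        for (int i = 0; i < 10; i++)
        {
            executor.submit(new Runnable()
            {
                public void run()
                {
                    try
                    {
                        Thread.sleep(1000);
                    }
                    catch (InterruptedException e)
                    {
                        e.printStackTrace();
                    }
                }
            });
        }
        // Prints 2: the extra tasks are queued instead of spawning threads 3 and 4.
        System.out.println("poolSize = " + executor.getPoolSize());
        executor.shutdown();
    }
}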
Answer 0 (score: 3)
Note: these implementations are somewhat flawed and non-deterministic. Please read the entire answer and the comments before using this code.
How about creating a work queue that rejects items while the executor is below the maximum pool size, and starts accepting them once the maximum has been reached?
This relies on the documented behavior:
"If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case the task will be rejected."
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExecutorTest
{
    private static final int CORE_POOL_SIZE = 2;
    private static final int MAXIMUM_POOL_SIZE = 4;
    private static final int KEEP_ALIVE_TIME_MS = 5000;

    public static void main(String[] args)
    {
        final SaturateExecutorBlockingQueue workQueue =
            new SaturateExecutorBlockingQueue();

        final ThreadPoolExecutor executor =
            new ThreadPoolExecutor(CORE_POOL_SIZE,
                MAXIMUM_POOL_SIZE,
                KEEP_ALIVE_TIME_MS,
                TimeUnit.MILLISECONDS,
                workQueue);

        workQueue.setExecutor(executor);

        for (int i = 0; i < 6; i++)
        {
            final int index = i;
            executor.submit(new Runnable()
            {
                public void run()
                {
                    try
                    {
                        Thread.sleep(1000);
                    }
                    catch (InterruptedException e)
                    {
                        e.printStackTrace();
                    }
                    System.out.println("Runnable " + index
                        + " on thread: " + Thread.currentThread());
                }
            });
        }
    }

    public static class SaturateExecutorBlockingQueue
        extends LinkedBlockingQueue<Runnable>
    {
        private ThreadPoolExecutor executor;

        public void setExecutor(ThreadPoolExecutor executor)
        {
            this.executor = executor;
        }

        public boolean offer(Runnable e)
        {
            // Refuse the offer while the pool can still grow: the executor
            // reacts to a failed offer by creating a new thread (up to
            // maximumPoolSize). Once the pool is saturated, queue normally.
            if (executor.getPoolSize() < executor.getMaximumPoolSize())
            {
                return false;
            }
            return super.offer(e);
        }
    }
}
Note: your question surprised me, because I expected the behavior you want to be the default behavior of a ThreadPoolExecutor configured with corePoolSize < maximumPoolSize. But, as you point out, the JavaDoc for ThreadPoolExecutor clearly states otherwise.
Idea #2
I think my second approach is probably a little better. It relies on the side-effect behavior coded into the setCorePoolSize method of ThreadPoolExecutor. The idea is to temporarily and conditionally increase the core pool size when a work item is enqueued. When the core pool size is increased, the ThreadPoolExecutor will immediately spawn enough new threads to execute all the queued (queue.size()) tasks. Then we immediately decrease the core pool size again, which allows the thread pool to shrink naturally during future periods of low activity. This approach is still not fully deterministic (it is possible for the pool size to exceed the maximum pool size, for example), but I think it is better than the first strategy in almost all cases.
Specifically, I believe this approach is better than the first because:
-
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExecutorTest2
{
    private static final int KEEP_ALIVE_TIME_MS = 5000;
    private static final int CORE_POOL_SIZE = 2;
    private static final int MAXIMUM_POOL_SIZE = 4;

    public static void main(String[] args) throws InterruptedException
    {
        final SaturateExecutorBlockingQueue workQueue =
            new SaturateExecutorBlockingQueue(CORE_POOL_SIZE,
                MAXIMUM_POOL_SIZE);

        final ThreadPoolExecutor executor =
            new ThreadPoolExecutor(CORE_POOL_SIZE,
                MAXIMUM_POOL_SIZE,
                KEEP_ALIVE_TIME_MS,
                TimeUnit.MILLISECONDS,
                workQueue);

        workQueue.setExecutor(executor);

        for (int i = 0; i < 60; i++)
        {
            final int index = i;
            executor.submit(new Runnable()
            {
                public void run()
                {
                    try
                    {
                        Thread.sleep(1000);
                    }
                    catch (InterruptedException e)
                    {
                        e.printStackTrace();
                    }
                    System.out.println("Runnable " + index
                        + " on thread: " + Thread.currentThread()
                        + " poolSize: " + executor.getPoolSize());
                }
            });
        }

        executor.shutdown();
        executor.awaitTermination(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
    }

    public static class SaturateExecutorBlockingQueue
        extends LinkedBlockingQueue<Runnable>
    {
        private final int corePoolSize;
        private final int maximumPoolSize;
        private ThreadPoolExecutor executor;

        public SaturateExecutorBlockingQueue(int corePoolSize,
            int maximumPoolSize)
        {
            this.corePoolSize = corePoolSize;
            this.maximumPoolSize = maximumPoolSize;
        }

        public void setExecutor(ThreadPoolExecutor executor)
        {
            this.executor = executor;
        }

        public boolean offer(Runnable e)
        {
            // Always enqueue first; the thread-creation work happens below.
            if (super.offer(e) == false)
            {
                return false;
            }
            // Uncomment one or both of the lines below to increase the
            // likelihood of the thread pool reusing an existing thread
            // vs. spawning a new one.
            //Thread.yield();
            //Thread.sleep(0);

            // Temporarily bump the core pool size so the executor spawns
            // threads for the queued tasks, then restore it so the pool can
            // shrink again during quiet periods.
            int currentPoolSize = executor.getPoolSize();
            if (currentPoolSize < maximumPoolSize
                && currentPoolSize >= corePoolSize)
            {
                executor.setCorePoolSize(currentPoolSize + 1);
                executor.setCorePoolSize(corePoolSize);
            }
            return true;
        }
    }
}
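As a sanity check on the mechanism (this sketch is mine, not part of the answer): the setCorePoolSize side effect that offer relies on can be observed in isolation. The class name and sizes below are only illustrative.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SetCorePoolSizeDemo
{
    public static void main(String[] args)
    {
        final ThreadPoolExecutor executor =
            new ThreadPoolExecutor(1, 4, 5000, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());

        // One task runs, three sit in the queue; the pool stays at size 1.
        for (int i = 0; i < 4; i++)
        {
            executor.submit(new Runnable()
            {
                public void run()
                {
                    try
                    {
                        Thread.sleep(2000);
                    }
                    catch (InterruptedException e)
                    {
                        e.printStackTrace();
                    }
                }
            });
        }
        System.out.println("before: poolSize = " + executor.getPoolSize()); // 1

        // Raising the core size starts new threads for the queued tasks;
        // lowering it again lets the extra threads time out once idle.
        executor.setCorePoolSize(3);
        executor.setCorePoolSize(1);
        System.out.println("after:  poolSize = " + executor.getPoolSize()); // typically 3

        executor.shutdown();
    }
}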
Answer 1 (score: 2)
We found a solution to that problem with the following code.
This queue is a hybrid SynchronousQueue / LinkedBlockingQueue.
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

public class OverflowingSynchronousQueue<E> extends LinkedBlockingQueue<E> {
    private static final long serialVersionUID = 1L;

    private final SynchronousQueue<E> synchronousQueue = new SynchronousQueue<E>();

    public OverflowingSynchronousQueue() {
        super();
    }

    public OverflowingSynchronousQueue(int capacity) {
        super(capacity);
    }

    @Override
    public boolean offer(E e) {
        // Try a direct hand-off: this only succeeds if a worker thread is
        // already waiting, so a failed offer makes the executor create a new
        // thread (or reject, once maximumPoolSize is reached).
        return synchronousQueue.offer(e);
    }

    public boolean offerToOverflowingQueue(E e) {
        // Add to the bounded overflow queue (called from the rejection handler).
        return super.offer(e);
    }

    @Override
    public E take() throws InterruptedException {
        // Return tasks from the overflow queue, if any, without blocking.
        E task = super.poll();
        if (task != null) {
            return task;
        } else {
            // Block on the SynchronousQueue take.
            return synchronousQueue.take();
        }
    }

    @Override
    public E poll(long timeout, TimeUnit unit) throws InterruptedException {
        // Return tasks from the overflow queue, if any, without blocking.
        E task = super.poll();
        if (task != null) {
            return task;
        } else {
            // Block on the SynchronousQueue poll.
            return synchronousQueue.poll(timeout, unit);
        }
    }
}
To make this work, we need to wrap the RejectedExecutionHandler in one that calls "offerToOverflowingQueue" when a task is rejected.

import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

public class OverflowingRejectionPolicyAdapter implements RejectedExecutionHandler {
    private final OverflowingSynchronousQueue<Runnable> queue;
    private final RejectedExecutionHandler adaptedRejectedExecutionHandler;

    public OverflowingRejectionPolicyAdapter(OverflowingSynchronousQueue<Runnable> queue,
                                             RejectedExecutionHandler adaptedRejectedExecutionHandler) {
        super();
        this.queue = queue;
        this.adaptedRejectedExecutionHandler = adaptedRejectedExecutionHandler;
    }

    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        // The pool is at maximumPoolSize and the hand-off failed: spill the
        // task into the overflow queue, and only delegate to the wrapped
        // handler if the overflow queue is full too.
        if (!queue.offerToOverflowingQueue(r)) {
            adaptedRejectedExecutionHandler.rejectedExecution(r, executor);
        }
    }
}
Here is how we create the ThreadPoolExecutor:

public static ExecutorService newSaturatingThreadPool(int corePoolSize,
                                                      int maxPoolSize,
                                                      int maxQueueSize,
                                                      long keepAliveTime,
                                                      TimeUnit timeUnit,
                                                      String threadNamePrefix,
                                                      RejectedExecutionHandler rejectedExecutionHandler) {
    OverflowingSynchronousQueue<Runnable> queue =
        new OverflowingSynchronousQueue<Runnable>(maxQueueSize);
    OverflowingRejectionPolicyAdapter rejectionPolicyAdapter =
        new OverflowingRejectionPolicyAdapter(queue, rejectedExecutionHandler);
    // NamedThreadFactory (not shown here) names the pool's threads with the given prefix.
    ThreadPoolExecutor executor = new ThreadPoolExecutor(corePoolSize,
                                                         maxPoolSize,
                                                         keepAliveTime,
                                                         timeUnit,
                                                         queue,
                                                         new NamedThreadFactory(threadNamePrefix),
                                                         rejectionPolicyAdapter);
    return executor;
}
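For illustration only (this usage sketch is mine, not part of the answer): assuming it is called from the class that defines newSaturatingThreadPool, and with arbitrary parameter values, wiring things together might look like this. Threads are created up to maxPoolSize first, overflow tasks then go to the bounded queue, and only after that does the fallback handler (here, ThreadPoolExecutor.AbortPolicy) see a task.

ExecutorService pool = newSaturatingThreadPool(
    2,                                      // corePoolSize
    8,                                      // maxPoolSize: threads are created up to here first
    100,                                    // maxQueueSize: overflow capacity once the pool is saturated
    60, TimeUnit.SECONDS,                   // keepAliveTime for idle threads above the core size
    "saturating-worker",                    // threadNamePrefix
    new ThreadPoolExecutor.AbortPolicy());  // fallback once even the overflow queue is full

pool.submit(new Runnable() {
    public void run() {
        System.out.println("running on " + Thread.currentThread().getName());
    }
});
pool.shutdown();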