I want a dispatcher thread that hands out work to, and retrieves results from, a pool of worker threads. The dispatcher needs to keep the workers continually supplied with work; whenever any worker finishes, the dispatcher must collect its result and re-dispatch it (or create a new worker). This seems like it should be an obvious pattern, but I have been unable to find a good example of it. A Thread.join() loop is inappropriate because it is effectively "AND" logic (wait until every thread is done), whereas I am looking for "OR" logic (wake when any thread is done).
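To make the "AND" objection concrete, here is a minimal sketch (the List<Thread> of workers is hypothetical):

static void awaitAll(java.util.List<Thread> workers) throws InterruptedException
{
    // join() blocks on each thread in list order, so the dispatcher cannot
    // react to whichever worker happens to finish first ("AND" semantics).
    for (Thread worker : workers)
        worker.join();
    // reachable only after ALL workers have finished
}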
The best approach I can think of is to have the dispatcher thread wait(), and have each worker thread notify() when it finishes. It seems, though, that I would have to guard against two worker threads finishing at the same time and causing the dispatcher thread to miss a notify(). Besides that, it strikes me as somewhat inelegant.
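For reference, a guarded-block sketch of that wait()/notify() idea; the while-loop predicate is what protects against the missed-notify case I am worried about. The Result type and all names are hypothetical:

import java.util.LinkedList;
import java.util.Queue;

class DispatcherSketch<Result>
{
    private final Object lock = new Object();
    private final Queue<Result> completed = new LinkedList<Result>();

    // Called by a worker thread when it finishes.
    void workerDone(Result result)
    {
        synchronized (lock)
        {
            completed.add(result);
            lock.notify(); // wake the dispatcher
        }
    }

    // Called by the dispatcher; blocks until ANY worker has finished.
    Result awaitAnyResult() throws InterruptedException
    {
        synchronized (lock)
        {
            while (completed.isEmpty()) // re-check the predicate on every wakeup
                lock.wait();
            return completed.remove();
        }
    }
}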
Even less elegant is the idea of having the dispatcher thread periodically wake up, poll the pool of worker threads, and check each one via isAlive() to see whether it has finished.
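A sketch of that polling variant, for comparison (workers and pollMs are illustrative); note that isAlive() says nothing about a worker's result, which would still need its own hand-off mechanism:

static void pollWorkers(java.util.List<Thread> workers, long pollMs) throws InterruptedException
{
    while (!workers.isEmpty())
    {
        Thread.sleep(pollMs); // dispatcher sleeps between polls
        for (java.util.Iterator<Thread> it = workers.iterator(); it.hasNext(); )
        {
            Thread t = it.next();
            if (!t.isAlive())
            {
                it.remove();
                // collect this worker's result and dispatch a replacement here
            }
        }
    }
}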
I took a look through java.util.concurrent and didn't see anything that looks like this pattern. I feel that implementing what I described above would involve a lot of defensive programming and reinventing the wheel; there must be something I'm missing. What can I leverage to implement this pattern?
Here is the single-threaded version. putMissingToS3() would become the dispatcher thread, and the functionality represented in uploadFileToBucket() would become the worker threads.
private void putMissingToS3()
{
    int reqFilesToUpload = 0;
    long reqSizeToUpload = 0L;
    int totFilesUploaded = 0;
    long totSizeUploaded = 0L;
    int totFilesSkipped = 0;
    long totSizeSkipped = 0L;
    int rptLastFilesUploaded = 0;
    long rptSizeInterval = 1000000000L;
    long rptLastSize = 0L;
    StopWatch rptTimer = new StopWatch();
    long rptLastMs = 0L;
    StopWatch globalTimer = new StopWatch();
    StopWatch indvTimer = new StopWatch();
    for (FileSystemRecord fsRec : fileSystemState.toList())
    {
        String reqKey = PathConverter.pathToKey(PathConverter.makeRelativePath(fileSystemState.getRootPath(), fsRec.getFullpath()));
        LocalS3MetadataRecord s3Rec = s3Metadata.getRecord(reqKey);
        // Just get a rough estimate of what the size of this upload will be
        if (s3Rec == null)
        {
            ++reqFilesToUpload;
            reqSizeToUpload += fsRec.getSize();
        }
    }
    long uploadTimeGuessMs = (long)((double)reqSizeToUpload/estUploadRateBPS*1000.0);
    printAndLog("Estimated upload: " + natFmt.format(reqFilesToUpload) + " files, " + Utils.readableFileSize(reqSizeToUpload) +
                ", Estimated time " + Utils.readableElapsedTime(uploadTimeGuessMs));
    globalTimer.start();
    rptTimer.start();
    for (FileSystemRecord fsRec : fileSystemState.toList())
    {
        String reqKey = PathConverter.pathToKey(PathConverter.makeRelativePath(fileSystemState.getRootPath(), fsRec.getFullpath()));
        if (PathConverter.validate(reqKey))
        {
            LocalS3MetadataRecord s3Rec = s3Metadata.getRecord(reqKey);
            //TODO compare and deal with size mismatches. Maybe go and look at last-mod dates.
            if (s3Rec == null)
            {
                indvTimer.start();
                uploadFileToBucket(s3, syncParms.getS3Bucket(), fsRec.getFullpath(), reqKey);
                indvTimer.stop();
                ++totFilesUploaded;
                totSizeUploaded += fsRec.getSize();
                logOnly("Uploaded: Size=" + fsRec.getSize() + ", " + indvTimer.stopDeltaMs() + " ms, File=" + fsRec.getFullpath() + ", toKey=" + reqKey);
                if (totSizeUploaded > rptLastSize + rptSizeInterval)
                {
                    long invSizeUploaded = totSizeUploaded - rptLastSize;
                    long nowMs = rptTimer.intervalMs();
                    long invElapMs = nowMs - rptLastMs;
                    long remSize = reqSizeToUpload - totSizeUploaded;
                    double progressPct = (double)totSizeUploaded/reqSizeToUpload*100.0;
                    double mbps = (invElapMs > 0) ? invSizeUploaded/1e6/(invElapMs/1000.0) : 0.0;
                    long remMs = (long)((double)remSize/((double)invSizeUploaded/invElapMs));
                    printOnly("Progress: " + d2Fmt.format(progressPct) + "%, " + Utils.readableFileSize(totSizeUploaded) + " of " +
                              Utils.readableFileSize(reqSizeToUpload) + ", Rate " + d3Fmt.format(mbps) + " MB/s, " +
                              "Time rem " + Utils.readableElapsedTime(remMs));
                    rptLastMs = nowMs;
                    rptLastFilesUploaded = totFilesUploaded;
                    rptLastSize = totSizeUploaded;
                }
            }
        }
        else
        {
            ++totFilesSkipped;
            totSizeSkipped += fsRec.getSize();
            logOnly("Skipped (Invalid chars): Size=" + fsRec.getSize() + ", " + fsRec.getFullpath() + ", toKey=" + reqKey);
        }
    }
    globalTimer.stop();
    double mbps = 0.0;
    if (globalTimer.stopDeltaMs() > 0)
        mbps = totSizeUploaded/1e6/(globalTimer.stopDeltaMs()/1000.0);
    printAndLog("Actual upload: " + natFmt.format(totFilesUploaded) + " files, " + Utils.readableFileSize(totSizeUploaded) +
                ", Time " + Utils.readableElapsedTime(globalTimer.stopDeltaMs()) + ", Rate " + d3Fmt.format(mbps) + " MB/s");
    if (totFilesSkipped > 0)
        printAndLog("Skipped Files: " + natFmt.format(totFilesSkipped) + " files, " + Utils.readableFileSize(totSizeSkipped));
}

private void uploadFileToBucket(AmazonS3 amazonS3, String bucketName, String filePath, String fileKey)
{
    File inFile = new File(filePath);
    ObjectMetadata objectMetadata = new ObjectMetadata();
    objectMetadata.addUserMetadata(Const.LAST_MOD_KEY, Long.toString(inFile.lastModified()));
    objectMetadata.setLastModified(new Date(inFile.lastModified()));
    PutObjectRequest por = new PutObjectRequest(bucketName, fileKey, inFile).withMetadata(objectMetadata);
    // Amazon S3 never stores partial objects; if during this call an exception wasn't thrown, the entire object was stored.
    amazonS3.putObject(por);
}
Answer (score: 1):
I think you are looking in the correct package. You should use the ExecutorService API; it removes the burden of waiting on and watching for thread notifications. For example:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorEx {
    static class ThreadA implements Runnable {
        int id;

        public ThreadA(int id) {
            this.id = id;
        }

        public void run() {
            // to simulate some work
            try { Thread.sleep(Math.round(Math.random() * 100)); } catch (Exception e) {}
            // to show a message
            System.out.println(this.id + "--Test Message" + System.currentTimeMillis());
        }
    }

    public static void main(String[] args) throws Exception {
        int poolSize = 10;
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        int i = 0;
        while (i < 100) {
            pool.submit(new ThreadA(i));
            i++;
        }
        pool.shutdown();
        while (!pool.isTerminated()) {
            pool.awaitTermination(60, TimeUnit.SECONDS);
        }
    }
}
If you want to return something from your threads, implement Callable instead of Runnable (call() instead of run()) and collect the returned values in an array of Future objects, which you can iterate over later.
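A minimal sketch of that Callable/Future variant of the example above (TaskA and its Integer result are illustrative stand-ins):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableEx {
    static class TaskA implements Callable<Integer> {
        private final int id;

        TaskA(int id) {
            this.id = id;
        }

        public Integer call() throws Exception {
            // to simulate some work
            Thread.sleep(Math.round(Math.random() * 100));
            return id; // this value is delivered through the Future
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        List<Future<Integer>> futures = new ArrayList<Future<Integer>>();
        for (int i = 0; i < 100; i++) {
            futures.add(pool.submit(new TaskA(i)));
        }
        for (Future<Integer> f : futures) {
            System.out.println("Result: " + f.get()); // get() blocks until that task is done
        }
        pool.shutdown();
    }
}

Note that iterating the Futures with get() collects results in submission order. For the "collect whichever worker finishes first" behavior the question asks for, java.util.concurrent also provides ExecutorCompletionService, whose take() method returns Futures in completion order.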