I have a task to execute some collection-related logic in parallel threads and compare it with the single-threaded mode. From the question multithreading to read a file in Java I learned that reading a file is not a job for multithreading, so I decided to focus on the logic that comes after the read. The logic is as follows:
public List<?> taskExecution(File file, boolean parallel) {
    List<Entry<String, Integer>> entryList = new ArrayList<>();
    try {
        if (parallel) {
            entryList = taskExecutionInParallel(file);
        } else {
            // put in the map the words and their occurrence
            Map<String, Integer> wordsFrequency = new HashMap<>();
            for (String word : this.readWordsFromText(file, parallel)) {
                if (wordsFrequency.containsKey(word)) {
                    wordsFrequency.put(word, wordsFrequency.get(word).intValue() + 1);
                } else {
                    wordsFrequency.put(word, 1);
                }
            }
            // create the list of Map.Entry objects
            entryList.addAll(wordsFrequency.entrySet());
            // sort the entries by the value descending
            Collections.sort(entryList, new Comparator<Entry<String, Integer>>() {
                @Override
                public int compare(Entry<String, Integer> o1,
                                   Entry<String, Integer> o2) {
                    return o2.getValue().compareTo(o1.getValue());
                }
            });
            // identify the top index
            int topIndex = entryList.size() > 1 ? 2 : entryList.size() > 0 ? 1 : 0;
            // truncate the list
            entryList = entryList.subList(0, topIndex);
            // sort the result list by the words descending
            Collections.sort(entryList, new Comparator<Entry<String, Integer>>() {
                @Override
                public int compare(Entry<String, Integer> o1,
                                   Entry<String, Integer> o2) {
                    return o2.getKey().compareTo(o1.getKey());
                }
            });
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    return entryList;
}
I tried to perform the transformation from the initial list of words to a map of the words' frequencies using the Fork/Join framework:
class ForkJoinFrequencyReader extends RecursiveAction {

    static final int SEQUENTIAL_THRESHOLD = 1000;
    private static final long serialVersionUID = -7784403215745552735L;

    private Map<String, Integer> wordsFrequency;
    private final int start;
    private final int end;
    private final List<String> words;

    public ForkJoinFrequencyReader(List<String> words, Map<String, Integer> wordsFrequency) {
        this(words, 0, words.size(), wordsFrequency);
    }

    private ForkJoinFrequencyReader(List<String> words, int start, int end, Map<String, Integer> wordsFrequency) {
        this.words = words;
        this.start = start;
        this.end = end;
        this.wordsFrequency = wordsFrequency;
    }

    private synchronized void putInMap() {
        for (int i = start; i < end; i++) {
            String word = words.get(i);
            if (wordsFrequency.containsKey(word)) {
                wordsFrequency.put(word, wordsFrequency.get(word).intValue() + 1);
            } else {
                wordsFrequency.put(word, 1);
            }
        }
    }

    @Override
    protected void compute() {
        if (end - start < SEQUENTIAL_THRESHOLD) {
            putInMap();
        } else {
            int mid = (start + end) >>> 1;
            ForkJoinFrequencyReader left = new ForkJoinFrequencyReader(words, start, mid, wordsFrequency);
            ForkJoinFrequencyReader right = new ForkJoinFrequencyReader(words, mid, end, wordsFrequency);
            left.fork();
            right.fork();
            left.join();
            right.join();
        }
    }
}
private List<Entry<String, Integer>> taskExecutionInParallel(File file) throws IOException {
    List<Entry<String, Integer>> entryList = new CopyOnWriteArrayList<>();
    ForkJoinPool pool = new ForkJoinPool();
    Map<String, Integer> wordsFrequency = new ConcurrentHashMap<>();
    pool.invoke(new ForkJoinFrequencyReader(Collections.synchronizedList(this.readWordsFromText(file, true)), wordsFrequency));
    //****** .... the same single-thread code yet
}
However, the resulting map contains different values after each execution. Can someone point out where my bottleneck is, or suggest another solution using the standard concurrency utilities available in the JDK up to version 7?
Answer 0 (score: 1)
I also implemented the producer-consumer pattern for the word-frequency part:
private Map<String, Integer> frequencyCounterInParallel(File file) throws InterruptedException {
    Map<String, Integer> wordsFrequency = Collections.synchronizedMap(new LinkedHashMap<>());
    BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);
    Thread producer = new Thread(new Producer(queue, file));
    Thread consumer = new Thread(new Consumer(queue, wordsFrequency));
    producer.start();
    consumer.start();
    producer.join();
    consumer.join();
    return wordsFrequency;
}
class Producer implements Runnable {

    private BlockingQueue<String> queue;
    private File file;

    public Producer(BlockingQueue<String> queue, File file) {
        this.file = file;
        this.queue = queue;
    }

    @Override
    public void run() {
        try (BufferedReader bufferReader = Files.newBufferedReader(file.toPath())) {
            String line = null;
            while ((line = bufferReader.readLine()) != null) {
                String[] lineWords = line.split(CommonConstants.SPLIT_TEXT_REGEX);
                for (String word : lineWords) {
                    if (word.length() > 0) {
                        queue.put(word.toLowerCase());
                    }
                }
            }
            queue.put(STOP_THREAD);
        } catch (InterruptedException | IOException e) {
            e.printStackTrace();
        }
    }
}
class Consumer implements Runnable {

    private BlockingQueue<String> queue;
    private Map<String, Integer> wordsFrequency;

    public Consumer(BlockingQueue<String> queue, Map<String, Integer> wordsFrequency) {
        this.queue = queue;
        this.wordsFrequency = wordsFrequency;
    }

    @Override
    public void run() {
        try {
            String word = null;
            while (!((word = queue.take()).equals(STOP_THREAD))) {
                if (wordsFrequency.containsKey(word)) {
                    wordsFrequency.put(word, wordsFrequency.get(word).intValue() + 1);
                } else {
                    wordsFrequency.put(word, 1);
                }
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
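Note that the consumer is the only thread that writes to wordsFrequency, so the check-then-put sequence above is not racy, and the two join() calls make the final counts visible to the calling thread. A minimal usage sketch, assuming STOP_THREAD is a String constant that can never appear as a real word in the input (the file name below is my own placeholder):
// Hypothetical driver inside the same class; "input.txt" is an assumed path.
public void printWordFrequencies() throws InterruptedException {
    Map<String, Integer> wordsFrequency = frequencyCounterInParallel(new File("input.txt"));
    for (Map.Entry<String, Integer> entry : wordsFrequency.entrySet()) {
        System.out.println(entry.getKey() + " -> " + entry.getValue());
    }
}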
Answer 1 (score: 0)
You should use the parallel-execution capability of Java 8 streams:
Path path = FileSystems.getDefault().getPath(...);
Stream<String> words = Files.lines(path);
Map<String, Long> wordsFrequency = words.parallel()
        .collect(Collectors.groupingBy(UnaryOperator.identity(),
                                       Collectors.counting()));
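Note that Files.lines streams one element per line rather than per word, so to reproduce the per-word counts from the question the lines still need to be split. A hedged sketch of that variant (the file name and the "\\W+" split regex are assumptions of mine, standing in for the question's CommonConstants.SPLIT_TEXT_REGEX):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.Map;
import java.util.function.UnaryOperator;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class WordFrequencyStreams {
    public static void main(String[] args) throws IOException {
        Path path = Paths.get("input.txt"); // assumed input file
        try (Stream<String> lines = Files.lines(path)) {
            Map<String, Long> wordsFrequency = lines.parallel()
                    // split each line into words; "\\W+" is an assumed stand-in
                    // for the question's CommonConstants.SPLIT_TEXT_REGEX
                    .flatMap(line -> Arrays.stream(line.split("\\W+")))
                    .filter(word -> !word.isEmpty())
                    .map(String::toLowerCase)
                    .collect(Collectors.groupingBy(UnaryOperator.identity(),
                                                   Collectors.counting()));
            System.out.println(wordsFrequency);
        }
    }
}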
Answer 2 (score: 0)
Your putInMap synchronizes on a particular ForkJoinFrequencyReader instance, while in the compute method you create different instances of ForkJoinFrequencyReader. So your synchronization does not work at all, because each lock is tied to its own instance. To check this, simply replace putInMap with:
private void putInMap() {
    synchronized (wordsFrequency) {
        // ... the same counting loop as before ...
    }
}
Please read the following: http://www.cs.umd.edu/class/fall2013/cmsc433/examples/wordcount/WordCountParallel.java
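If you want to stay on Java 7 and avoid sharing the map between subtasks entirely, a common alternative (a sketch under my own naming, not code from the question or the linked example) is to have each subtask return its own partial HashMap via RecursiveTask and merge the partial results after join(). This removes the data race without a global lock:
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Each subtask counts its own slice into a private HashMap; partial maps are
// merged after join(), so no state is shared between threads and no locking
// is needed. Uses only APIs available since Java 7 (no lambdas, no Map.merge).
class WordCountTask extends RecursiveTask<Map<String, Integer>> {

    private static final int SEQUENTIAL_THRESHOLD = 1000;
    private final List<String> words;
    private final int start;
    private final int end;

    WordCountTask(List<String> words, int start, int end) {
        this.words = words;
        this.start = start;
        this.end = end;
    }

    @Override
    protected Map<String, Integer> compute() {
        if (end - start < SEQUENTIAL_THRESHOLD) {
            // count the slice [start, end) into a task-local map
            Map<String, Integer> local = new HashMap<>();
            for (int i = start; i < end; i++) {
                String word = words.get(i);
                Integer count = local.get(word);
                local.put(word, count == null ? 1 : count + 1);
            }
            return local;
        }
        int mid = (start + end) >>> 1;
        WordCountTask left = new WordCountTask(words, start, mid);
        WordCountTask right = new WordCountTask(words, mid, end);
        left.fork();                                   // run the left half asynchronously
        Map<String, Integer> result = right.compute(); // compute the right half in this thread
        // merge the left partial counts into the right partial counts
        for (Map.Entry<String, Integer> e : left.join().entrySet()) {
            Integer count = result.get(e.getKey());
            result.put(e.getKey(), count == null ? e.getValue() : count + e.getValue());
        }
        return result;
    }
}
It could then be invoked, for example, as: Map<String, Integer> wordsFrequency = new ForkJoinPool().invoke(new WordCountTask(words, 0, words.size()));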