Let's say I have a huge web server log file that doesn't fit in memory. I need to stream this file through a map/reduce-style process and save the results to a database. I'm using the Java 8 Stream API for this. For example, after the map/reduce step I end up with lists such as consumption by client, consumption by IP, and consumption by content. My actual needs differ from this example, but since I can't share the code, I'll just give a basic example.
With the Java 8 Stream API, I want to read the file only once and get the 3 lists in that single pass while I'm streaming the file, either in parallel or sequentially. Parallel would be nice. Is there any way to do this?
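(For context, reading the file itself is not the hard part: Files.lines returns a lazy stream, so the whole file never has to be in memory. A minimal sketch, with "access.log" as a placeholder path; the actual question is how to collect three results from that single stream.)
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class LogStreaming {
    public static void main(String[] args) throws IOException {
        // Files.lines is lazy: lines are read on demand, not loaded all at once.
        // "access.log" is a placeholder path.
        try (Stream<String> lines = Files.lines(Paths.get("access.log"))) {
            long total = lines.count(); // replace with the actual map/reduce pipeline
            System.out.println(total);
        }
    }
}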
Answer 0 (score: 7)
Generally speaking, anything the standard API doesn't collect for you can be collected fairly easily with a custom Collector. In your case, collecting into 3 lists at once looks like this (just a small example that compiles, since you can't share your code either):
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collector;

private static <T> Collector<T, ?, List<List<T>>> to3Lists() {
    class Acc {
        List<T> left = new ArrayList<>();
        List<T> middle = new ArrayList<>();
        List<T> right = new ArrayList<>();
        List<List<T>> list = Arrays.asList(left, middle, right);

        void add(T elem) {
            // obviously do whatever you want here
            left.add(elem);
            middle.add(elem);
            right.add(elem);
        }

        Acc merge(Acc other) {
            // combines partial results when the stream runs in parallel
            left.addAll(other.left);
            middle.addAll(other.middle);
            right.addAll(other.right);
            return this;
        }

        public List<List<T>> finisher() {
            return list;
        }
    }
    // supplier / accumulator / combiner / finisher
    return Collector.of(Acc::new, Acc::add, Acc::merge, Acc::finisher);
}
And use it like this:
Stream.of(1, 2, 3)
      .collect(to3Lists());
Obviously this custom collector doesn't do anything useful here; it's just an example of how to work with one.
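To relate it back to the log example (my own sketch, not part of this answer), the same pattern can feed three different per-property lists from a single pass over the log lines. LogLine and its field names are assumptions:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collector;
import java.util.stream.Stream;

public class SplitExample {

    // "LogLine" and its fields are assumed names, not from the question.
    static class LogLine {
        final String client, ip, content;
        LogLine(String client, String ip, String content) {
            this.client = client;
            this.ip = ip;
            this.content = content;
        }
    }

    // A non-generic variant of the collector above: every log line feeds
    // three different lists at once, one per aggregation.
    static Collector<LogLine, ?, List<List<String>>> byClientIpContent() {
        class Acc {
            final List<String> clients = new ArrayList<>();
            final List<String> ips = new ArrayList<>();
            final List<String> contents = new ArrayList<>();

            void add(LogLine line) {
                clients.add(line.client);
                ips.add(line.ip);
                contents.add(line.content);
            }

            Acc merge(Acc other) {
                clients.addAll(other.clients);
                ips.addAll(other.ips);
                contents.addAll(other.contents);
                return this;
            }

            List<List<String>> finish() {
                return Arrays.asList(clients, ips, contents);
            }
        }
        return Collector.of(Acc::new, Acc::add, Acc::merge, Acc::finish);
    }

    public static void main(String[] args) {
        List<List<String>> result = Stream.of(
                new LogLine("client1", "1.2.3.4", "index.html"),
                new LogLine("client2", "5.6.7.8", "video.mp4"))
            .collect(byClientIpContent());
        System.out.println(result);
    }
}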
Answer 1 (score: 4)
I have adapted the answer to this question to your case. The custom Spliterator will "split" the stream into multiple streams that are collected by different properties:
import java.util.ArrayList;
import java.util.List;
import java.util.Spliterator;
import java.util.Spliterators.AbstractSpliterator;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

@SafeVarargs
public static <T> long streamForked(Stream<T> source, Consumer<Stream<T>>... consumers)
{
    return StreamSupport.stream(new ForkingSpliterator<>(source, consumers), false).count();
}

public static class ForkingSpliterator<T>
        extends AbstractSpliterator<T>
{
    private Spliterator<T> sourceSpliterator;
    private List<BlockingQueue<T>> queues = new ArrayList<>();
    private boolean sourceDone;

    @SafeVarargs
    private ForkingSpliterator(Stream<T> source, Consumer<Stream<T>>... consumers)
    {
        super(Long.MAX_VALUE, 0);
        sourceSpliterator = source.spliterator();

        for (Consumer<Stream<T>> fork : consumers)
        {
            // each consumer gets its own stream backed by its own queue,
            // driven on its own thread
            LinkedBlockingQueue<T> queue = new LinkedBlockingQueue<>();
            queues.add(queue);
            new Thread(() -> fork.accept(StreamSupport.stream(new ForkedConsumer(queue), false))).start();
        }
    }

    @Override
    public boolean tryAdvance(Consumer<? super T> action)
    {
        // push the next source element into every fork's queue
        sourceDone = !sourceSpliterator.tryAdvance(t -> queues.forEach(queue -> queue.offer(t)));
        return !sourceDone;
    }

    private class ForkedConsumer
            extends AbstractSpliterator<T>
    {
        private BlockingQueue<T> queue;

        private ForkedConsumer(BlockingQueue<T> queue)
        {
            super(Long.MAX_VALUE, 0);
            this.queue = queue;
        }

        @Override
        public boolean tryAdvance(Consumer<? super T> action)
        {
            // busy-wait until the source pushes an element for this fork
            while (queue.peek() == null)
            {
                if (sourceDone)
                {
                    // element is null, and there won't be any more, so "terminate" this sub-stream
                    return false;
                }
            }

            // push to consumer pipeline
            action.accept(queue.poll());
            return true;
        }
    }
}
You can use it as follows:
streamForked(Stream.of(new Row("content1", "client1", "location1", 1),
                       new Row("content2", "client1", "location1", 2),
                       new Row("content1", "client1", "location2", 3),
                       new Row("content2", "client2", "location2", 4),
                       new Row("content1", "client2", "location2", 5)),
             rows -> System.out.println(rows.collect(Collectors.groupingBy(Row::getClient,
                         Collectors.groupingBy(Row::getContent,
                             Collectors.summingInt(Row::getConsumption))))),
             rows -> System.out.println(rows.collect(Collectors.groupingBy(Row::getClient,
                         Collectors.groupingBy(Row::getLocation,
                             Collectors.summingInt(Row::getConsumption))))),
             rows -> System.out.println(rows.collect(Collectors.groupingBy(Row::getContent,
                         Collectors.groupingBy(Row::getLocation,
                             Collectors.summingInt(Row::getConsumption))))));
// Output
// {client2={location2=9}, client1={location1=3, location2=3}}
// {client2={content2=4, content1=5}, client1={content2=2, content1=4}}
// {content2={location1=2, location2=4}, content1={location1=1, location2=8}}
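The Row class isn't shown in the answer; here is a minimal sketch of what it might look like, inferred from the constructor and getters used above (the field names are assumptions):
// Assumed shape of Row, matching the constructor and getters used in the example.
public class Row {
    private final String content;
    private final String client;
    private final String location;
    private final int consumption;

    public Row(String content, String client, String location, int consumption) {
        this.content = content;
        this.client = client;
        this.location = location;
        this.consumption = consumption;
    }

    public String getContent()  { return content; }
    public String getClient()   { return client; }
    public String getLocation() { return location; }
    public int getConsumption() { return consumption; }
}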
Note that with the copies of the stream you can do pretty much anything you want. Following your example, I used a stacked groupingBy collector to group the rows by two properties and then summed up an int property, so each result is a Map<String, Map<String, Integer>>. But you could also use it for other scenarios:
rows -> System.out.println(rows.count())
rows -> rows.forEach(row -> System.out.println(row))
rows -> System.out.println(rows.anyMatch(row -> row.getConsumption() > 3))