How do I create a generic paging spliterator?

Asked: 2016-06-30 16:05:11

Tags: java java-8 java-stream spliterator

I want to be able to process a Java stream that is read from a source which must be accessed in pages. As a first approach, I implemented a paging iterator that simply requests the next page when the current page runs out of items, and then used StreamSupport.stream(iterator, false) to get a stream handle over that iterator.
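
That first, iterator-based approach looked roughly like the sketch below (a simplified reconstruction for illustration rather than my exact code; the NaivePagingExample and streamOf names are made up, and it relies on the PageProducer and Page types shown further down):

import java.util.Collections;
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public final class NaivePagingExample {

    // simplified sketch: lazily fetch the next page whenever the current one is exhausted
    public static <T> Stream<T> streamOf(PageProducer<T> producer, long pageSize) {
        Iterator<T> pagingIterator = new Iterator<T>() {
            private long offset = 0L;
            private long total = Long.MAX_VALUE;
            private Iterator<T> current = Collections.emptyIterator();

            @Override
            public boolean hasNext() {
                if (!current.hasNext() && offset < total) {
                    Page<T> page = producer.apply(offset, pageSize); // expensive, blocking fetch
                    total = page.getTotalResults();
                    if (page.getPage().isEmpty()) {
                        total = offset; // defensive: stop if the source returns nothing
                    } else {
                        offset += page.getPage().size();
                        current = page.getPage().iterator();
                    }
                }
                return current.hasNext();
            }

            @Override
            public T next() {
                if (!hasNext()) {
                    throw new NoSuchElementException();
                }
                return current.next();
            }
        };
        // a spliterator built from an iterator can only "split" by buffering elements,
        // so this approach yields essentially no parallelism over the page fetches
        return StreamSupport.stream(
                Spliterators.spliteratorUnknownSize(pagingIterator, Spliterator.ORDERED), false);
    }
}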

When I found out that my page fetches were very expensive, I wanted to access the pages via a parallel stream. At that point I discovered that the parallelism provided by my naive approach was non-existent, because of the spliterator implementation that Java derives directly from an iterator. Since I actually know quite a bit about the elements I want to traverse (I know the total result count after requesting the first page, and the source supports offset and limit), I think it should be possible to implement my own spliterator that achieves real concurrency, both in the work done on the elements of a page and in the querying of the pages themselves.

I was able to achieve the "work done on the elements" concurrency quite easily, but in my initial implementation the querying of the pages is only ever performed by the topmost spliterator, so it does not benefit from the division of work offered by the fork-join implementation.

How do I write a spliterator that achieves both of these goals?

For reference, here is what I have so far (I know it does not divide up the querying appropriately).

import java.util.Iterator;
import java.util.Objects;
import java.util.Spliterator;
import java.util.function.Consumer;

public final class PagingSourceSpliterator<T> implements Spliterator<T> {

    public static final long DEFAULT_PAGE_SIZE = 100;

    private Page<T> result;
    private Iterator<T> results;
    private boolean needsReset = false;
    private final PageProducer<T> generator;
    private long offset = 0L;
    private long limit = DEFAULT_PAGE_SIZE;


    public PagingSourceSpliterator(PageProducer<T> generator) {
        this.generator = generator;
    }

    public PagingSourceSpliterator(long pageSize, PageProducer<T> generator) {
        this.generator = generator;
        this.limit = pageSize;
    }


    @Override
    public boolean tryAdvance(Consumer<? super T> action) {

        if (hasAnotherElement()) {
            if (!results.hasNext()) {
                loadPageAndPrepareNextPaging();
            }
            if (results.hasNext()) {
                action.accept(results.next());
                return true;
            }
        }

        return false;
    }

    @Override
    public Spliterator<T> trySplit() {
        // if we know there's another page, go ahead and hand off whatever
        // remains of this spliterator as a new spliterator for other
        // threads to work on, and then mark that next time something is
        // requested from this spliterator it needs to be reset to the head
        // of the next page
        if (hasAnotherPage()) {
            Spliterator<T> other = result.getPage().spliterator();
            needsReset = true;
            return other;
        } else {
            return null;
        }

    }

    @Override
    public long estimateSize() {
        if(limit == 0) {
            return 0;
        }

        ensureStateIsUpToDateEnoughToAnswerInquiries();
        return result.getTotalResults();
    }

    @Override
    public int characteristics() {
        return IMMUTABLE | ORDERED | DISTINCT | NONNULL | SIZED | SUBSIZED;
    }

    private boolean hasAnotherElement() {
        ensureStateIsUpToDateEnoughToAnswerInquiries();
        return isBound() && (results.hasNext() || hasAnotherPage());
    }

    private boolean hasAnotherPage() {
        ensureStateIsUpToDateEnoughToAnswerInquiries();
        return isBound() && (result.getTotalResults() > offset);
    }

    private boolean isBound() {
        return Objects.nonNull(results) && Objects.nonNull(result);
    }

    private void ensureStateIsUpToDateEnoughToAnswerInquiries() {
        ensureBound();
        ensureResetIfNecessary();
    }

    private void ensureBound() {
        if (!isBound()) {
            loadPageAndPrepareNextPaging();
        }
    }

    private void ensureResetIfNecessary() {
        if(needsReset) {
            loadPageAndPrepareNextPaging();
            needsReset = false;
        }
    }

    private void loadPageAndPrepareNextPaging() {
        // keep track of the overall result so that we can reference the original list and total size
        this.result = generator.apply(offset, limit);

        // make sure that the iterator we use to traverse a single page removes
        // results from the underlying list as we go so that we can simply pass
        // off the list spliterator for the trySplit rather than constructing a
        // new kind of spliterator for what remains.
        this.results = new DelegatingIterator<T>(result.getPage().listIterator()) {
            @Override
            public T next() {
                T next = super.next();
                this.remove();
                return next;
            }
        };

        // update the paging for the next request and inquiries prior to the next request
        // we use the page of the actual result set instead of the limit in case the limit
        // was not respected exactly.
        this.offset += result.getPage().size();
    }

    public static class DelegatingIterator<T> implements Iterator<T> {

        private final Iterator<T> iterator;

        public DelegatingIterator(Iterator<T> iterator) {
            this.iterator = iterator;
        }


        @Override
        public boolean hasNext() {
            return iterator.hasNext();
        }

        @Override
        public T next() {
            return iterator.next();
        }

        @Override
        public void remove() {
            iterator.remove();
        }

        @Override
        public void forEachRemaining(Consumer<? super T> action) {
            iterator.forEachRemaining(action);
        }
    }
}

My source of pages:

public interface PageProducer<T> extends BiFunction<Long, Long, Page<T>> {

}

And a Page:

public final class Page<T> {

    private long totalResults;
    private final List<T> page = new ArrayList<>();

    public long getTotalResults() {
        return totalResults;
    }

    public List<T> getPage() {
        return page;
    }

    public Page setTotalResults(long totalResults) {
        this.totalResults = totalResults;
        return this;
    }

    public Page setPage(List<T> results) {
        this.page.clear();
        this.page.addAll(results);
        return this;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof Page)) {
            return false;
        }
        Page<?> page1 = (Page<?>) o;
        return totalResults == page1.totalResults && Objects.equals(page, page1.page);
    }

    @Override
    public int hashCode() {
        return Objects.hash(totalResults, page);
    }

}

An example of getting a stream with "slow" paging for testing:

private <T> Stream<T> asSlowPagedSource(long pageSize, List<T> things) {

    PageProducer<T> producer = (offset, limit) -> {

        try {
            Thread.sleep(5000L);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }

        int beginIndex = offset.intValue();
        int endIndex = Math.min(offset.intValue() + limit.intValue(), things.size());
        return new Page<T>().setTotalResults(things.size())
                .setPage(things.subList(beginIndex, endIndex));
    };

    return StreamSupport.stream(new PagingSourceSpliterator<>(pageSize, producer), true);
}

2 Answers:

Answer 0 (score: 8):

The main reason your spliterator does not bring you closer to your goal is that it attempts to split the pages, rather than the source element space. If you know the total number of elements and have a source that allows fetching pages via offset and limit, the most natural form of a spliterator is to encapsulate a range within these elements, e.g. via offset and limit or end. Splitting then means just splitting that range, adapting the offset of your spliterator to the split position, and creating a new spliterator representing the prefix, from the "old offset" to the split position.

Before splitting:
      this spliterator: offset=x, end=y
After splitting:
      this spliterator: offset=z, end=y
  returned spliterator: offset=x, end=z

x <= z <= y

Ideally, z lies right in the middle between x and y to produce a balanced split, but in our case we adapt it slightly to produce working sets that are multiples of the page size.
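
For example (with arbitrarily chosen numbers, purely to illustrate the rounding used in the code below), splitting the range 0..555,000 with a page size of 10,000 would pick the split point like this:

long start = 0, end = 555_000, pageSize = 10_000;
long mid = (start + end) >>> 1;    // 277_500, the unbiased midpoint
mid = mid / pageSize * pageSize;   // 270_000, rounded down to a multiple of the page size
if (mid == start) mid += pageSize; // still make progress when the remaining range is small
// the returned prefix spliterator covers [0, 270_000); this spliterator continues at 270_000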

This logic works without needing to fetch any pages, so if you postpone fetching a page until the moment the framework wants to start a traversal, i.e. after splitting, the fetch operations can run in parallel. The biggest obstacle is the fact that you need to fetch the first page to learn about the total number of elements. The solution below separates that first fetch from the rest, which simplifies the implementation. Of course, it has to hand over the result of that first page fetch, which will either be consumed on the first traversal (in the sequential case) or be returned as the first split prefix, accepting one unbalanced split at that point but never having to deal with it again afterwards.

public class PagingSpliterator<T> implements Spliterator<T> {
    public interface PageFetcher<T> {
        List<T> fetchPage(long offset, long limit, LongConsumer totalSizeSink);
    }
    public static final long DEFAULT_PAGE_SIZE = 100;

    public static <T> Stream<T> paged(PageFetcher<T> pageAccessor) {
        return paged(pageAccessor, DEFAULT_PAGE_SIZE, false);
    }
    public static <T> Stream<T> paged(PageFetcher<T> pageAccessor,
                                      long pageSize, boolean parallel) {
        if(pageSize<=0) throw new IllegalArgumentException();
        return StreamSupport.stream(() -> {
            PagingSpliterator<T> pgSp
                = new PagingSpliterator<>(pageAccessor, 0, 0, pageSize);
            pgSp.danglingFirstPage
                =spliterator(pageAccessor.fetchPage(0, pageSize, l -> pgSp.end=l));
            return pgSp;
        }, CHARACTERISTICS, parallel);
    }
    private static final int CHARACTERISTICS = IMMUTABLE|ORDERED|SIZED|SUBSIZED;

    private final PageFetcher<T> supplier;
    long start, end, pageSize;
    Spliterator<T> currentPage, danglingFirstPage;

    PagingSpliterator(PageFetcher<T> supplier,
            long start, long end, long pageSize) {
        this.supplier = supplier;
        this.start    = start;
        this.end      = end;
        this.pageSize = pageSize;
    }

    public boolean tryAdvance(Consumer<? super T> action) {
        for(;;) {
            if(ensurePage().tryAdvance(action)) return true;
            if(start>=end) return false;
            currentPage=null;
        }
    }
    public void forEachRemaining(Consumer<? super T> action) {
        do {
            ensurePage().forEachRemaining(action);
            currentPage=null;
        } while(start<end);
    }
    public Spliterator<T> trySplit() {
        if(danglingFirstPage!=null) {
            Spliterator<T> fp=danglingFirstPage;
            danglingFirstPage=null;
            start=fp.getExactSizeIfKnown();
            return fp;
        }
        if(currentPage!=null)
            return currentPage.trySplit();
        if(end-start>pageSize) {
            long mid=(start+end)>>>1;
            mid=mid/pageSize*pageSize;
            if(mid==start) mid+=pageSize;
            return new PagingSpliterator<>(supplier, start, start=mid, pageSize);
        }
        return ensurePage().trySplit();
    }
    /**
     * Fetch data immediately before traversing or sub-page splitting.
     */
    private Spliterator<T> ensurePage() {
        if(danglingFirstPage!=null) {
            Spliterator<T> fp=danglingFirstPage;
            danglingFirstPage=null;
            currentPage=fp;
            start=fp.getExactSizeIfKnown();
            return fp;
        }
        Spliterator<T> sp = currentPage;
        if(sp==null) {
            if(start>=end) return Spliterators.emptySpliterator();
            sp = spliterator(supplier.fetchPage(
                                 start, Math.min(end-start, pageSize), l->{}));
            start += sp.getExactSizeIfKnown();
            currentPage=sp;
        }
        return sp;
    }
    /**
     * Ensure that the sub-spliterator provided by the List is compatible with
     * ours, i.e. is {@code SIZED | SUBSIZED}. For standard List implementations,
     * the spliterators are, so the costs of dumping into an intermediate array
     * in the other case is irrelevant.
     */
    private static <E> Spliterator<E> spliterator(List<E> list) {
        Spliterator<E> sp = list.spliterator();
        if((sp.characteristics()&(SIZED|SUBSIZED))!=(SIZED|SUBSIZED))
            sp=Spliterators.spliterator(
                StreamSupport.stream(sp, false).toArray(), IMMUTABLE | ORDERED);
        return sp;
    }
    public long estimateSize() {
        if(currentPage!=null) return currentPage.estimateSize();
        return end-start;
    }
    public int characteristics() {
        return CHARACTERISTICS;
    }
}

It uses a dedicated PageFetcher functional interface, which can be implemented by calling the accept method of the callback with the resulting total size and returning the list of items. The paging spliterator simply delegates to the list's spliterator for traversal, and if the concurrency is significantly higher than the number of resulting pages, it may even benefit from splitting these page spliterators, which implies that random access lists, like ArrayList, are the preferred list type here.

Adapting your example code to

private static <T> Stream<T> asSlowPagedSource(long pageSize, List<T> things) {
    return PagingSpliterator.paged( (offset, limit, totalSizeSink) -> {
        totalSizeSink.accept(things.size());
        if(offset>things.size()) return Collections.emptyList();
        int beginIndex = (int)offset;
        assert beginIndex==offset;
        int endIndex = Math.min(beginIndex+(int)limit, things.size());
        System.out.printf("Page %6d-%6d:\t%s%n",
                          beginIndex, endIndex, Thread.currentThread());
        // artificial slowdown
        LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(5));
        return things.subList(beginIndex, endIndex);
    }, pageSize, true);
}

you can test it like this:

List<Integer> samples=IntStream.range(0, 555_000).boxed().collect(Collectors.toList());
List<Integer> result=asSlowPagedSource(10_000, samples).collect(Collectors.toList());
if(!samples.equals(result))
    throw new AssertionError();

If there are enough CPU cores available, it will demonstrate that the pages are fetched concurrently, hence out of order, while the result will correctly be in encounter order. You can also test the sub-page concurrency that applies when there are fewer pages:

Set<Thread> threads=ConcurrentHashMap.newKeySet();
List<Integer> samples=IntStream.range(0, 1_000_000).boxed().collect(Collectors.toList());
List<Integer> result=asSlowPagedSource(500_000, samples)
    .peek(x -> threads.add(Thread.currentThread()))
    .collect(Collectors.toList());
if(!samples.equals(result))
    throw new AssertionError();
System.out.println("Concurrency: "+threads.size());

Answer 1 (score: 0):

https://docs.oracle.com/javase/8/docs/api/java/util/Spliterator.html

As far as I understand, the speed of splitting comes from immutability. The more immutable the source is, the faster the processing, because immutability lends itself better to parallel processing and to splitting.

The idea, then, seems to be to resolve any changes to the source as completely as possible before binding it to the spliterator, either for the whole source (ideally) or for a part of it (the usual case, hence the challenge you and many others face).

In your case, this might mean first ensuring the page sizes, rather than:

// .. in case the limit was not respected exactly.
this.offset += result.getPage().size();

It might also mean that a stream feed needs to be prepared, rather than using the source directly.

At the end of that documentation there is an example of "how a parallel computation framework, such as the java.util.stream package, would use Spliterator in a parallel computation".

Note how it is the stream that uses the spliterator, not the spliterator that uses a stream as its source.

There is an interesting "compute" method at the end of that example.

If you do arrive at a generic, working PageSpliterator class, be sure to let some of us know.

Cheers.