Chronicle Queue V3. Can entries be lost on data block roll-over?

Asked: 2017-06-30 15:24:27

Tags: chronicle chronicle-queue

I have an application that writes entries to a Chronicle Queue (V3) and also retains the excerpt entry index values in other (Chronicle) Maps, providing indexed access into the queue. Sometimes we fail to find an entry that we previously saved, and I believe this may be related to data block roll-over.

Below is a standalone test program that reproduces the use case at small scale. It repeatedly writes an entry and immediately attempts to find the resulting index value using a separate ExcerptTailer. All is well until the first data block is exhausted and a second data file is allocated, at which point retrievals begin to fail. If the data block size is increased to avoid roll-over, no entries are lost. Using a small index block size, which causes multiple index files to be created, does not cause the problem either.

The test program also uses an ExcerptListener running in parallel to see whether the entries apparently 'lost' by the writer were ever received by the reading thread - they were not. It also attempts to re-read the resulting queue from start to finish, which confirms again that they are indeed lost.

Stepping through the code, I see that when a missing entry is looked up in AbstractVanillaExcerpt#index, it appears to successfully locate the correct VanillaMappedBytes object from the dataCache, but determines that there is no entry there and that the data offset has len == 0. Beyond entries simply not being found, at some point after the problem starts following the roll-over an NPE is thrown from the VanillaMappedFile#fileChannel method because it has been passed a null file path. The code path assumes a file is always found when an entry is successfully looked up in the index, but in this case it is not.

Is it possible to use Chronicle Queue reliably across data block roll-overs, and if so, what might I be doing that causes the problem I am seeing?

import java.io.IOException;
import java.util.Collection;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Set;

import org.junit.Before;
import org.junit.Test;

import net.openhft.affinity.AffinitySupport;
import net.openhft.chronicle.Chronicle;
import net.openhft.chronicle.ChronicleQueueBuilder;
import net.openhft.chronicle.ExcerptAppender;
import net.openhft.chronicle.ExcerptCommon;
import net.openhft.chronicle.ExcerptTailer;
import net.openhft.chronicle.VanillaChronicle;

public class ChronicleTests {
    private static final int CQ_LEN = VanillaChronicle.Cycle.DAYS.length();
    private static final long CQ_ENT = VanillaChronicle.Cycle.DAYS.entries();
    private static final String ROOT_DIR = System.getProperty(ChronicleTests.class.getName() + ".ROOT_DIR",
            "C:/Temp/chronicle/");
    private static final String QDIR = System.getProperty(ChronicleTests.class.getName() + ".QDIR", "chronicleTests");
    private static final int DATA_SIZE = Integer
            .parseInt(System.getProperty(ChronicleTests.class.getName() + ".DATA_SIZE", "100000"));
    // Chunk file size of CQ index
    private static final int INDX_SIZE = Integer
            .parseInt(System.getProperty(ChronicleTests.class.getName() + ".INDX_SIZE", "10000"));
    private static final int Q_ENTRIES = Integer
            .parseInt(System.getProperty(ChronicleTests.class.getName() + ".Q_ENTRIES", "5000"));
    // Data type id
    protected static final byte FSYNC_DATA = 1;
    protected static final byte NORMAL_DATA = 0;
    protected static final byte TH_START_DATA = -1;
    protected static final byte TH_END_DATA = -2;
    protected static final byte CQ_START_DATA = -3;
    private static final long MAX_RUNTIME_MILLISECONDS = 30000;

    private static String PAYLOAD_STRING = "1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
    private static byte PAYLOAD_BYTES[] = PAYLOAD_STRING.getBytes();

    private Chronicle _chronicle;
    private String _cqPath = ROOT_DIR + QDIR;

    @Before
    public void init() {
        buildCQ();
    }

    @Test
    public void test() throws IOException, InterruptedException {
        boolean passed = true;
        Collection<Long> missingEntries = new LinkedList<Long>();
        long sent = 0;
        Thread listener = listen();
        try {
            listener.start();
            // Write entries to the CQ and verify each one immediately after the write
            for (int i = 0; i < Q_ENTRIES; i++) {
                long entry = writeQEntry(PAYLOAD_BYTES, (i % 100) == 0);
                sent++;
                // check each entry can be looked up
                boolean found = checkEntry(i, entry);
                if (!found)
                    missingEntries.add(entry);
                passed &= found;
            }
            // Wait awhile for the listener
            listener.join(MAX_RUNTIME_MILLISECONDS);
            if (listener.isAlive())
                listener.interrupt();
        } finally {
            if (listener.isAlive()) { // => exception raised so wait for listener
                log("Give listener a chance....");
                sleep(MAX_RUNTIME_MILLISECONDS);
                listener.interrupt();
            }
            log("Sent: " + sent + " Received: " + _receivedEntries.size());
            // Look for missing entries in receivedEntries 
            missingEntries.forEach(me -> checkMissingEntry(me));
            log("All passed? " + passed);
            // Try to find missing entries by searching from the start...
            searchFromStartFor(missingEntries);
            _chronicle.close();
            _chronicle = null;
            // Re-initialise CQ and look for missing entries again...
            log("Re-initialise");
            init();
            searchFromStartFor(missingEntries);
        }
    }

    private void buildCQ() {
        try {
            // build chronicle queue
            _chronicle = ChronicleQueueBuilder.vanilla(_cqPath).cycleLength(CQ_LEN).entriesPerCycle(CQ_ENT)
                    .indexBlockSize(INDX_SIZE).dataBlockSize(DATA_SIZE).build();
        } catch (IOException e) {
            // InitializationException is our own exception type (not shown here).
            throw new InitializationException("Failed to initialize Active Trade Store.", e);
        }
    }

    private long writeQEntry(byte dataArray[], boolean fsync) throws IOException {
        ExcerptAppender appender = _chronicle.createAppender();
        return writeData(appender, dataArray, fsync);
    }

    private boolean checkEntry(int seqNo, long entry) throws IOException {
        ExcerptTailer tailer = _chronicle.createTailer();
        if (!tailer.index(entry)) {
            log("SeqNo: " + seqNo + " for entry + " + entry + " not found");
            return false;
        }
        boolean isMarker = isMarker(tailer);
        boolean isFsyncData = isFsyncData(tailer);
        boolean isNormalData = isNormalData(tailer);
        String type = isMarker ? "MARKER" : isFsyncData ? "FSYNC" : isNormalData ? "NORMALDATA" : "UNKNOWN";
        log("Entry: " + entry + "(" + seqNo + ") is " + type);
        return true;
    }

    private void log(String string) {
        System.out.println(string);
    }

    private void searchFromStartFor(Collection<Long> missingEntries) throws IOException {
        Set<Long> foundEntries = new HashSet<Long>(Q_ENTRIES);
        ExcerptTailer tailer = _chronicle.createTailer();
        tailer.toStart();
        while (tailer.nextIndex())
            foundEntries.add(tailer.index());
        Iterator<Long> iter = missingEntries.iterator();
        long foundCount = 0;
        while (iter.hasNext()) {
            long me = iter.next();
            if (foundEntries.contains(me)) {
                log("Found missing entry: " + me);
                foundCount++;
            }
        }
        log("searchFromStartFor Found: " + foundCount + " of: " + missingEntries.size() + " missing entries");
    }

    private void checkMissingEntry(long missingEntry) {
        if (_receivedEntries.contains(missingEntry))
            log("Received missing entry:" + missingEntry);
    }

    Set<Long> _receivedEntries = new HashSet<Long>(Q_ENTRIES);

    private Thread listen() {
        Thread returnVal = new Thread("Listener") {

            public void run() {
                try {
                    int receivedCount = 0;
                    ExcerptTailer tailer = _chronicle.createTailer();
                    tailer.toStart();
                    while (receivedCount < Q_ENTRIES) {
                        if (tailer.nextIndex()) {
                            _receivedEntries.add(tailer.index());
                            receivedCount++; // count each receipt so the loop can terminate
                        } else {
                            ChronicleTests.this.sleep(1);
                        }
                    }
                    log("listener complete");
                } catch (IOException e) {
                    log("Interupted before receiving all entries");
                }
            }
        };
        return returnVal;
    }

    private void sleep(long interval) {
        try {
            Thread.sleep(interval);
        } catch (InterruptedException e) {
            // No action required
        }
    }

    protected static final int THREAD_ID_LEN = Integer.SIZE / Byte.SIZE;
    protected static final int DATA_TYPE_LEN = Byte.SIZE / Byte.SIZE;
    protected static final int TIMESTAMP_LEN = Long.SIZE / Byte.SIZE;
    protected static final int CRC_LEN = Long.SIZE / Byte.SIZE;

    protected static long writeData(ExcerptAppender appender, byte dataArray[],
            boolean fsync) {
        appender.startExcerpt(DATA_TYPE_LEN + THREAD_ID_LEN + dataArray.length
                + CRC_LEN);
        appender.nextSynchronous(fsync);
        if (fsync) {
            appender.writeByte(FSYNC_DATA);
        } else {
            appender.writeByte(NORMAL_DATA);
        }
        appender.writeInt(AffinitySupport.getThreadId());
        appender.write(dataArray);
        // CRCCalculator is our own helper (not shown here) that computes a CRC over the excerpt's data area.
        appender.writeLong(CRCCalculator.calcDataAreaCRC(appender));
        appender.finish();
        return appender.lastWrittenIndex();
    }

    protected static boolean isMarker(ExcerptCommon excerpt) {
        if (isCqStartMarker(excerpt) || isStartMarker(excerpt) || isEndMarker(excerpt)) {
            return true;
        }
        return false;
    }

    protected static boolean isCqStartMarker(ExcerptCommon excerpt) {
        return isDataTypeMatched(excerpt, CQ_START_DATA);
    }

    protected static boolean isStartMarker(ExcerptCommon excerpt) {
        return isDataTypeMatched(excerpt, TH_START_DATA);
    }

    protected static boolean isEndMarker(ExcerptCommon excerpt) {
        return isDataTypeMatched(excerpt, TH_END_DATA);
    }

    protected static boolean isData(ExcerptTailer tailer, long index) {
        if (!tailer.index(index)) {
            return false;
        }
        return isData(tailer);
    }

    private static void movePosition(ExcerptCommon excerpt, long position) {
        if (excerpt.position() != position)
            excerpt.position(position);
    }

    private static void moveToFsyncFlagPos(ExcerptCommon excerpt) {
        movePosition(excerpt, 0);
    }

    private static boolean isDataTypeMatched(ExcerptCommon excerpt, byte type) {
        moveToFsyncFlagPos(excerpt);
        byte b = excerpt.readByte();
        if (b == type) {
            return true;
        }
        return false;
    }

    protected static boolean isNormalData(ExcerptCommon excerpt) {
        return isDataTypeMatched(excerpt, NORMAL_DATA);
    }

    protected static boolean isFsyncData(ExcerptCommon excerpt) {
        return isDataTypeMatched(excerpt, FSYNC_DATA);
    }

    /**
     * Check if this entry is Data
     * 
     * @param excerpt
     * @return true if the entry is data
     */
    protected static boolean isData(ExcerptCommon excerpt) {
        if (isNormalData(excerpt) || isFsyncData(excerpt)) {
            return true;
        }
        return false;
    }

}

2 Answers:

Answer 0 (score: 1):

This problem only occurs when the data block size is initialised with a value that is not a power of two. The built-in configurations (small(), medium(), large()) take care to initialise using powers of two, which provided the clue about the appropriate usage.
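
For illustration, a builder configuration along the following lines uses power-of-two block sizes (a minimal sketch against the same V3 builder API used in the test above; the path and the particular sizes are illustrative choices, not prescriptions):

import java.io.IOException;

import net.openhft.chronicle.Chronicle;
import net.openhft.chronicle.ChronicleQueueBuilder;

public class PowerOfTwoBlockSizes {
    public static void main(String[] args) throws IOException {
        // 1 << 26 = 67108864 (64 MiB) and 1 << 22 = 4194304 (4 MiB): both powers of two,
        // unlike the DATA_SIZE=100000 / INDX_SIZE=10000 values in the failing test.
        Chronicle chronicle = ChronicleQueueBuilder.vanilla("C:/Temp/chronicle/powerOfTwo")
                .dataBlockSize(1 << 26)
                .indexBlockSize(1 << 22)
                .build();
        chronicle.close();
    }
}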

Notwithstanding the other answer regarding support, which I fully appreciate, it would be useful if a knowledgeable Chronicle user could confirm that the integrity of Chronicle Queue depends on using a data block size that is a power of two.
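
In the meantime, a simple guard can at least fail fast on a non-power-of-two size before the queue is built (a sketch; isPowerOfTwo and requirePowerOfTwo are my own helpers, not part of the Chronicle API):

public class BlockSizeGuard {
    // A positive int is a power of two iff exactly one bit is set,
    // i.e. clearing the lowest set bit leaves zero.
    static boolean isPowerOfTwo(int size) {
        return size > 0 && (size & (size - 1)) == 0;
    }

    static int requirePowerOfTwo(String name, int size) {
        if (!isPowerOfTwo(size)) {
            throw new IllegalArgumentException(name + " must be a power of two, got: " + size);
        }
        return size;
    }

    public static void main(String[] args) {
        System.out.println(isPowerOfTwo(100000));  // false: the DATA_SIZE that triggered the losses
        System.out.println(isPowerOfTwo(1 << 26)); // true
        requirePowerOfTwo("dataBlockSize", 100000); // throws IllegalArgumentException
    }
}

The builder call in the test would then become .dataBlockSize(requirePowerOfTwo("dataBlockSize", DATA_SIZE)).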

Answer 1 (score: 0):

Sorry, but we don't provide free support for Chronicle Queue 3; we only investigate and fix issues against the latest trunk version of our open-source libraries, unless you hold a support contract.

You may want to try upgrading to the latest version of Chronicle Queue.

If you would like more information on our various support contracts, please email sales@chronicle.software.