I am using Chronicle Queue v5.16.11 (with OpenJDK 11) with a daily roll cycle. The process runs continuously from Sunday to Friday, so I get 5 cq4 files per week. I ran the process for 1.5 weeks and have 8 files (3 files from week 1, 5 files from week 2).
The files I have are:
20181003.cq4 cycle=17807,
20181004.cq4 cycle=17808,
20181005.cq4 cycle=17809,
20181007.cq4 cycle=17811,
20181008.cq4 cycle=17812,
20181009.cq4 cycle=17813,
20181010.cq4 cycle=17814,
20181011.cq4 cycle=17815,
Note that the file 20181006.cq4 (cycle = 17810) is missing, because the process does not run on Saturdays.
I read the data with the following code:
tailer.toEnd();
lastTailerIndex = tailer.index();
tailer.toStart();
while (tailer.index() <= lastTailerIndex) {
    // read data
    if (tailer.readBytes(data)) {
        // do something with the data bytes
    }
    if (tailer.index() == lastTailerIndex) {
        break;
    }
}
This reads the first week's data correctly, but it never reads the second week's data, because the tailer does not automatically roll over to the next cycle.
Any idea why this happens, or how to work around it?
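One thing worth knowing when debugging a loop like the one above: a tailer index is not a plain counter, it packs the roll cycle into its upper bits, so indices jump by a huge amount at every cycle boundary. The sketch below illustrates that encoding in plain Java. The 32-bit shift mirrors what the 5.x sources use for DAILY cycles, but that is an assumption; in real code use `queue.rollCycle().toCycle(index)` rather than hand-rolled bit math.

```java
public class CycleIndexDemo {
    // Assumed layout for DAILY roll cycles: upper bits = cycle number
    // (days since epoch), lower 32 bits = sequence number within the cycle.
    static final int CYCLE_SHIFT = 32;

    static int toCycle(long index) {
        return (int) (index >>> CYCLE_SHIFT);
    }

    static long toSequenceNumber(long index) {
        return index & ((1L << CYCLE_SHIFT) - 1);
    }

    public static void main(String[] args) {
        // Entry with sequence number 42 inside cycle 17811 (20181007.cq4).
        long index = (17811L << CYCLE_SHIFT) | 42L;
        System.out.println(toCycle(index));          // 17811
        System.out.println(toSequenceNumber(index)); // 42
        // Raw indices still order correctly across cycles, but a tailer
        // positioned at a missing cycle (17810 here) has no file behind it,
        // which is exactly where the roll-over logic has to skip ahead.
    }
}
```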
The problem looks similar to an issue reported against an older version.
Logs:
2018-10-12 12:41:15,784 DEBUG [main] net.openhft.chronicle.bytes.MappedFile - Allocation of 0 chunk in /site/data/metadata.cq4t took 19.237 ms.
2018-10-12 12:41:15,876 DEBUG [main] net.openhft.chronicle.bytes.MappedFile - Allocation of 0 chunk in /site/data/20181011.cq4 took 0.063 ms.
2018-10-12 12:41:15,881 DEBUG [main] net.openhft.chronicle.queue.impl.single.PretoucherState - /site/data/20181011.cq4 - Reset pretoucher to pos 4835096 as the underlying MappedBytes changed.
2018-10-12 12:41:15,887 DEBUG [main] net.openhft.chronicle.bytes.MappedFile - Allocation of 0 chunk in /site/data/20181003.cq4 took 0.065 ms.
2018-10-12 12:41:15,995 DEBUG [main] net.openhft.chronicle.bytes.MappedFile - Allocation of 0 chunk in /site/data/20181011.cq4 took 0.082 ms.
2018-10-12 12:41:15,996 DEBUG [main] net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder - File released /site/data/20181003.cq4
2018-10-12 12:41:15,997 DEBUG [main] net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder - File released /site/data/20181011.cq4
2018-10-12 12:41:16,418 DEBUG [main] net.openhft.chronicle.bytes.MappedFile - Allocation of 0 chunk in /site/data/20181004.cq4 took 0.112 ms.
2018-10-12 12:41:16,418 DEBUG [main] net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder - File released /site/data/20181003.cq4
2018-10-12 12:41:16,813 DEBUG [main] net.openhft.chronicle.bytes.MappedFile - Allocation of 0 chunk in /site/data/20181005.cq4 took 0.084 ms.
2018-10-12 12:41:16,813 DEBUG [main] net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder - File released /site/data/20181004.cq4
[Edit 1]:
The same thing happened over the past weekend: as expected, no new file was created on October 13th. I now have files from October 7th to October 15th (with the October 13th file missing). If I run tailer.toStart(); while (tailer.readBytes(data)) { ... }, it only reads the files from October 7th to October 12th, and never reads October 14th or 15th.
[Edit 2]: Reproduced the problem with the code below (Chronicle-Queue/issues/537)
import net.openhft.chronicle.bytes.Bytes;
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.ExcerptTailer;
import net.openhft.chronicle.queue.RollCycles;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueue;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;

import java.time.LocalTime;
import java.time.ZoneId;
import java.util.Date;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class WriterProcess {
    public static void main(String[] args) throws InterruptedException {
        final String dir = "/tmp/demo/";
        final LocalTime localTime = LocalTime.of(17, 0);
        final ZoneId zoneID = ZoneId.of("America/New_York");
        final ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(2);
        final SingleChronicleQueue queue = SingleChronicleQueueBuilder.binary(dir)
                .blockSize((long) Math.pow(2, 23))
                .rollCycle(RollCycles.MINUTELY)
                .rollTime(localTime, zoneID)
                .build();
        final ExcerptAppender appender = queue.acquireAppender();

        // pre-touch
        scheduledExecutorService.scheduleAtFixedRate(appender::pretouch, 0, 30, TimeUnit.SECONDS);

        // write data
        System.out.println("writing data ...");
        writeData(appender, 5);

        // close queue
        System.out.println("shutting down now ...");
        queue.close();
        scheduledExecutorService.shutdown();
        scheduledExecutorService.awaitTermination(1, TimeUnit.SECONDS);
    }

    public static void writeData(ExcerptAppender appender, int count) {
        int ctr = 0;
        String dateStr;
        Date date = new Date();
        while (true) {
            dateStr = date.toString();
            appender.writeText("[" + ctr + "] Written " + dateStr);
            System.out.println("[" + ctr + "] Written " + dateStr);
            ctr++;
            if (ctr >= count) {
                break;
            }
            try {
                Thread.sleep(65_000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

public class ReaderProcess {
    public static void main(String[] args) {
        final String dir = "/tmp/demo/";
        final LocalTime localTime = LocalTime.of(17, 0);
        final ZoneId zoneID = ZoneId.of("America/New_York");
        final SingleChronicleQueue queue = SingleChronicleQueueBuilder.binary(dir)
                .blockSize((long) Math.pow(2, 23))
                .rollCycle(RollCycles.MINUTELY)
                .rollTime(localTime, zoneID)
                .build();
        final ExcerptTailer tailer = queue.createTailer();
        tailer.toStart();

        // read data
        System.out.println("reading data ...");
        readData(tailer, 25);

        // close
        System.out.println("shutting down now ...");
        queue.close();
    }

    public static void readData(ExcerptTailer tailer, int count) {
        int ctr = 0;
        Bytes data = Bytes.allocateDirect(new byte[500]);
        while (true) {
            if (tailer.readBytes(data)) {
                System.out.println("[" + ctr + "] Read {" + data + "}");
                ctr++;
                if (ctr >= count) {
                    break;
                }
            }
        }
    }
}
Answer 0 (score: 0)
I wrote a slightly simpler version which works with Chronicle 2.17 and the versions it pulls in. The biggest change I made was to clear the bytes data before each read; otherwise it only appends, so as not to overwrite anything.
import net.openhft.chronicle.bytes.Bytes;
import net.openhft.chronicle.core.OS;
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.ExcerptTailer;
import net.openhft.chronicle.queue.RollCycles;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueue;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;

import java.time.LocalDateTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class WriterProcess {
    static final String dir = OS.TMP + "/demo-" + System.nanoTime() + "/";

    public static void main(String[] args) throws InterruptedException {
        final ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(2);
        final SingleChronicleQueue queue = SingleChronicleQueueBuilder.binary(dir)
                .testBlockSize()
                .rollCycle(RollCycles.TEST_SECONDLY)
                .build();
        final ExcerptAppender appender = queue.acquireAppender();

        // pre-touch
        scheduledExecutorService.scheduleAtFixedRate(appender::pretouch, 3, 30, TimeUnit.SECONDS);

        new Thread(ReaderProcess::main).start();

        // write data
        System.out.println("writing data ...");
        writeData(appender, 100);

        // close queue
        System.out.println("shutting down now ...");
        queue.close();
        scheduledExecutorService.shutdown();
        scheduledExecutorService.awaitTermination(1, TimeUnit.SECONDS);
    }

    public static void writeData(ExcerptAppender appender, int count) {
        int ctr = 0;
        while (true) {
            LocalDateTime date = LocalDateTime.now();
            appender.writeText("[" + ctr + "] Written " + date);
            System.out.println("[" + ctr + "] Written " + date);
            ctr++;
            if (ctr >= count) {
                break;
            }
            try {
                Thread.sleep(2_200);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

class ReaderProcess {
    public static void main(String... args) {
        final String dir = WriterProcess.dir;
        final SingleChronicleQueue queue = SingleChronicleQueueBuilder.binary(dir)
                .testBlockSize()
                .rollCycle(RollCycles.TEST_SECONDLY)
                .build();
        final ExcerptTailer tailer = queue.createTailer();
        tailer.toStart();

        // read data
        System.out.println("reading data ...");
        readData(tailer, 100);

        // close
        System.out.println("shutting down now ...");
        queue.close();
    }

    public static void readData(ExcerptTailer tailer, int count) {
        int ctr = 0;
        Bytes data = Bytes.allocateDirect(64);
        while (true) {
            data.clear(); // reset the buffer, otherwise readBytes appends after the previous message
            if (tailer.readBytes(data)) {
                System.out.println("[" + ctr + "] Read {" + data + "}");
                ctr++;
                if (ctr >= count) {
                    break;
                }
            }
        }
    }
}
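The effect of the missing clear() can be seen in isolation with plain java.nio.ByteBuffer, which has the same positional write semantics as the Bytes buffer above (this is an analogy in standard Java, not Chronicle's Bytes API): each write advances the buffer's position, so reusing the buffer without resetting it appends rather than replaces.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ClearDemo {
    // Write two messages into the same buffer without resetting it:
    // the second one lands after the first, just as readBytes(data)
    // kept appending into the reader's Bytes buffer.
    static String withoutClear() {
        ByteBuffer buf = ByteBuffer.allocate(64);
        buf.put("msg1".getBytes(StandardCharsets.UTF_8));
        buf.put("msg2".getBytes(StandardCharsets.UTF_8)); // appended, not overwritten
        return new String(buf.array(), 0, buf.position(), StandardCharsets.UTF_8);
    }

    // Resetting the position between uses makes each write start fresh.
    static String withClear() {
        ByteBuffer buf = ByteBuffer.allocate(64);
        buf.put("msg1".getBytes(StandardCharsets.UTF_8));
        buf.clear(); // reset position before reuse
        buf.put("msg2".getBytes(StandardCharsets.UTF_8));
        return new String(buf.array(), 0, buf.position(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(withoutClear()); // msg1msg2
        System.out.println(withClear());    // msg2
    }
}
```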