Executing a LOGGED BATCH warning in Cassandra logs

Asked: 2019-02-06 07:20:27

Tags: cassandra datastax datastax-enterprise batch-insert batching

Our Java application performs batched inserts into one of our tables, whose schema looks like this:

CREATE TABLE "My_KeySpace"."my_table" (
    key text,
    column1 varint,
    column2 bigint,
    column3 text,
    column4 boolean,
    value blob,
    PRIMARY KEY (key, column1, column2, column3, column4)
) WITH CLUSTERING ORDER BY ( column1 DESC, column2 DESC, column3 ASC, column4 ASC )
AND COMPACT STORAGE
AND bloom_filter_fp_chance = 0.1
AND comment = ''
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 0
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = 'NONE'
AND caching = {
    'keys' : 'ALL',
    'rows_per_partition' : 'NONE'
}
AND compression = {
    'chunk_length_in_kb' : 64,
    'class' : 'LZ4Compressor',
    'enabled' : true
}
AND compaction = {
    'class' : 'LeveledCompactionStrategy',
    'sstable_size_in_mb' : 5
};
In the schema above, gc_grace_seconds = 0, so I am getting the following warning:

2019-02-05 01:59:53.087 WARN   [SharedPool-Worker-5 - org.apache.cassandra.cql3.statements.BatchStatement:97] Executing a LOGGED BATCH on table [My_KeySpace.my_table], configured with a gc_grace_seconds of 0. The gc_grace_seconds is used to TTL batchlog entries, so setting gc_grace_seconds too low on tables involved in an atomic batch might cause batchlog entries to expire before being replayed.

I have looked at the Cassandra source code, and for obvious reasons the warning is emitted at: this line

Is there any solution that does not require changing the application's batching code? Should I increase gc_grace_seconds?

1 answer:

Answer 0 (score: 0)

In Cassandra, batches are not a way to optimize inserts into the database; they are usually used for coordinating writes into multiple tables, etc. If you use a batch for inserts into multiple partitions, you may even get worse performance.
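As an illustration of that coordinating use case, here is a minimal sketch with the DataStax Java driver 3.x; the events_by_user and events_by_day tables, the contact point, and all values are hypothetical, not part of the question's schema:

import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class LoggedBatchSketch {
    public static void main(String[] args) {
        // Contact point is a placeholder; the keyspace name is quoted
        // inside the string because it is case-sensitive.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("\"My_KeySpace\"")) {

            // Two hypothetical denormalized views of the same event; the
            // LOGGED batch guarantees both writes eventually apply together.
            PreparedStatement byUser = session.prepare(
                "INSERT INTO events_by_user (user_id, event_id, payload) VALUES (?, ?, ?)");
            PreparedStatement byDay = session.prepare(
                "INSERT INTO events_by_day (day, event_id, payload) VALUES (?, ?, ?)");

            BatchStatement batch = new BatchStatement(BatchStatement.Type.LOGGED);
            batch.add(byUser.bind("user-1", "evt-42", "payload"));
            batch.add(byDay.bind("2019-02-05", "evt-42", "payload"));
            session.execute(batch);
        }
    }
}

The point of Type.LOGGED here is atomicity across the two tables, not throughput; the batchlog mentioned in the warning above is exactly the mechanism that replays these writes if a node fails mid-batch.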

You can get better insert throughput by using asynchronous command execution (via executeAsync) and/or by using batches, but only for inserts that target the same partition.
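A minimal sketch of the executeAsync approach, again assuming DataStax Java driver 3.x; the contact point and the generated values are placeholders, while the column names and types match the table above:

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;

import java.math.BigInteger;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class AsyncInsertSketch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("\"My_KeySpace\"")) {

            PreparedStatement insert = session.prepare(
                "INSERT INTO my_table (key, column1, column2, column3, column4, value) "
              + "VALUES (?, ?, ?, ?, ?, ?)");

            // Fire each insert asynchronously instead of packing them into a
            // LOGGED batch; every write goes straight to its replicas.
            List<ResultSetFuture> futures = new ArrayList<>();
            for (int i = 0; i < 100; i++) {
                BoundStatement bound = insert.bind(
                    "key-" + i,                       // key      (text)
                    BigInteger.valueOf(i),            // column1  (varint)
                    (long) i,                         // column2  (bigint)
                    "c3-" + i,                        // column3  (text)
                    i % 2 == 0,                       // column4  (boolean)
                    ByteBuffer.wrap(new byte[] {1})); // value    (blob)
                futures.add(session.executeAsync(bound));
            }

            // Block until all writes complete; getUninterruptibly rethrows
            // any server-side failure.
            for (ResultSetFuture f : futures) {
                f.getUninterruptibly();
            }
        }
    }
}

In practice you would also cap the number of in-flight futures (for example with a Semaphore) so that a large load does not overwhelm the cluster.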