Trying to select from keyspace

Time: 2017-04-07 13:23:43

Tags: cassandra replication

I have 3 nodes in my Cassandra (3.0.2) cluster. My consistency level is "ONE". Initially, all my keyspaces had a replication factor of 1. I changed it with an ALTER statement and ran "nodetool repair" on all nodes. Now, when I try to select some data (not from every keyspace), I get something like this (select * from keyspace.table):


Traceback (most recent call last):
  File "/usr/bin/cqlsh.py", line 1258, in perform_simple_statement
    result = future.result()
  File "cassandra/cluster.py", line 3781, in cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:73073)
    raise self._final_exception
ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] message="Operation failed - received 0 responses and 1 failures" info={'failures': 1, 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}

In "/var/log/cassandra/system.log" I see:


WARN  [SharedPool-Worker-2] 2017-04-07 12:46:20,036 AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread Thread[SharedPool-Worker-2,5,main]: {}
java.lang.AssertionError: null
    at org.apache.cassandra.db.columniterator.AbstractSSTableIterator$IndexState.updateBlock(AbstractSSTableIterator.java:463) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.columniterator.SSTableIterator$ForwardIndexedReader.computeNext(SSTableIterator.java:268) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.columniterator.SSTableIterator$ForwardReader.hasNextInternal(SSTableIterator.java:158) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.columniterator.AbstractSSTableIterator$Reader.hasNext(AbstractSSTableIterator.java:352) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.columniterator.AbstractSSTableIterator.hasNext(AbstractSSTableIterator.java:219) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.columniterator.SSTableIterator.hasNext(SSTableIterator.java:32) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:426) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:286) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:108) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:131) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:87) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:77) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:298) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:128) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:289) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1721) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2375) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_121]
    at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) ~[apache-cassandra-3.0.2.jar:3.0.2]
    at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-3.0.2.jar:3.0.2]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
DEBUG [SharedPool-Worker-1] 2017-04-07 12:46:20,037 ReadCallback.java:126 - Failed; received 0 of 1 responses

I also get:


DEBUG [SharedPool-Worker-1] 2017-04-07 13:20:30,002 ReadCallback.java:126 - Timed out; received 0 of 1 responses

I checked that there is connectivity between the nodes on ports 9042 and 7000. I changed values in "/etc/cassandra/cassandra.yml" such as "read_request_timeout_in_ms", "range_request_timeout_in_ms", "write_request_timeout_in_ms" and "truncate_request_timeout_in_ms". I also changed the file "~/.cassandra/cqlshrc" and set the option "client_timeout = 3600". Additionally, when I run "select * from keyspace.table where column1 = 'value' and column2 = value" I get:


ReadTimeout: Error from server: code=1200 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}

Any ideas?

3 answers:

Answer 0: (score: 0)

This is more or less a comment, but since there is a lot to say, it wouldn't fit in a comment.

It would be really helpful if you posted the replication factor you changed the value to. I'm just going to assume it's 3, since that's pretty standard. Then again, with a cluster of only 3 nodes, RF is sometimes set to 2. You also mention that you updated the replication factor on a table. As far as I know, the replication factor is set at the keyspace level.
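For what it's worth, a keyspace-level replication change (presumably what happened here) looks something like the following sketch; the keyspace name and the RF of 3 are assumptions, not something stated in the question:

```sql
-- Replication is configured per keyspace, not per table.
-- "my_keyspace" and RF=3 are placeholders for the actual values.
ALTER KEYSPACE my_keyspace
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
```

After such a change, `nodetool repair` has to be run on every node (as the question says was done) so that existing data is streamed to the new replicas.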

It would also be very useful if you posted the description of the keyspace where the errors occur.

Bear in mind that a select * from something can be quite intensive on your cluster, especially if you have a lot of data. If you run this query in cqlsh, you would probably get around 10 000 rows back. Then again, you only mention cqlsh and no application code, so I'm just noting this.
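As an illustration of keeping such a read bounded (the keyspace and table names here are just the placeholders from the question):

```sql
-- Cap the result set instead of pulling the whole table;
-- cqlsh also pages results by default (PAGING is ON).
SELECT * FROM keyspace.table LIMIT 1000;
```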

Could you also provide your nodetool status output, to make sure you aren't actually running the queries with some nodes down? That would fit the first error.

For the second error where you posted the stack trace, it looks like you might be missing some SSTables on disk? Is it possible that some other process manipulated the SSTables somehow?

You also changed a lot of properties in cassandra.yaml; basically, you lowered the expected response times by almost 50%, and I would guess the nodes simply don't have time to respond... A select count over an entire table usually takes more than 3.6 seconds.
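For comparison, these are (to the best of my knowledge) the stock defaults for the mentioned timeouts in Cassandra 3.0.x; they are worth checking against whatever the values were changed to:

```yaml
# Default request timeouts in cassandra.yaml (Cassandra 3.0.x)
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 2000
truncate_request_timeout_in_ms: 60000
```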

It would be much easier to reason about this if you posted the values you changed those properties to.

Answer 1: (score: 0)

Marko Švaljek, yes, I changed the replication factor from 1 to 3 (because I have 3 nodes in my cluster). You are right; you change the replication factor on the keyspace, and that is what I did. Here is the description of the keyspace where I usually get the errors (though of course it happens with other keyspaces too):

soi@cqlsh> desc keyspace engine;

CREATE KEYSPACE engine WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'}  AND durable_writes = true;

CREATE TABLE engine.messages (
    persistence_id text,
    partition_nr bigint,
    sequence_nr bigint,
    timestamp timeuuid,
    timebucket text,
    message blob,
    tag1 text,
    tag2 text,
    tag3 text,
    used boolean static,
    PRIMARY KEY ((persistence_id, partition_nr), sequence_nr, timestamp, timebucket)
) WITH CLUSTERING ORDER BY (sequence_nr ASC, timestamp ASC, timebucket ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'bucket_high': '1.5', 'bucket_low': '0.5', 'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'enabled': 'true', 'max_threshold': '32', 'min_sstable_size': '50', 'min_threshold': '4', 'tombstone_compaction_interval': '86400', 'tombstone_threshold': '0.2', 'unchecked_tombstone_compaction': 'false'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';

CREATE MATERIALIZED VIEW engine.eventsbytag1 AS
    SELECT tag1, timebucket, timestamp, persistence_id, partition_nr, sequence_nr, message
    FROM engine.messages
    WHERE persistence_id IS NOT NULL AND partition_nr IS NOT NULL AND sequence_nr IS NOT NULL AND tag1 IS NOT NULL AND timestamp IS NOT NULL AND timebucket IS NOT NULL
    PRIMARY KEY ((tag1, timebucket), timestamp, persistence_id, partition_nr, sequence_nr)
    WITH CLUSTERING ORDER BY (timestamp ASC, persistence_id ASC, partition_nr ASC, sequence_nr ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';

CREATE TABLE engine.config (
    property text PRIMARY KEY,
    value text
) WITH bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';

CREATE TABLE engine.metadata (
    persistence_id text PRIMARY KEY,
    deleted_to bigint,
    properties map<text, text>
) WITH bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';

Usually I get error code no. 1200 or 1300, as you can see in the first post. Here is my "nodetool status":

ubuntu@cassandra-db1:~$ nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens       Owns    Host ID                               Rack
UN  192.168.1.13  3.94 MB    256          ?       8ebcc3fe-9869-44c5-b7a5-e4f0f5a0beb1  rack1
UN  192.168.1.14  4.26 MB    256          ?       977831cb-98fe-4170-ab15-2b4447559003  rack1
UN  192.168.1.15  4.94 MB    256          ?       7515a967-cbdc-4d89-989b-c0a2f124173f  rack1

Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless

I don't think some other process could have manipulated the data on disk. I'll add that I have a similar cluster with more data, and I don't have such problems there.

Answer 2: (score: 0)

Fixed! I upgraded Cassandra from version 3.0.2 to 3.0.9 and that solved the problem.