Hi everyone,
A few days ago I upgraded our 6-node EC2 cluster from Cassandra 2.1.4 to 2.1.5.
Since then, all of my nodes have "exploded" in CPU usage - they sit at 100% CPU most of the time, and their load averages run between 100-300 (!!!).
It did not start immediately after the upgrade. It began a few hours later on one of the nodes, and slowly more and more nodes started showing the same behavior. It seems to correlate with compaction of our largest column family, and once compaction finishes (about 24 hours after it starts) the node appears to return to normal. It has only been about 2 days, so I hope it will not happen again, but I am still monitoring it.
Here are my questions:
If it is expected behavior -
If it is a bug -
Any feedback on this would be great.
Thanks,
Amir
UPDATE:
Here is the structure of the relevant table.
CREATE TABLE tbl1(
key text PRIMARY KEY,
created_at timestamp,
customer_id bigint,
device_id bigint,
event text,
fail_count bigint,
generation bigint,
gr_id text,
imei text,
raw_post text,
"timestamp" timestamp
) WITH COMPACT STORAGE
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32'}
AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = 'NONE';
The logs do not show much (at least to me). Here is a snippet of what they look like:
INFO [WRITE-/10.0.1.142] 2015-05-23 05:43:42,577 YamlConfigurationLoader.java:92 - Loading settings from file:/etc/cassandra/cassandra.yaml
INFO [WRITE-/10.0.1.142] 2015-05-23 05:43:42,580 YamlConfigurationLoader.java:135 - Node configuration:[authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_snapshot=true; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; broadcast_rpc_address=10.0.2.145; cas_contention_timeout_in_ms=1000; client_encryption_options=; cluster_name=Gryphonet21 Cluster; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_directory=/data/cassandra/commitlog; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_period_in_ms=10000; compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; data_file_directories=[/data/cassandra/data]; disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; endpoint_snitch=GossipingPropertyFileSnitch; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; internode_compression=all; key_cache_save_period=14400; key_cache_size_in_mb=null; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers; native_transport_port=9042; num_tokens=16; partitioner=RandomPartitioner; permissions_validity_in_ms=2000; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_timeout_in_ms=10000; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=0.0.0.0; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; saved_caches_directory=/data/cassandra/saved_caches; seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, parameters=[{seeds=10.0.1.141,10.0.2.145,10.0.3.149}]}]; server_encryption_options=; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=true; storage_port=7000; thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; write_request_timeout_in_ms=2000]
INFO [HANDSHAKE-/10.0.1.142] 2015-05-23 05:43:42,591 OutboundTcpConnection.java:494 - Cannot handshake version with /10.0.1.142
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,713 MessagingService.java:887 - 135 MUTATION messages dropped in last 5000ms
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,713 StatusLogger.java:51 - Pool Name Active Pending Completed Blocked All Time Blocked
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,714 StatusLogger.java:66 - CounterMutationStage 0 0 0 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,714 StatusLogger.java:66 - ReadStage 5 1 5702809 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,715 StatusLogger.java:66 - RequestResponseStage 0 45 29528010 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,715 StatusLogger.java:66 - ReadRepairStage 0 0 997 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,715 StatusLogger.java:66 - MutationStage 0 31 43404309 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,716 StatusLogger.java:66 - GossipStage 0 0 569931 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,716 StatusLogger.java:66 - AntiEntropyStage 0 0 0 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,716 StatusLogger.java:66 - CacheCleanupExecutor 0 0 0 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,717 StatusLogger.java:66 - MigrationStage 0 0 9 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,829 StatusLogger.java:66 - ValidationExecutor 0 0 0 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,830 StatusLogger.java:66 - Sampler 0 0 0 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,830 StatusLogger.java:66 - MiscStage 0 0 0 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,831 StatusLogger.java:66 - CommitLogArchiver 0 0 0 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,831 StatusLogger.java:66 - MemtableFlushWriter 1 1 1756 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,831 StatusLogger.java:66 - PendingRangeCalculator 0 0 11 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,832 StatusLogger.java:66 - MemtableReclaimMemory 0 0 1756 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,832 StatusLogger.java:66 - MemtablePostFlush 1 2 3819 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,832 StatusLogger.java:66 - CompactionExecutor 2 32 742 0 0
INFO [ScheduledTasks:1] 2015-05-23 05:43:42,833 StatusLogger.java:66 - InternalResponseStage 0 0 0 0 0
INFO [HANDSHAKE-/10.0.1.142] 2015-05-23 05:43:45,086 OutboundTcpConnection.java:485 - Handshaking version with /10.0.1.142
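As an aside, the StatusLogger dump above is easier to read programmatically. The following sketch (my own illustration, not part of the original post) parses pool lines in the StatusLogger format - columns are Active, Pending, Completed, Blocked, All Time Blocked - and flags pools with pending work; the sample lines are copied from the log above, and the backed-up CompactionExecutor is what points at compaction:

```python
# Parse StatusLogger-style pool lines and flag pools with pending tasks.
# Sample lines copied from the log snippet above (illustration only).
LOG_LINES = """\
ReadStage 5 1 5702809 0 0
MutationStage 0 31 43404309 0 0
MemtableFlushWriter 1 1 1756 0 0
CompactionExecutor 2 32 742 0 0
InternalResponseStage 0 0 0 0 0
"""

def pending_pools(text):
    """Return {pool_name: pending} for pools whose Pending column is non-zero."""
    result = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) < 6:
            continue  # skip headers / malformed lines
        name, pending = parts[0], int(parts[2])
        if pending > 0:
            result[name] = pending
    return result

print(pending_pools(LOG_LINES))
# → {'ReadStage': 1, 'MutationStage': 31, 'MemtableFlushWriter': 1, 'CompactionExecutor': 32}
```

With 32 pending compactions and 31 pending mutations, the thread pools corroborate the compaction-related slowdown described above.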
UPDATE:
The problem persists. I thought the nodes returned to normal once a single compaction round finished on each node, but they did not. After a few hours, CPU jumped back to 100% and load average to 100-300.
I am downgrading back to 2.1.4.
UPDATE:
Used phact's dumpThreads script to get stack traces. Also tried jvmtop, but it just seemed to hang.
The output is too large to paste here, but you can find it at http://downloads.gryphonet.com/cassandra/.
Username: cassandra Password: cassandra
Answer 0 (score: 1):
Try using jvmtop to see what the cassandra process is doing. It has two modes: one shows the currently running threads, and the other shows the CPU distribution per class (--profile). Please paste both outputs here.
Answer 1 (score: 1):
Answering my own question -
We were using one very specific thrift API - describe_splits_ex - and that appears to have caused the problem. It became obvious once we looked at the stack traces of all the different threads while CPU usage was at 100%. For us it was an easy fix: we used this API as an optimization, not a necessity, so we simply stopped using it and the problem went away.
However, this API is also used by the cassandra-hadoop connector (at least it was in earlier versions), so if you use the connector I would test before upgrading to 2.1.5.
I am not sure what changed in 2.1.5 to cause this, but I know it did not happen in 2.1.4 and happened consistently in 2.1.5.