I am new to Cassandra. After installing DSE on CentOS, I was able to start the DSE service successfully, but I cannot start the Solr service. An error appears when starting Solr; please check the error log below.
[dba@support dse]$ bin/dse cassandra -s
Tomcat: Logging to /home/dba/tomcat
[dba@support dse]$ 18:08:21,873 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/home/Datastax/dse/resources/cassandra/conf/logback.xml]
18:08:22,484 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
18:08:22,493 |-INFO in ReconfigureOnChangeFilter{invocationCounter=0} - Will scan for changes in [[/home/Datastax/dse/resources/cassandra/conf/logback.xml]] every 60 seconds.
18:08:22,493 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - Adding ReconfigureOnChangeFilter as a turbo filter
18:08:22,537 |-INFO in ch.qos.logback.classic.joran.action.JMXConfiguratorAction - begin
18:08:22,822 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender]
18:08:22,828 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [FILE]
18:08:22,941 |-INFO in ch.qos.logback.core.rolling.FixedWindowRollingPolicy@77878e70 - Will use zip compression
18:08:22,986 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
18:08:23,037 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - Active log file name: /home/Datastax/log/cassandra/system.log
18:08:23,037 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - File property is set to [/home/Datastax/log/cassandra/system.log]
18:08:23,039 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - openFile(/home/Datastax/log/cassandra/system.log,true) call failed. java.io.FileNotFoundException: /home/Datastax/log/cassandra/system.log (Permission denied)
at java.io.FileNotFoundException: /home/Datastax/log/cassandra/system.log (Permission denied)
at at java.io.FileOutputStream.open(Native Method)
at at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at at ch.qos.logback.core.recovery.ResilientFileOutputStream.<init>(ResilientFileOutputStream.java:28)
at at ch.qos.logback.core.FileAppender.openFile(FileAppender.java:150)
at at ch.qos.logback.core.FileAppender.start(FileAppender.java:108)
at at ch.qos.logback.core.rolling.RollingFileAppender.start(RollingFileAppender.java:86)
at at ch.qos.logback.core.joran.action.AppenderAction.end(AppenderAction.java:96)
at at ch.qos.logback.core.joran.spi.Interpreter.callEndAction(Interpreter.java:317)
at at ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:196)
at at ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:182)
at at ch.qos.logback.core.joran.spi.EventPlayer.play(EventPlayer.java:62)
at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:149)
at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:135)
at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:99)
at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:49)
at at ch.qos.logback.classic.util.ContextInitializer.configureByResource(ContextInitializer.java:75)
at at ch.qos.logback.classic.util.ContextInitializer.autoConfig(ContextInitializer.java:150)
at at org.slf4j.impl.StaticLoggerBinder.init(StaticLoggerBinder.java:85)
at at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:55)
at at org.slf4j.LoggerFactory.bind(LoggerFactory.java:142)
at at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:121)
at at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:332)
at at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:284)
at at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:305)
at at com.datastax.bdp.server.AbstractDseModule.<clinit>(AbstractDseModule.java:20)
18:08:23,933 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About t
INFO 12:38:25 Load of settings is done.
INFO 12:38:25 CQL slow log is enabled
INFO 12:38:25 CQL system info tables are not enabled
INFO 12:38:25 Resource level latency tracking is not enabled
INFO 12:38:25 Database summary stats are not enabled
INFO 12:38:25 Cluster summary stats are not enabled
INFO 12:38:25 Histogram data tables are not enabled
INFO 12:38:25 User level latency tracking is not enabled
INFO 12:38:25 Spark cluster info tables are not enabled
INFO 12:38:25 Loading settings from file:/home/Datastax/dse/resources/cassandra/conf/cassandra.yaml
INFO 12:38:25 Node configuration:[authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_snapshot=true; batch_size_warn_threshold_in_kb=64; batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; client_encryption_options=<REDACTED>; cluster_name=Cassandra Cluster; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_directory=/home/Datastax/commitlog; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_period_in_ms=10000; compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; data_file_directories=[/home/Datastax/data]; disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; endpoint_snitch=com.datastax.bdp.snitch.DseSimpleSnitch; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; internode_compression=dc; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=172.16.16.250; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers; native_transport_port=9042; num_tokens=256; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_validity_in_ms=2000; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_timeout_in_ms=10000; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=172.16.16.250; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; saved_caches_directory=/home/Datastax/saved_caches; seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, parameters=[{seeds=172.16.16.250,202.129.198.236}]}]; server_encryption_options=<REDACTED>; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=true; storage_port=7000; thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; write_request_timeout_in_ms=2000]
INFO 12:38:25 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO 12:38:25 Global memtable on-heap threshold is enabled at 479MB
INFO 12:38:25 Global memtable off-heap threshold is enabled at 479MB
INFO 12:38:25 Detected search service is enabled, setting my workload to Search
INFO 12:38:25 Detected search service is enabled, setting my DC to Solr
INFO 12:38:25 Initialized DseDelegateSnitch with workload Search, delegating to com.datastax.bdp.snitch.DseSimpleSnitch
INFO 12:38:26 Loading settings from file:/home/Datastax/dse/resources/cassandra/conf/cassandra.yaml
INFO 12:38:26 Node configuration:[authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_snapshot=true; batch_size_warn_threshold_in_kb=64; batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; client_encryption_options=<REDACTED>; cluster_name=Cassandra Cluster; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_directory=/home/Datastax/commitlog; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_period_in_ms=10000; compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; data_file_directories=[/home/Datastax/data]; disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; endpoint_snitch=com.datastax.bdp.snitch.DseSimpleSnitch; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; internode_compression=dc; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=172.16.16.250; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers; native_transport_port=9042; num_tokens=256; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_validity_in_ms=2000; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_timeout_in_ms=10000; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=172.16.16.250; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; saved_caches_directory=/home/Datastax/saved_caches; seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, parameters=[{seeds=172.16.16.250,202.129.198.236}]}]; server_encryption_options=<REDACTED>; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=true; storage_port=7000; thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; write_request_timeout_in_ms=2000]
INFO 12:38:26 Using Solr-enabled cql queries
INFO 12:38:26 CFS operations enabled
INFO 12:38:27 UserLatencyTracking plugin using 1 async writers
INFO 12:38:27 Initializing user/object io tracker plugin
INFO 12:38:27 Initializing CQL slow query log plugin
INFO 12:38:27 Solr node health tracking is not enabled
INFO 12:38:27 Solr latency snapshots are not enabled
INFO 12:38:27 Solr slow sub-query log is not enabled
INFO 12:38:27 Solr indexing error log is not enabled
INFO 12:38:27 Solr update handler metrics are not enabled
INFO 12:38:27 Solr request handler metrics are not enabled
INFO 12:38:27 Solr index statistics reporting is not enabled
INFO 12:38:27 Solr cache statistics reporting is not enabled
INFO 12:38:27 Initializing Solr slow query log plugin...
INFO 12:38:27 Initializing Solr document validation error log plugin...
INFO 12:38:27 CqlSystemInfo plugin using 1 async writers
INFO 12:38:27 ClusterSummaryStats plugin using 8 async writers
INFO 12:38:27 DbSummaryStats plugin using 8 async writers
INFO 12:38:27 HistogramDataTables plugin using 8 async writers
INFO 12:38:27 ResourceLatencyTracking plugin using 8 async writers
INFO 12:38:27 Setting TTL to 604800
INFO 12:38:27 Setting TTL to 604800
INFO 12:38:27 Setting TTL to 604800
INFO 12:38:27 Setting TTL to 604800
INFO 12:38:27 Setting TTL to 604800
INFO 12:38:27 DSE version: 4.7.0
INFO 12:38:27 Hadoop version: 1.0.4.15
INFO 12:38:27 Hive version: 0.12.0.7
INFO 12:38:27 Pig version: 0.10.1
INFO 12:38:27 Solr version: 4.10.3.0.6
INFO 12:38:27 Sqoop version: 1.4.5.15.1
INFO 12:38:27 Mahout version: 0.8
INFO 12:38:27 Appender version: 3.1.0
INFO 12:38:27 Spark version: 1.2.1.2
INFO 12:38:27 Shark version: 1.1.1
INFO 12:38:27 Hive metastore version: 1
INFO 12:38:27 CQL slow log is enabled
INFO 12:38:27 CQL system info tables are not enabled
INFO 12:38:27 Resource level latency tracking is not enabled
INFO 12:38:27 Database summary stats are not enabled
INFO 12:38:27 Cluster summary stats are not enabled
INFO 12:38:27 Histogram data tables are not enabled
INFO 12:38:27 User level latency tracking is not enabled
INFO 12:38:27 Spark cluster info tables are not enabled
INFO 12:38:27 Using com.datastax.bdp.cassandra.cql3.DseQueryHandler as query handler for native protocol queries (as requested with -Dcassandra.custom_query_handler_class)
INFO 12:38:28 Initializing system.schema_triggers
ERROR 12:38:31 Failed managing commit log segments. Commit disk failure policy is stop; terminating thread
org.apache.cassandra.io.FSWriteError: java.io.FileNotFoundException: /home/Datastax/commitlog/CommitLog-4-1432643911014.log (Permission denied)
Could someone point out how I can fix this error?
Answer 0 (score: 1):
This is probably a permissions problem on the parent Datastax directory. On startup, DSE will try to create its log file (system.log), and it will fail if the permissions on the parent directory are not set correctly (a quick way to check this is sketched after the list below). Can you provide more information about:
- the installation method (standalone installer or tarball)
- the DSE version
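In the meantime, here is a minimal sketch of what to check. It assumes the node runs as the dba user shown in your prompt and that the log, data, and commitlog directories all live under /home/Datastax as your output indicates; the existence of a dba group and of sudo access are assumptions, so adjust to your environment:

# Check who owns the directories DSE needs to write to (paths taken from your log output)
[dba@support dse]$ ls -ld /home/Datastax /home/Datastax/log/cassandra /home/Datastax/commitlog /home/Datastax/data
# If they belong to root (for example, created with sudo during installation),
# hand them over to the user that actually runs DSE
[dba@support dse]$ sudo chown -R dba:dba /home/Datastax
# Then try starting the node with Solr enabled again
[dba@support dse]$ bin/dse cassandra -s

If the ownership already looks right, also confirm that the dba user has write permission on those directories, since a directory owned by another user with mode drwxr-xr-x would still produce the same "Permission denied" errors.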