I am trying to create a Cassandra database on a single-node cluster (I think), but no matter what value I set the replication factor to, I get this error:
me.prettyprint.hector.api.exceptions.HUnavailableException: May not be enough replicas present to handle consistency level.
Here is my code:
public static String[] getSerializedClusterMap() {
    Cluster cluster = HFactory.getOrCreateCluster("TestCluster", "localhost:9160");
    // Keyspace keyspace = HFactory.createKeyspace("KMeans", cluster);
    KeyspaceDefinition keyspaceDefinition = cluster.describeKeyspace("myKeyspace");
    // Create the keyspace (with one column family) if it does not exist yet.
    if (keyspaceDefinition == null) {
        ColumnFamilyDefinition columnFamilyDefinition =
                HFactory.createColumnFamilyDefinition("myKeyspace", "clusters", ComparatorType.BYTESTYPE);
        KeyspaceDefinition keyspaceDefinition1 =
                HFactory.createKeyspaceDefinition("myKeyspace", ThriftKsDef.DEF_STRATEGY_CLASS,
                        1, Arrays.asList(columnFamilyDefinition));
        cluster.addKeyspace(keyspaceDefinition1, true);
    }
    Keyspace keyspace = HFactory.createKeyspace("myKeyspace", cluster);
    Mutator<String> mutator =
            HFactory.createMutator(keyspace, me.prettyprint.cassandra.serializers.StringSerializer.get());
    String[] serializedMap = new String[2];
    String[] clusters = {"cluster-0", "cluster-1"};
    try {
        // Read one column per cluster name from the "user" column family.
        me.prettyprint.hector.api.query.ColumnQuery<String, String, String> columnQuery =
                HFactory.createStringColumnQuery(keyspace);
        for (int i = 0; i < clusters.length; i++) {
            columnQuery.setColumnFamily("user").setKey("cluster").setName(clusters[i]);
            QueryResult<HColumn<String, String>> result = columnQuery.execute();
            serializedMap[i] = result.get().getValue();
        }
    } catch (HectorException ex) {
        ex.printStackTrace();
    }
    return serializedMap;
}
Any suggestions on what I should do, or what value the replication factor should be?
After running 'use myKeyspace;' and 'describe;', the output is:
Keyspace: myKeyspace:
Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
Durable Writes: true
Options: [replication_factor:3]
Column Families:
ColumnFamily: user
Key Validation Class: org.apache.cassandra.db.marshal.BytesType
Default column value validator: org.apache.cassandra.db.marshal.BytesType
Cells sorted by: org.apache.cassandra.db.marshal.BytesType
GC grace seconds: 864000
Compaction min/max thresholds: 4/32
Read repair chance: 1.0
DC Local Read repair chance: 0.0
Populate IO Cache on flush: false
Replicate on write: true
Caching: KEYS_ONLY
Bloom Filter FP chance: default
Built indexes: []
Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
Compression Options:
sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
Answer (score: 1):
Your keyspace is configured with an RF of 3:
Options: [replication_factor:3]
On a 1-node cluster, QUORUM cannot be reached, since it requires at least 2 replicas. Change your RF to 1, or use consistency level ONE.
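For example, with Hector you can attach a ConfigurableConsistencyLevel policy so reads and writes run at CL ONE instead of the default QUORUM. This is a minimal sketch against Hector's API, reusing the keyspace name from the question; the class and method names below are made up for illustration:

    import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.HConsistencyLevel;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;

    public class OneNodeKeyspaceExample {
        // Returns a Keyspace whose queries run at consistency level ONE,
        // so a single live replica (your one node) is enough to serve them.
        public static Keyspace keyspaceWithConsistencyOne(Cluster cluster) {
            ConfigurableConsistencyLevel policy = new ConfigurableConsistencyLevel();
            policy.setDefaultReadConsistencyLevel(HConsistencyLevel.ONE);
            policy.setDefaultWriteConsistencyLevel(HConsistencyLevel.ONE);
            return HFactory.createKeyspace("myKeyspace", cluster, policy);
        }
    }

Alternatively, lower the replication_factor of the existing keyspace to 1 on the server side (for example with ALTER KEYSPACE in cqlsh or update keyspace in cassandra-cli). Note that the createKeyspaceDefinition(..., 1, ...) call in your code only runs when the keyspace does not exist yet, so it never changes the RF of the keyspace that is already there with replication_factor 3.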