Redis HA Helm chart error - NOREPLICAS Not enough good replicas to write

Date: 2019-03-26 20:33:39

Tags: redis kubernetes kubernetes-helm

I am trying to set up the redis-ha Helm chart on my local Kubernetes cluster (Docker for Windows).

The Helm values file I am using is:

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
image:
  repository: redis
  tag: 5.0.3-alpine
  pullPolicy: IfNotPresent
## replicas number for each component
replicas: 3

## Custom labels for the redis pod
labels: {}

## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  ##
  create: false
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the redis-ha.fullname template
  # name:

## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##

rbac:
  create: false

## Redis specific configuration options
redis:
  port: 6379
  masterGroupName: mymaster
  config:
    ## Additional redis conf options can be added below
    ## For all available options see http://download.redis.io/redis-stable/redis.conf
    min-slaves-to-write: 1
    min-slaves-max-lag: 5   # Value in seconds
    maxmemory: "0"       # Max memory to use for each redis instance. Default is unlimited.
    maxmemory-policy: "volatile-lru"  # Max memory policy to use for each redis instance. Default is volatile-lru.
    # Determines if scheduled RDB backups are created. Default is false.
    # Please note that local (on-disk) RDBs will still be created when re-syncing with a new slave. The only way to prevent this is to enable diskless replication.
    save: "900 1"
    # When enabled, directly sends the RDB over the wire to slaves, without using the disk as intermediate storage. Default is false.
    repl-diskless-sync: "yes"
    rdbcompression: "yes"
    rdbchecksum: "yes"

  ## Custom redis.conf files used to override default settings. If this file is
  ## specified then the redis.config above will be ignored.
  # customConfig: |-
      # Define configuration here

  resources: 
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 700Mi
      cpu: 250m

## Sentinel specific configuration options
sentinel:
  port: 26379
  quorum: 2
  config:
    ## Additional sentinel conf options can be added below. Only options that
    ## are expressed in the format similar to 'sentinel xxx mymaster xxx' will
    ## be properly templated.
    ## For available options see http://download.redis.io/redis-stable/sentinel.conf
    down-after-milliseconds: 10000
    ## Failover timeout value in milliseconds
    failover-timeout: 180000
    parallel-syncs: 5

  ## Custom sentinel.conf files used to override default settings. If this file is
  ## specified then the sentinel.config above will be ignored.
  # customConfig: |-
      # Define configuration here

  resources: 
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 200Mi
      cpu: 250m

securityContext:
  runAsUser: 1000
  fsGroup: 1000
  runAsNonRoot: true

## Node labels, affinity, and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}

# Prometheus exporter specific configuration options
exporter:
  enabled: false
  image: oliver006/redis_exporter
  tag: v0.31.0
  pullPolicy: IfNotPresent

  # prometheus port & scrape path
  port: 9121
  scrapePath: /metrics

  # cpu/memory resource limits/requests
  resources: {}

  # Additional args for redis exporter
  extraArgs: {}

podDisruptionBudget: {}
  # maxUnavailable: 1
  # minAvailable: 1

## Configures redis with AUTH (requirepass & masterauth conf params)
auth: false
# redisPassword:

## Use existing secret containing "auth" key (ignores redisPassword)
# existingSecret:

persistentVolume:
  enabled: true
  ## redis-ha data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 1Gi
  annotations: {}
init:
  resources: {}

# To use a hostPath for data, set persistentVolume.enabled to false
# and define hostPath.path.
# Warning: this might overwrite existing folders on the host system!
hostPath:
  ## path is evaluated as template so placeholders are replaced
  # path: "/data/{{ .Release.Name }}"

  # if chown is true, an init-container with root permissions is launched to
  # change the owner of the hostPath folder to the user defined in the
  # security context
  chown: true
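With a values file like the above (saved as `values.yaml`, an assumed filename), the chart install would look roughly like the following sketch. Helm 2 syntax is assumed, matching the era of this question, and `rc` is an assumed release name consistent with the `rc-redis-ha-*` resources shown below:

```shell
# Sketch only: requires a running cluster and the stable chart repo.
# "rc" is an assumed release name; adjust to your environment.
helm install stable/redis-ha --name rc -f values.yaml
```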

The redis-ha chart deploys correctly. When I run kubectl get all I see:

NAME                       READY     STATUS    RESTARTS   AGE
pod/rc-redis-ha-server-0   2/2       Running   0          1h
pod/rc-redis-ha-server-1   2/2       Running   0          1h
pod/rc-redis-ha-server-2   2/2       Running   0          1h

NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
service/kubernetes               ClusterIP   10.96.0.1        <none>        443/TCP              23d
service/rc-redis-ha              ClusterIP   None             <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-0   ClusterIP   10.105.187.154   <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-1   ClusterIP   10.107.36.58     <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-2   ClusterIP   10.98.38.214     <none>        6379/TCP,26379/TCP   1h

NAME                                  DESIRED   CURRENT   AGE
statefulset.apps/rc-redis-ha-server   3         3         1h

I am trying to access redis-ha from a Java application that connects to Redis using the Lettuce driver. The sample Java code that accesses Redis:

package io.c12.bala.lettuce;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

import java.util.logging.Logger;


public class RedisClusterConnect {

    private static final Logger logger = Logger.getLogger(RedisClusterConnect.class.getName());
    public static void main(String[] args) {
        logger.info("Starting test");

        // Syntax: redis-sentinel://[password@]host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId
        RedisClient redisClient = RedisClient.create("redis-sentinel://rc-redis-ha:26379/0#mymaster");
        StatefulRedisConnection<String, String> connection = redisClient.connect();


        RedisCommands<String, String> command = connection.sync();
        command.set("Hello", "World");
        logger.info("Ran set command successfully");
        logger.info("Value from Redis - " + command.get("Hello"));

        connection.close();
        redisClient.shutdown();
    }
}

I packaged the application as a runnable jar, built a container image, and deployed it to the same Kubernetes cluster where Redis is running. The application now throws this error:

Exception in thread "main" io.lettuce.core.RedisCommandExecutionException: NOREPLICAS Not enough good replicas to write.
        at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:135)
        at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:122)
        at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:69)
        at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
        at com.sun.proxy.$Proxy0.set(Unknown Source)
        at io.c12.bala.lettuce.RedisClusterConnect.main(RedisClusterConnect.java:22)
Caused by: io.lettuce.core.RedisCommandExecutionException: NOREPLICAS Not enough good replicas to write.
        at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:135)
        at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:108)
        at io.lettuce.core.protocol.AsyncCommand.completeResult(AsyncCommand.java:120)
        at io.lettuce.core.protocol.AsyncCommand.complete(AsyncCommand.java:111)
        at io.lettuce.core.protocol.CommandHandler.complete(CommandHandler.java:646)
        at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:604)
        at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:556)

I also tried the Jedis driver with a Spring Boot application and got the same error from the redis-ha cluster.

**Update** When I run the info command in redis-cli, I get:

connected_slaves:2
min_slaves_good_slaves:0

It seems the slaves are not behaving correctly. When I switch to min-slaves-to-write: 0, I am able to read from and write to the Redis cluster.
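This matches Redis's write guard: writes are refused when `min_slaves_good_slaves` (replicas within `min-slaves-max-lag`) falls below `min-slaves-to-write`. A minimal shell sketch over a captured INFO snippet (the values mirror the output above) reproduces the decision:

```shell
# Decide whether writes would be rejected, given a captured
# "INFO replication" snippet and the chart's min-slaves-to-write=1.
info='connected_slaves:2
min_slaves_good_slaves:0'

# Extract the number of "good" (low-lag) slaves from the snippet.
good=$(printf '%s\n' "$info" | awk -F: '/^min_slaves_good_slaves/ {print $2}')
min_to_write=1   # value set in the chart's redis.config

if [ "$good" -lt "$min_to_write" ]; then
  echo "writes rejected: NOREPLICAS"   # this branch matches the error above
else
  echo "writes accepted"
fi
```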

Any help on this is appreciated.

3 Answers:

Answer 0 (score: 2):

It seems you have to edit the redis-ha ConfigMap and set the following; after deleting all the Redis pods (so the change takes effect), it worked like a charm:

min-slaves-to-write 0
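A sketch of that edit, simulated locally on the redis.conf fragment the chart renders; in the cluster the same line lives in the redis-ha ConfigMap, whose exact name depends on the release name (the names in the comments are illustrative):

```shell
# Recreate the relevant redis.conf fragment locally.
cat > redis-fragment.conf <<'EOF'
min-slaves-to-write 1
min-slaves-max-lag 5
EOF

# Flip min-slaves-to-write to 0, as the answer suggests.
sed 's/^min-slaves-to-write .*/min-slaves-to-write 0/' redis-fragment.conf > patched.conf
grep '^min-slaves-to-write' patched.conf   # min-slaves-to-write 0

# In the cluster, the equivalent steps would be (names illustrative):
#   kubectl edit configmap rc-redis-ha-configmap   # set min-slaves-to-write 0
#   kubectl delete pod -l app=redis-ha             # restart pods to apply it
```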

Answer 1 (score: 0):

When I deploy the same Helm chart with the same values to a Kubernetes cluster running on AWS, it works fine.

The problem appears to be specific to Kubernetes on Docker for Windows.

Answer 2 (score: 0):

If you deploy this Helm chart locally on your machine, only one node is available. If you install the chart with --set hardAntiAffinity=false, it will place all the required replica containers on the same node, so they start up correctly and the error does not occur. The documented default for hardAntiAffinity is true:

Whether the Redis server pods should be forced to run on separate nodes.
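With that flag, the install command from the question would become roughly the following sketch (Helm 2 syntax assumed; `rc` and `values.yaml` are assumed names):

```shell
# Sketch only: relaxes the pod anti-affinity so all replicas can
# schedule onto a single local node.
helm install stable/redis-ha --name rc -f values.yaml \
  --set hardAntiAffinity=false
```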