Redis pipeline stores data in different DBs

Time: 2018-08-28 16:58:04

Tags: redis jedis

Several of our microservices share the same Redis cluster, each using a different Redis database on it. My microservice is a Java Spring Boot application that talks to Redis through the Jedis library. It runs a scheduled task every 4 hours that fetches data from a third-party service and stores the received data temporarily in the Redis cache. I use a pipeline so that all of the data is written to the Redis server in a single round trip.

In my local environment everything works fine, but in the deployed environment we hit a serious problem: sometimes part of the data ends up in a different Redis DB. One part is stored in the intended database (DB4 in my case), while the rest lands in what is presumably the default Redis DB (DB0).

Here is my implementation of RedisClient.java:


This is how I use it to store the data (some fairly large sets of items, more than 1,000 entries):

private Integer timeout = 2000;
private JedisPool jedisPool;

@PostConstruct
private void init() {
    final JedisPoolConfig poolConfig = new JedisPoolConfig();
    poolConfig.setMaxTotal(15);
    poolConfig.setTestOnBorrow(true);
    poolConfig.setBlockWhenExhausted(true);
    jedisPool = new JedisPool(poolConfig, host, port, timeout);
}

@PreDestroy
public void destroy() {
    jedisPool.destroy();
}

public Pipeline getPipeline(int dbIndex) {
    Pipeline pipeline = null;
    try (Jedis jedis = jedisPool.getResource()) {
        jedis.select(dbIndex);
        pipeline = jedis.pipelined();
    } catch (Exception e) {
        log.error("failed on getPipeline, db index {}", dbIndex, e);
    }
    return pipeline;
}
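For illustration, a minimal sketch of the kind of pipelined bulk write described above, assuming the same `jedisPool` field; the `storeAll` method name, the key/value types, and the `items` map are hypothetical, not the actual scheduled-task code:

```java
import java.util.Map;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

// Sketch: queue all writes while the borrowed connection is still open,
// then flush them with sync() before the connection goes back to the pool.
public void storeAll(int dbIndex, Map<String, String> items) {
    try (Jedis jedis = jedisPool.getResource()) {
        jedis.select(dbIndex);               // every queued command targets dbIndex
        Pipeline pipeline = jedis.pipelined();
        for (Map.Entry<String, String> entry : items.entrySet()) {
            pipeline.set(entry.getKey(), entry.getValue());
        }
        pipeline.sync();                     // one round trip flushes all commands
    }
}
```

Note that a `Pipeline` sends its commands over the underlying `Jedis` connection, so `sync()` has to run before that connection is returned to the pool.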

How can this problem happen? What am I doing wrong? And what other efficient ways are there to store large amounts of data in Redis?

0 Answers

There are no answers yet.