Creating a Docker container from a predefined Redis dump

Date: 2019-05-10 19:55:09

Tags: docker redis

I am trying to create a Redis Docker container pre-loaded with data. My approach is inspired by this question, but for some reason it does not work.

Here is my Dockerfile:

FROM redis

EXPOSE 6379

COPY redis-dump.csv /

RUN nohup bash -c "redis-server --appendonly yes" & sleep 5s \
    && cat /redis-dump.csv | redis-cli --pipe \
    && redis-cli shutdown save \
    && ls /data
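For reference, the RUN line depends on shell job control: the single & backgrounds the server, sleep 5s gives it time to boot, and the && chain then talks to it. The same pattern in plain shell, with sleep and echo standing in for redis-server and the import steps:

```shell
# Background a stand-in "server", give it time to start, then run the
# dependent steps in sequence; wait reaps the background job at the end.
(sleep 1; echo "server stopped") &
sleep 2 \
    && echo "import step ran" \
    && wait
```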

And the docker-compose.yml:

version: '3.3'

volumes:
  redisdata:

services:
  redis:
    build:
      context: docker/redis
    volumes:
      - redisdata:/data
    ports:
      - "6379:6379"

When the container is created, Redis is empty. When I attach to the container, the /data directory is empty as well. But when Docker builds the image, the log shows dump.rdb and appendonly.aof files being created, and the dump file is present in the container. When I run cat /redis-dump.csv | redis-cli --pipe inside the running container, the data does become available in Redis. So the question is: why are there no db files?

Here is the full log from creating the container:

Creating network "restapi_default" with the default driver
Creating volume "restapi_redisdata" with default driver
Building redis
Step 1/4 : FROM redis
 ---> a55fbf438dfd
Step 2/4 : EXPOSE 6379
 ---> Using cache
 ---> 2e6e5609b5b3
Step 3/4 : COPY redis-dump.csv /
 ---> Using cache
 ---> 39330e43e72a
Step 4/4 : RUN nohup bash -c "redis-server --appendonly yes" & sleep 5s     && cat /redis-dump.csv | redis-cli --pipe     && redis-cli shutdown save     && ls /data
 ---> Running in 7e290e6a46ce
7:C 10 May 2019 19:45:32.509 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
7:C 10 May 2019 19:45:32.509 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=7, just started
7:C 10 May 2019 19:45:32.509 # Configuration loaded
7:M 10 May 2019 19:45:32.510 * Running mode=standalone, port=6379.
7:M 10 May 2019 19:45:32.510 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
7:M 10 May 2019 19:45:32.510 # Server initialized
7:M 10 May 2019 19:45:32.510 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
7:M 10 May 2019 19:45:32.510 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
7:M 10 May 2019 19:45:32.511 * Ready to accept connections
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 67600
7:M 10 May 2019 19:45:37.750 # User requested shutdown...
7:M 10 May 2019 19:45:37.750 * Calling fsync() on the AOF file.
7:M 10 May 2019 19:45:37.920 * Saving the final RDB snapshot before exiting.
7:M 10 May 2019 19:45:37.987 * DB saved on disk
7:M 10 May 2019 19:45:37.987 # Redis is now ready to exit, bye bye...
appendonly.aof
dump.rdb
Removing intermediate container 7e290e6a46ce
 ---> 1f1cd024e68f

Successfully built 1f1cd024e68f
Successfully tagged restapi_redis:latest
Creating restapi_redis_1 ... done

Here is a sample of the data:

SET user:id:35 85.214.132.117
SET user:id:66 85.214.132.117
SET user:id:28 85.214.132.117
SET user:id:40 85.214.132.117
SET user:id:17 85.214.132.117
SET user:id:63 85.214.132.117
SET user:id:67 85.214.132.117
SET user:id:45 85.214.132.117
SET user:id:23 85.214.132.117
SET user:id:79 85.214.132.117
SET user:id:26 85.214.132.117
SET user:id:94 85.214.132.117
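For completeness, such a dump file is just one Redis command per line, which redis-cli --pipe replays against the server. A hypothetical way to generate a file of this shape (key pattern and IP taken from the sample above):

```shell
# Generate a redis-dump.csv with one SET command per line, in the same
# shape as the sample data above.
for i in $(seq 1 5); do
    echo "SET user:id:$i 85.214.132.117"
done > redis-dump.csv
cat redis-dump.csv
```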

1 Answer:

Answer 0 (score: 2)

You have to remove the volume before starting the container:

docker volume rm redisdata

Then change your Dockerfile to the following:

FROM redis

EXPOSE 6379

COPY redis-dump.csv /

ENTRYPOINT nohup bash -c "redis-server --appendonly yes" & sleep 5s \
    && cat /redis-dump.csv | redis-cli --pipe \
    && redis-cli save \
    && redis-cli shutdown \
    && ls /data

For faster results, I suggest mapping the volume to a local folder:

version: '3.3'

services:
  redis:
    build:
      context: .
    volumes:
      - ./redisdata:/data
    ports:
      - "6379:6379"

Once you have seen it work, you can switch back to a regular Docker volume.

Now run:

docker-compose build
docker-compose up -d

The container will start and then stop again gracefully, because once the import chain finishes no foreground process is left running. But the data will be present in the data folder.
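Assuming the local-folder mapping from the compose file above, the result of an import run can then be checked from the host (hypothetical paths, adjust to your project):

```shell
# After docker-compose up has run the import and the container has exited:
ls ./redisdata    # should list dump.rdb and appendonly.aof
```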

In general, when working with databases, you should populate them in a running container, not in the image.

After some discussion, we decided to use a multi-stage build:

FROM redis as import

EXPOSE 6379

COPY redis-dump.csv /

RUN mkdir /mydata

RUN nohup bash -c "redis-server --appendonly yes" & sleep 5s \
    && cat /redis-dump.csv | redis-cli --pipe \
    && redis-cli save \
    && redis-cli shutdown \
    && cp /data/* /mydata/

RUN ls /mydata

FROM redis

COPY --from=import /mydata /data
COPY --from=import /mydata /mydata

RUN ls /data

CMD ["redis-server", "--appendonly", "yes"]

The first stage (import) is almost the same as what was originally posted. Since we noticed that the files in /data were deleted after the last RUN command, we copied them into another folder, /mydata.

The second stage uses the same base image, but it copies only what it needs from the previous stage: the data in /mydata. It places that data in the /data folder and then starts the redis server.
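The reason the detour through /mydata is needed: the official redis image declares VOLUME /data, and Docker discards build-time changes made to a declared volume path when each RUN step's intermediate container is removed. A minimal illustration of the effect (not the project's Dockerfile):

```dockerfile
FROM redis
# The file is visible inside the same RUN step...
RUN touch /data/probe && ls /data
# ...but /data is empty again in the next step, because the base image
# declares it as a VOLUME.
RUN ls /data
```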