"Max depth exceeded" error when creating a Docker image

Date: 2017-02-06 06:24:51

Tags: docker dockerfile

I am trying to create a Docker image by extending sequenceiq/hadoop-docker:2.7.1. My Dockerfile has 73 layers in total, but while building the last layer it throws an error:

max depth exceeded

I have tried reinstalling Docker on my system and also deleted the previously created images and containers from the Docker directory, but the error persists.
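For context: every Dockerfile instruction produces one image layer, and older storage drivers cap the layer depth (AUFS historically at 127), so a Dockerfile with 73 instructions on top of a base image that already contains many layers can hit this limit. A common workaround, sketched below against the Tomcat section of the Dockerfile in this question, is chaining consecutive RUN commands with `&&` so they yield a single layer:

```dockerfile
# Hypothetical sketch: the three Tomcat user/permission steps
# collapsed into one RUN instruction (one layer instead of three).
RUN groupadd tomcat && \
    useradd -s /bin/bash -g tomcat tomcat && \
    chown -Rf tomcat:tomcat /usr/share/tomcat7
```

Applying the same pattern to the other groups of RUN instructions would cut the instruction count substantially without changing the resulting filesystem.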

Update: Dockerfile:

FROM centos:6
MAINTAINER user1

RUN yum -y install sudo

######## JDK8 ######## 

RUN echo "y" | sudo yum remove jdk.x86_64
ADD jars/jdk-default-linux-x64.tar.gz /usr/share
WORKDIR /usr/share/
RUN mv jdk* jdk-default-linux-x64/
ENV JAVA_HOME /usr/share/jdk-default-linux-x64/


######## TOMCAT ######## 

ADD /jars/apache-tomcat-default.tar.gz /usr/share/
WORKDIR /usr/share/
RUN mv  apache-tomcat-* tomcat7
RUN echo "JAVA_HOME=/usr/share/jdk-default-linux-x64/" >> /etc/default/tomcat7
RUN groupadd tomcat
RUN useradd -s /bin/bash -g tomcat tomcat
RUN chown -Rf tomcat.tomcat /usr/share/tomcat7
EXPOSE 8080

######## MYSQL ######## 

RUN echo "y" | sudo yum install mysql-server.x86_64

######## ZOOKEEPER ######## 

ADD /jars/zookeeper-default.tar.gz /usr/share
RUN mv /usr/share/zookeeper-* /usr/share/zookeeper-default
RUN mkdir /usr/share/zookeeper-default/data
ADD /conf/zoo.cfg /usr/share/zookeeper-default/conf/
EXPOSE 2181

######## RR-INSTALLER ######## 

RUN mkdir /root/RR
ADD /jars/dds.zip /root/RR
RUN chmod 777 /root/RR/dds.zip
WORKDIR /root/RR/
RUN unzip dds.zip
ADD /conf/inst.properties /root/RR/installer/config/
RUN rm dds.zip 

######## RR-CLIENT ######## 

ADD /jars/ed.war /usr/share/tomcat7/webapps/
RUN mkdir /usr/share/tomcat7/webapps/val-backup/
ADD /jars/va11.Final.jar /usr/share/tomcat7/webapps/val-backup/
ADD /conf/cp.properties /usr/share/tomcat7/webapps/val-backup/


######## HIVE ######## 

ADD /jars/apache-hive-default-bin.tar.gz /usr/share/
RUN mv /usr/share/apache-hive-* /usr/share/apache-hive-default-bin/
ADD /conf/hive-site_spark.xml /usr/share/apache-hive-default-bin/conf/
RUN mv /usr/share/apache-hive-default-bin/conf/hive-site_spark.xml /usr/share/apache-hive-default-bin/conf/hive-site.xml

######## SPARK & SCALA ######## 

ADD /jars/spark-default-bin-hadoop-default.tgz /usr/share/
RUN mv /usr/share/spark-* /usr/share/spark-default-bin-hadoop-default/
ADD /conf/hive-site_spark.xml /usr/share/spark-default-bin-hadoop-default/conf/
RUN mv /usr/share/spark-default-bin-hadoop-default/conf/hive-site_spark.xml /usr/share/spark-default-bin-hadoop-default/conf/hive-site.xml

ADD /jars/scala-default.tgz /usr/share/
RUN mv /usr/share/scala-* /usr/share/scala-default/

######## APACHE IGNITE ########

ADD /jars/apache-ignite-fabric-default-bin.zip /usr/share/
WORKDIR /usr/share/
RUN unzip apache-ignite-fabric-default-bin.zip
RUN rm apache-ignite-fabric-default-bin.zip
RUN mv apache-ignite-* apache-ignite-fabric-default-bin/

######## APACHE FLUME ########
ADD /jars/apache-flume-default-bin.tar.gz /usr/share/
RUN mv /usr/share/apache-flume* /usr/share/apache-flume-default-bin/
ADD /conf/flume-env.sh /usr/share/apache-flume-default-bin/conf/

######## APACHE KAFKA ########
ADD /jars/kafka_default.tgz /usr/share/
RUN mv /usr/share/kafka_* /usr/share/kafka_default/ 


######## ROC-SERVER ########
ADD /jars/rsb4.zip /usr/share
WORKDIR /usr/share/
RUN unzip rsb4.zip
ADD /conf/cs.properties /usr/share/bootstrap/conf/


######## CONFIGURATION ######## 
ADD /jars/stc.jar /etc/
ADD bs.sh /etc/
ADD RI.sh /etc/
RUN chmod 777 /etc/bs.sh
ADD /jars/rm.zip /usr/share/
WORKDIR /usr/share/
RUN unzip rm.zip
ADD /conf/cs.properties rssd/conf/
RUN chmod 777 /etc/RI.sh
WORKDIR /etc/
RUN ./RI.sh

ADD /jars/rm.jar /usr/RR/
ADD /conf/main.env /etc/

######## SOLR ########
ADD /jars/solr-default.tgz /usr/share/
RUN mv /usr/share/solr* /usr/share/solr-default/
EXPOSE 8983

I also tried to squash the image with docker-squash, using the following command:

docker save 49b5a7a88d5 | sudo docker-squash -t squash -verbose | docker load

But it does not seem to work.
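If squashing keeps failing, another way to shave layers, sketched here assuming the archive paths from the Dockerfile above stay the same, is to pass several archives to a single ADD (each tar source is still auto-extracted into the destination) and merge the follow-up `mv` commands into one chained RUN:

```dockerfile
# Hypothetical sketch: one ADD extracts both archives (one layer),
# and one chained RUN renames both directories (one more layer),
# replacing four instructions with two.
ADD jars/apache-flume-default-bin.tar.gz jars/kafka_default.tgz /usr/share/
RUN mv /usr/share/apache-flume* /usr/share/apache-flume-default-bin/ && \
    mv /usr/share/kafka_* /usr/share/kafka_default/
```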

Is there any solution to this problem?

0 Answers:

No answers yet.