Error: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "

Time: 2020-07-12 16:13:38

Tags: docker

I have a big favor to ask: I have been stuck on this for the past few days, and it would be great if someone could help. On to the problem: I have set up Docker and a Docker container for Apache Spark using the code below.


Dockerfile:

    FROM debian:stretch
    MAINTAINER Getty Images "https://github.com/gettyimages"

    RUN apt-get update \
     && apt-get install -y locales \
     && dpkg-reconfigure -f noninteractive locales \
     && locale-gen C.UTF-8 \
     && /usr/sbin/update-locale LANG=C.UTF-8 \
     && echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen \
     && locale-gen \
     && apt-get clean \
     && rm -rf /var/lib/apt/lists/*

    # Users with other locales should set this in their derivative image
    ENV LANG en_US.UTF-8
    ENV LANGUAGE en_US:en
    ENV LC_ALL en_US.UTF-8

    RUN apt-get update \
     && apt-get install -y curl unzip \
        python3 python3-setuptools \
     && ln -s /usr/bin/python3 /usr/bin/python \
     && easy_install3 pip py4j \
     && apt-get clean \
     && rm -rf /var/lib/apt/lists/*

    # http://blog.stuart.axelbrooke.com/python-3-on-spark-return-of-the-pythonhashseed
    ENV PYTHONHASHSEED 0
    ENV PYTHONIOENCODING UTF-8
    ENV PIP_DISABLE_PIP_VERSION_CHECK 1

    # JAVA
    RUN apt-get update \
     && apt-get install -y openjdk-8-jre \
     && apt-get clean \
     && rm -rf /var/lib/apt/lists/*

    # HADOOP
    ENV HADOOP_VERSION 3.0.0
    ENV HADOOP_HOME /usr/hadoop-$HADOOP_VERSION
    ENV HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
    ENV PATH $PATH:$HADOOP_HOME/bin
    RUN curl -sL --retry 3 \
      "http://archive.apache.org/dist/hadoop/common/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz" \
      | gunzip \
      | tar -x -C /usr/ \
     && rm -rf $HADOOP_HOME/share/doc \
     && chown -R root:root $HADOOP_HOME

    # SPARK
    ENV SPARK_VERSION 2.4.1
    ENV SPARK_PACKAGE spark-${SPARK_VERSION}-bin-without-hadoop
    ENV SPARK_HOME /usr/spark-${SPARK_VERSION}
    ENV SPARK_DIST_CLASSPATH="$HADOOP_HOME/etc/hadoop/*:$HADOOP_HOME/share/hadoop/common/lib/*:$HADOOP_HOME/share/hadoop/common/*:$HADOOP_HOME/share/hadoop/hdfs/*:$HADOOP_HOME/share/hadoop/hdfs/lib/*:$HADOOP_HOME/share/hadoop/hdfs/*:$HADOOP_HOME/share/hadoop/yarn/lib/*:$HADOOP_HOME/share/hadoop/yarn/*:$HADOOP_HOME/share/hadoop/mapreduce/lib/*:$HADOOP_HOME/share/hadoop/mapreduce/*:$HADOOP_HOME/share/hadoop/tools/lib/*"
    ENV PATH $PATH:${SPARK_HOME}/bin
    RUN curl -sL --retry 3 \
      "https://archive.apache.org/dist/spark/spark-${SPARK_VERSION}/${SPARK_PACKAGE}.tgz" \
      | gunzip \
      | tar -x -C /usr/ \
     && mv /usr/$SPARK_PACKAGE $SPARK_HOME \
     && chown -R root:root $SPARK_HOME

    WORKDIR $SPARK_HOME
    CMD ["bin/spark-class", "org.apache.spark.deploy.master.Master"]

Command:

ubuntu@ip-123.43.11.136:~$ sudo docker run -it --rm  -v $(pwd):/home/ubuntu sparkimage /home/ubuntu bin/spark-submit ./count.py

The following error appears:

Error :-  Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"/home/ubuntu\": permission denied": unknown.

Can anyone help me with this? I have gone through several links but still had no luck resolving the issue.

ERRO[0001] error waiting for the container: context cancelled

2 answers:

Answer 0 (score: 0):

Anything passed after the image name sparkimage is treated as an argument to the Docker entrypoint.

For example, given

ENTRYPOINT ["node"]

then when you run

docker run -it my_image app.js 

app.js becomes the argument to node, so the container starts as node app.js; Docker treats everything after the image name in docker run this way.
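As a minimal sketch of this rule (the image tag my_image and both script names are hypothetical), an ENTRYPOINT fixes the executable while anything after the image name replaces the default CMD:

    # Hypothetical Dockerfile: ENTRYPOINT fixes the program, CMD is only a default argument
    FROM node:18-slim
    ENTRYPOINT ["node"]
    CMD ["server.js"]

    # docker build -t my_image .
    # docker run my_image           -> runs: node server.js
    # docker run my_image app.js    -> runs: node app.js

Without an ENTRYPOINT, the first argument after the image name is itself taken as the program to execute.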

Since there is no ENTRYPOINT in your Dockerfile, the /home/ubuntu you pass in the docker run command is taken as the program to execute inside the container. /home/ubuntu is a directory, not an executable, which is why the runtime throws the permission denied error.
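You can reproduce the same failure with any directory, independent of Spark; a quick check (using the stock debian:stretch base from the Dockerfile above) would look something like:

    docker run --rm debian:stretch /home
    # expected: OCI runtime create failed: ...
    # "exec: \"/home\": permission denied": unknown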

You can try either of these two combinations:

ENTRYPOINT ["bin/spark-class", "org.apache.spark.deploy.master.Master"]

in the Dockerfile, with the run command:

sudo docker run -it --rm  -v $(pwd):/home/ubuntu sparkimage /home/ubuntu bin/spark-submit ./count.py

OR

CMD ["bin/spark-class", "org.apache.spark.deploy.master.Master","/home/ubuntu bin/spark-submit ./count.py"]

in the Dockerfile, with the docker run command:

sudo docker run -it --rm  -v $(pwd):/home/ubuntu sparkimage


Answer 1 (score: 0):

The issue has been resolved. With the correct mount path in place, it executed and worked fine without any issue.
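For readers who hit the same error: a form consistent with this fix (a sketch, assuming count.py sits in the directory mounted at /home/ubuntu and the PATH/WORKDIR set in the Dockerfile above) keeps spark-submit as the program and passes the script by its mounted path:

    # spark-submit is the executable; the script is referenced by its path inside the container
    sudo docker run -it --rm -v $(pwd):/home/ubuntu sparkimage bin/spark-submit /home/ubuntu/count.py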