DC/OS Spark installation failed

Date: 2017-12-27 04:52:45

Tags: apache-spark dcos

I have a DC/OS 1.10 cluster on CentOS 7.3 on which HDFS installed successfully. When I tried to install Spark from the catalog in the web UI, I hit the same problem as this, which can be worked around by changing the user from nobody to root. The permission-denied errors are gone now, but Spark still will not run. The Spark logs show:
I1227 12:44:50.796188 16423 exec.cpp:162] Version: 1.4.0
I1227 12:44:50.806447 16426 exec.cpp:237] Executor registered on agent 27915a1f-052e-47b7-8db1-4ee24fe1a16b-S0
I1227 12:44:50.807636 16426 executor.cpp:120] Registered docker executor on 192.168.200.116
I1227 12:44:50.807708 16426 executor.cpp:160] Starting task spark.aaf1e752-eac0-11e7-85aa-da29fd6579a8
+ export DISPATCHER_PORT=6211
+ DISPATCHER_PORT=6211
+ export DISPATCHER_UI_PORT=6212
+ DISPATCHER_UI_PORT=6212
+ export SPARK_PROXY_PORT=6213
+ SPARK_PROXY_PORT=6213
+ SCHEME=http
+ OTHER_SCHEME=https
+ [[ '' == true ]]
+ export DISPATCHER_UI_WEB_PROXY_BASE=/service/spark
+ DISPATCHER_UI_WEB_PROXY_BASE=/service/spark
+ grep -v '#https#' /etc/nginx/conf.d/spark.conf.template
+ sed s,#http#,,
+ sed -i 's,<PORT>,6213,' /etc/nginx/conf.d/spark.conf
+ sed -i 's,<DISPATCHER_URL>,http://192.168.200.116:6211,' /etc/nginx/conf.d/spark.conf
+ sed -i 's,<DISPATCHER_UI_URL>,http://192.168.200.116:6212,' /etc/nginx/conf.d/spark.conf
+ sed -i 's,<PROTOCOL>,,' /etc/nginx/conf.d/spark.conf
+ [[ '' == true ]]
+ [[ -f hdfs-site.xml ]]
+ [[ -n '' ]]
+ exec runsvdir -P /etc/service
+ mkdir -p /mnt/mesos/sandbox/nginx
+ exec
+ mkdir -p /mnt/mesos/sandbox/spark
+ exec svlogd /mnt/mesos/sandbox/spark
+ exec svlogd /mnt/mesos/sandbox/nginx
+ exec
... (repeats about a hundred times)
+ exec
+ exec
I1227 12:48:53.038286 16427 executor.cpp:269] Received killTask for task spark.aaf1e752-eac0-11e7-85aa-da29fd6579a8
+ exec
+ exec
+ exec
+ exec
+ exec
+ exec
+ exec
I1227 12:49:03.042243 16427 executor.cpp:269] Received killTask for task spark.aaf1e752-eac0-11e7-85aa-da29fd6579a8
+ exec
+ exec
+ exec
+ exec
+ exec
W1227 12:49:03.042243 16423 logging.cpp:91] RAW: Received signal SIGTERM from process 4581 of user 0; exiting
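
For context, the health check in the options file below targets portIndex 2, which at runtime is the nginx proxy port (SPARK_PROXY_PORT=6213 in the trace above). A minimal sketch of how to reproduce what Marathon's health check sees, using the agent IP and ports taken from this log (adjust them to your own task's assignment):

curl -v http://192.168.200.116:6213/    # nginx proxy, the port the health check probes (portIndex 2)
curl -v http://192.168.200.116:6211/    # Spark dispatcher REST port (DISPATCHER_PORT)
curl -v http://192.168.200.116:6212/    # dispatcher UI port (DISPATCHER_UI_PORT)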

Here is the options file:

{
  "id": "/spark",
  "backoffFactor": 1.15,
  "backoffSeconds": 1,
  "cmd": "/sbin/init.sh",
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "mesosphere/spark:2.1.0-2.2.0-1-hadoop-2.6",
      "forcePullImage": true,
      "privileged": false,
      "parameters": [
        {
          "key": "user",
          "value": "root"
        }
      ]
    }
  },
  "cpus": 1,
  "disk": 0,
  "env": {
    "DCOS_SERVICE_NAME": "spark",
    "NO_BOOTSTRAP": "true",
    "SPARK_DISPATCHER_MESOS_ROLE": "*",
    "SPARK_USER": "root",
    "SPARK_LOG_LEVEL": "INFO"
  },
  "healthChecks": [
    {
      "gracePeriodSeconds": 5,
      "ignoreHttp1xx": false,
      "intervalSeconds": 60,
      "maxConsecutiveFailures": 3,
      "portIndex": 2,
      "timeoutSeconds": 10,
      "delaySeconds": 15,
      "protocol": "HTTP",
      "path": "/"
    }
  ],
  "instances": 1,
  "labels": {
    "DCOS_PACKAGE_OPTIONS": "xxx",
    "DCOS_SERVICE_SCHEME": "http",
    "DCOS_PACKAGE_SOURCE": "https://universe.mesosphere.com/repo",
    "DCOS_PACKAGE_METADATA": "xx",
    "DCOS_PACKAGE_VERSION": "2.1.0-2.2.0-1",
    "DCOS_PACKAGE_NAME": "spark"
  },
  "maxLaunchDelaySeconds": 3600,
  "mem": 1024,
  "gpus": 0,
  "networks": [
    {
      "mode": "host"
    }
  ],
  "portDefinitions": [
    {
      "protocol": "tcp",
      "port": 10001
    },
    {
      "protocol": "tcp",
      "port": 10002
    },
    {
      "protocol": "tcp",
      "port": 10003
    }
  ],
  "requirePorts": false,
  "upgradeStrategy": {
    "maximumOverCapacity": 1,
    "minimumHealthCapacity": 1
  },
  "user": "root",
  "killSelection": "YOUNGEST_FIRST",
  "unreachableStrategy": {
    "inactiveAfterSeconds": 0,
    "expungeAfterSeconds": 0
  },
  "fetch": [],
  "constraints": []
}
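
The startup trace also shows the container's services being supervised by runit (exec runsvdir -P /etc/service) with svlogd loggers for spark and nginx. A sketch of how one could inspect them from the agent, assuming the Docker containerizer as in the app definition above (<container-id> is a placeholder):

docker ps | grep spark                                           # find the dispatcher container on the agent
docker exec -it <container-id> sv status /etc/service/*          # runit status of the nginx and spark services
docker exec -it <container-id> cat /etc/nginx/conf.d/spark.conf  # the config rendered by the sed lines in the log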

The Spark service also shows as Unhealthy in the web UI. Looking at the task in Marathon, it reports: Task was killed since health check failed. Reason: 502 Bad Gateway. (A 502 suggests nginx itself answers on the proxy port but cannot reach its upstream, i.e. the dispatcher behind it is likely never coming up.)
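
Since svlogd writes each service's output into the sandbox directories created in the trace above (current is runit's name for the live log file), the dispatcher's own log should say why it is not answering; a sketch, assuming those paths on the agent:

cat /mnt/mesos/sandbox/spark/current    # Spark dispatcher stdout/stderr captured by svlogd
cat /mnt/mesos/sandbox/nginx/current    # nginx output captured by svlogd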

Any idea how to solve this?

0 Answers:

There are no answers yet.