So I'm trying to stand up a local spark-jobserver instance to test jobs against, and I can't even get it to run.
The first thing I do when I SSH into my Vagrant box is start Spark. I know Spark itself works, because I can submit jobs to it with the spark-submit utility it ships with. Then I go to my local clone of spark-jobserver and run
vagrant@cassandra-spark:~/spark-jobserver$ sudo sbt
[info] Loading project definition from /home/vagrant/spark-jobserver/project
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
[info] Set current project to root (in build file:/home/vagrant/spark-jobserver/)
> reStart /home/vagrant/spark-jobserver/config/local.conf
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 21 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 35 ms
[success] created output: /home/vagrant/spark-jobserver/job-server/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 6 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 6 ms
[success] created output: /home/vagrant/spark-jobserver/job-server-extras/target
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 3 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 8 ms
[success] created output: /home/vagrant/spark-jobserver/job-server-api/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 11 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 7 ms
[success] created output: /home/vagrant/spark-jobserver/akka-app/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 3 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 9 ms
[success] created output: /home/vagrant/spark-jobserver/job-server-api/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 11 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 6 ms
[success] created output: /home/vagrant/spark-jobserver/akka-app/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 21 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 2 ms
[success] created output: /home/vagrant/spark-jobserver/job-server/target
[info] Application job-server not yet started
[info] Starting application job-server in the background ...
job-server Starting spark.jobserver.JobServer.main(/home/vagrant/spark-jobserver/config/local.conf)
job-server[ERROR] Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[warn] No main class detected
[info] Application job-server-extras not yet started
[info] Starting application job-server-extras in the background ...
job-server-extras Starting spark.jobserver.JobServer.main(/home/vagrant/spark-jobserver/config/local.conf)
job-server-extras[ERROR] Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[success] Total time: 6 s, completed Jun 12, 2015 2:28:32 PM
> job-server-extras[ERROR] log4j:WARN No appenders could be found for logger (spark.jobserver.JobServer$).
job-server-extras[ERROR] log4j:WARN Please initialize the log4j system properly.
job-server-extras[ERROR] log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
>
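(As an aside, the log4j warnings above just mean no log4j configuration was picked up, so the server's own output goes nowhere useful. A minimal log4j.properties along these lines would at least give console logging; where to put it — e.g. on the classpath under job-server/src/main/resources or via -Dlog4j.configuration — is an assumption about this setup:)
# Minimal log4j 1.2 config: send everything at INFO and above to the console
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{HH:mm:ss} %-5p %c{1} - %m%n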
In another terminal I SSH into the Vagrant box and run
vagrant@cassandra-spark:~$ curl --data-binary @/home/vagrant/SQLJob/target/scala-2.10/CassSparkTest-assembly-1.0.jar localhost:8090/jars
The requested resource could not be found.
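(For what it's worth, a quick way to tell whether the server itself is listening — assuming the default port 8090 — is to GET the jars list; a running server should answer with a JSON map of uploaded jars, possibly empty:)
vagrant@cassandra-spark:~$ curl localhost:8090/jars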
Here is what's in my config/local.conf:
# Template for a Spark Job Server configuration file
# When deployed these settings are loaded when job server starts
#
# Spark Cluster / Job Server configuration
spark {
  # spark.master will be passed to each job's JobContext
  master = "spark://192.168.10.11:7077"
  # master = "mesos://vm28-hulk-pub:5050"
  # master = "yarn-client"

  # Default # of CPUs for jobs to use for Spark standalone cluster
  job-number-cpus = 1

  # predefined Spark contexts
  # contexts {
  #   my-low-latency-context {
  #     num-cpu-cores = 1        # Number of cores to allocate. Required.
  #     memory-per-node = 512m   # Executor memory per node, -Xmx style eg 512m, 1G, etc.
  #   }
  #   # define additional contexts here
  # }

  # universal context configuration. These settings can be overridden, see README.md
  context-settings {
    num-cpu-cores = 1            # Number of cores to allocate. Required.
    memory-per-node = 512m       # Executor memory per node, -Xmx style eg 512m, 1G, etc.
    spark.cassandra.connection.host = "127.0.0.1"

    # in case spark distribution should be accessed from HDFS (as opposed to being installed on every mesos slave)
    # spark.executor.uri = "hdfs://namenode:8020/apps/spark/spark.tgz"

    # uris of jars to be loaded into the classpath for this context. Uris is a string list, or a string separated by commas ','
    dependent-jar-uris = ["file:///home/vagrant/lib/spark-cassandra-connector-assembly-1.3.0-M2-SNAPSHOT.jar"]

    # If you wish to pass any settings directly to the sparkConf as-is, add them here in passthrough,
    # such as hadoop connection settings that don't use the "spark." prefix
    passthrough {
      #es.nodes = "192.1.1.1"
    }
  }

  # This needs to match SPARK_HOME for cluster SparkContexts to be created successfully
  home = "/home/vagrant/spark"
}
# Note that you can use this file to define settings not only for job server,
# but for your Spark jobs as well. Spark job configuration merges with this configuration file as defaults.
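(For reference, the context-settings block above only provides defaults; with spark-jobserver's REST API a named context can also be created ad hoc, roughly like this — the context name and parameter values here are just placeholders:)
# create a context with 1 core and 512m per executor (hypothetical name/values)
vagrant@cassandra-spark:~$ curl -d "" 'localhost:8090/contexts/my-context?num-cpu-cores=1&memory-per-node=512m'
# list existing contexts
vagrant@cassandra-spark:~$ curl localhost:8090/contexts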
Answer:
Figured out the problem: the server was actually starting correctly (it just wasn't logging properly).
The issue was that I was missing a "/" at the end of the path I was passing to curl.
So to fix it, I changed the curl statement to:
vagrant@cassandra-spark:~$ curl --data-binary @/home/vagrant/SQLJob/target/scala-2.10/CassSparkTest-assembly-1.0.jar localhost:8090/jars/
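(Once the upload succeeds, the usual README-style flow is to name the app on upload and then reference that name when starting a job. A sketch of that follow-up, where the app name "cass-test" and the classPath value are placeholders for whatever CassSparkTest actually defines:)
# upload the assembly under an app name
vagrant@cassandra-spark:~$ curl --data-binary @/home/vagrant/SQLJob/target/scala-2.10/CassSparkTest-assembly-1.0.jar localhost:8090/jars/cass-test
# run a job from that jar synchronously; classPath must point at a class implementing the SparkJob trait
vagrant@cassandra-spark:~$ curl -d "" 'localhost:8090/jobs?appName=cass-test&classPath=com.example.MySqlJob&sync=true'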