Running against a remote Spark cluster from a local IDE

Time: 2017-02-14 19:18:35

Tags: hadoop apache-spark yarn kerberos cloudera-cdh

We have a kerberized cluster where Spark runs on YARN. Currently, we write our Spark code locally in Scala, build a fat JAR, copy it to the cluster, and run it with spark-submit. I would much rather write Spark code on my local PC and have it run directly against the cluster. Is there a straightforward way to do this? The Spark docs don't seem to describe any such pattern.

FYI, my local machine is running Windows and the cluster is running CDH.

2 Answers:

Answer 0 (score: 3):

While cricket007's answer works for spark-submit, here is what I did to run against a remote cluster from IntelliJ:

First, make sure the JARs on the client and server side are identical. Since we are using CDH 5.7.1, I made sure all JARs came from that specific distribution.

Set HADOOP_CONF_DIR and YARN_CONF_DIR as described in cricket007's answer. Set "spark.yarn.principal" and "spark.yarn.keytab" appropriately in the Spark conf.
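For example, a minimal sketch of those two settings (the principal and the keytab path below are hypothetical placeholders for your own values):

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.yarn.principal", "myuser@EXAMPLE.COM") // hypothetical Kerberos principal
  .set("spark.yarn.keytab", "C:\\kerberos\\myuser.keytab") // hypothetical local copy of the keytab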

If you are connecting to HDFS, make sure the following exclusion rule is set in your build.sbt:

libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.6.0-cdh5.7.1" excludeAll ExclusionRule(organization = "javax.servlet")

Make sure the spark-launcher and spark-yarn JARs are listed in your build.sbt:

libraryDependencies += "org.apache.spark" %% "spark-launcher" % "1.6.0-cdh5.7.1"

libraryDependencies += "org.apache.spark" %% "spark-yarn" % "1.6.0-cdh5.7.1"

Locate the CDH JARs on the server and copy them to a known location on HDFS. Then add the following lines to your code:

final val CDH_JAR_PATH = "/opt/cloudera/parcels/CDH/jars"

final val hadoopJars: Seq[String] = Seq[String](
"hadoop-annotations-2.6.0-cdh5.7.1.jar"
, "hadoop-ant-2.6.0-cdh5.7.1.jar"
, "hadoop-ant-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-archive-logs-2.6.0-cdh5.7.1.jar"
, "hadoop-archives-2.6.0-cdh5.7.1.jar"
, "hadoop-auth-2.6.0-cdh5.7.1.jar"
, "hadoop-aws-2.6.0-cdh5.7.1.jar"
, "hadoop-azure-2.6.0-cdh5.7.1.jar"
, "hadoop-capacity-scheduler-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-common-2.6.0-cdh5.7.1.jar"
, "hadoop-core-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-datajoin-2.6.0-cdh5.7.1.jar"
, "hadoop-distcp-2.6.0-cdh5.7.1.jar"
, "hadoop-examples-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-examples.jar"
, "hadoop-extras-2.6.0-cdh5.7.1.jar"
, "hadoop-fairscheduler-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-gridmix-2.6.0-cdh5.7.1.jar"
, "hadoop-gridmix-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-hdfs-2.6.0-cdh5.7.1.jar"
, "hadoop-hdfs-nfs-2.6.0-cdh5.7.1.jar"
, "hadoop-kms-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-app-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-common-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-core-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-hs-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-jobclient-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-nativetask-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-shuffle-2.6.0-cdh5.7.1.jar"
, "hadoop-nfs-2.6.0-cdh5.7.1.jar"
, "hadoop-openstack-2.6.0-cdh5.7.1.jar"
, "hadoop-rumen-2.6.0-cdh5.7.1.jar"
, "hadoop-sls-2.6.0-cdh5.7.1.jar"
, "hadoop-streaming-2.6.0-cdh5.7.1.jar"
, "hadoop-streaming-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-tools-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-yarn-api-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-applications-distributedshell-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-client-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-common-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-registry-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-server-common-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-server-nodemanager-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-server-resourcemanager-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-server-web-proxy-2.6.0-cdh5.7.1.jar"
, "hbase-hadoop2-compat-1.2.0-cdh5.7.1.jar"
, "hbase-hadoop-compat-1.2.0-cdh5.7.1.jar")

final val sparkJars: Seq[String] = Seq[String](
"spark-1.6.0-cdh5.7.1-yarn-shuffle.jar",
"spark-assembly-1.6.0-cdh5.7.1-hadoop2.6.0-cdh5.7.1.jar",
"spark-avro_2.10-1.1.0-cdh5.7.1.jar",
"spark-bagel_2.10-1.6.0-cdh5.7.1.jar",
"spark-catalyst_2.10-1.6.0-cdh5.7.1.jar",
"spark-core_2.10-1.6.0-cdh5.7.1.jar",
"spark-examples-1.6.0-cdh5.7.1-hadoop2.6.0-cdh5.7.1.jar",
"spark-graphx_2.10-1.6.0-cdh5.7.1.jar",
"spark-hive_2.10-1.6.0-cdh5.7.1.jar",
"spark-launcher_2.10-1.6.0-cdh5.7.1.jar",
"spark-mllib_2.10-1.6.0-cdh5.7.1.jar",
"spark-network-common_2.10-1.6.0-cdh5.7.1.jar",
"spark-network-shuffle_2.10-1.6.0-cdh5.7.1.jar",
"spark-repl_2.10-1.6.0-cdh5.7.1.jar",
"spark-sql_2.10-1.6.0-cdh5.7.1.jar",
"spark-streaming-flume-sink_2.10-1.6.0-cdh5.7.1.jar",
"spark-streaming-flume_2.10-1.6.0-cdh5.7.1.jar",
"spark-streaming-kafka_2.10-1.6.0-cdh5.7.1.jar",
"spark-streaming_2.10-1.6.0-cdh5.7.1.jar",
"spark-unsafe_2.10-1.6.0-cdh5.7.1.jar",
"spark-yarn_2.10-1.6.0-cdh5.7.1.jar")

def getClassPath(jarNames: Seq[String], pathPrefix: String): String = {
  jarNames.foldLeft("")((cp, name) => s"$cp:$pathPrefix/$name").drop(1)
}

Add the following lines when creating your SparkConf:

.set("spark.driver.extraClassPath", getClassPath(sparkJars ++ hadoopJars, CDH_JAR_PATH))
.set("spark.executor.extraClassPath", getClassPath(sparkJars ++ hadoopJars, CDH_JAR_PATH))
.set("spark.yarn.jars", "hdfs://$YOUR_MACHINE/PATH_TO_JARS/*")

Your program should now work.

Answer 1 (score: 1):

Assuming you have the correct packages on your classpath (easiest to set up with SBT, Maven, etc.), you should be able to spark-submit from anywhere. The --master flag is the main piece that really determines how the job is distributed. One thing to watch out for is whether your local machine is blocked from reaching the YARN cluster by a firewall or other network restrictions (since you generally don't want people randomly running applications against your cluster).

On your local machine, you will need the Hadoop configuration files from the cluster, and you will need to set up the $SPARK_HOME/conf directory to accommodate some Hadoop-related settings.

From the Spark on YARN page:

Ensure that HADOOP_CONF_DIR or YARN_CONF_DIR points to the directory which contains the (client side) configuration files for the Hadoop cluster. These configs are used to write to HDFS and connect to the YARN ResourceManager. The configuration contained in this directory will be distributed to the YARN cluster so that all containers used by the application use the same configuration.

These values are set from $SPARK_HOME/conf/spark-env.sh.

Since you are Kerberized, see Long Running Spark Applications:

For long-running applications, such as Spark Streaming jobs, to write to HDFS you must configure Kerberos authentication for Spark, and pass the Spark principal and keytab to the spark-submit script using the --principal and --keytab parameters.