Running a Spark Structured Streaming Scala application that writes to files using nohup

Time: 2019-06-16 04:25:22

Tags: scala apache-spark spark-structured-streaming nohup

I have written a Spark Structured Streaming application in Scala that runs in batch mode. I am trying to run it using:

nohup spark2-shell -i /home/sandeep/spark_test.scala --master yarn --deploy-mode client

Here is the spark_test.scala file:

import org.apache.spark.sql._
import org.apache.spark.sql.types.StructType
import org.apache.spark.SparkConf

// Schemas for the two streaming CSV sources
val data_schema1 = new StructType().add("a","string").add("b","string").add("c","string")
val data_schema2 = new StructType().add("d","string").add("e","string").add("f","string")

// Read each directory as a streaming source of comma-separated files
val data1 = spark.readStream.option("sep", ",").schema(data_schema1).csv("/tmp/data1/")
val data2 = spark.readStream.option("sep", ",").schema(data_schema2).csv("/tmp/data2/")

// Register temporary views so the join can be written in SQL
data1.createOrReplaceTempView("sample_data1")
data2.createOrReplaceTempView("sample_data2")

// Stream-stream inner join on the key columns (written with an explicit JOIN ... ON)
val df = sql("select sd1.a, sd1.b, sd2.e, sd2.f from sample_data1 sd1 join sample_data2 sd2 on sd1.a = sd2.d")

// Continuously append the joined rows to /tmp/output as CSV files
df.writeStream.format("csv").option("format", "append").option("path", "/tmp/output").option("checkpointLocation", "/tmp/output_cp").outputMode("append").start()
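
For a non-interactive run, one common pattern is to keep the StreamingQuery handle returned by start() and block on it so the driver stays alive while the stream runs. A minimal sketch of that variant of the last line above (the query value name is illustrative and not part of the original script):

// Illustrative variant: capture the query handle instead of discarding it
val query = df.writeStream.format("csv").option("path", "/tmp/output").option("checkpointLocation", "/tmp/output_cp").outputMode("append").start()
query.awaitTermination()  // returns only when the query is stopped or fails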

I need the application to keep running in the background even after the terminal is closed. It is a very small application, so I do not want to submit it with spark-submit. The code runs fine without nohup, but when I use nohup I get the following error.

java.io.IOException: Bad file descriptor
        at java.io.FileInputStream.readBytes(Native Method)
        at java.io.FileInputStream.read(FileInputStream.java:229)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:229)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:246)
        at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
        at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
        at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
        at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
        at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
        at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
        at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
        at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
        at javax.xml.parsers.DocumentBuilder.parse(Unknown Source)
        at mypackage.MyXmlParser.parseFile(MyXmlParser.java:397)
        at mypackage.MyXmlParser.access$500(MyXmlParser.java:51)
        at mypackage.MyXmlParser$1.call(MyXmlParser.java:337)
        at mypackage.MyXmlParser$1.call(MyXmlParser.java:328)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:284)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:665)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:690)
        at java.lang.Thread.run(Thread.java:799)

1 Answer:

Answer 0 (score: 0)

Add an & at the end of your nohup command. The "&" symbol at the end of the command instructs bash to run nohup mycommand in the background.

nohup spark2-shell -i /home/sandeep/spark_test.scala --master yarn --deploy-mode client &

Refer to this link for more details on the nohup command.
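
For completeness, a variant of the same command that also redirects stdout and stderr to a log file (the log path is illustrative) instead of letting nohup append them to nohup.out:

nohup spark2-shell -i /home/sandeep/spark_test.scala --master yarn --deploy-mode client > /tmp/spark_test.log 2>&1 &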