Livy pyspark Python session error in Jupyter with Spark Magic - ERROR repl.PythonInterpreter: Process has died with 1

Date: 2017-02-10 13:13:46

Tags: apache-spark pyspark jupyter livy

I am running a Spark v2.0.0 YARN cluster, with Livy running alongside the Spark master.

I have set up a Jupyter Python 3 notebook, installed Spark Magic, and followed the necessary instructions to connect Spark Magic to Livy. However, when I create a session I get the following error message in the notebook.
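For reference, Spark Magic reads its Livy endpoint from ~/.sparkmagic/config.json; a minimal sketch of the relevant part, assuming an unauthenticated endpoint (the URL matches my setup):

{
  "kernel_python_credentials": {
    "username": "",
    "password": "",
    "url": "http://spark-master:8998"
  }
}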

Added endpoint http://spark-master:8998
Starting Spark application

ID  YARN Application ID Kind    State   Spark UI    Driver log  Current session?
0   None    pyspark idle            ✔
---------------------------------------------------------------------------
LivyUnexpectedStatusException             Traceback (most recent call last)
/opt/conda/lib/python3.5/site-packages/hdijupyterutils/ipywidgetfactory.py in submit_clicked(self, button)
     63 
     64     def submit_clicked(self, button):
---> 65         self.parent_widget.run()

/opt/conda/lib/python3.5/site-packages/sparkmagic/controllerwidget/createsessionwidget.py in run(self)
     56 
     57         try:
---> 58             self.spark_controller.add_session(alias, endpoint, skip, properties)
     59         except ValueError as e:
     60             self.ipython_display.send_error("""Could not add session with

/opt/conda/lib/python3.5/site-packages/sparkmagic/livyclientlib/sparkcontroller.py in add_session(self, name, endpoint, skip_if_exists, properties)
     79         session = self._livy_session(http_client, properties, self.ipython_display)
     80         self.session_manager.add_session(name, session)
---> 81         session.start()
     82 
     83     def get_session_id_for_client(self, name):

/opt/conda/lib/python3.5/site-packages/sparkmagic/livyclientlib/livysession.py in start(self)
    148             else:
    149                 command = Command("sqlContext")
--> 150                 (success, out) = command.execute(self)
    151                 if success:
    152                     self.ipython_display.writeln(u"SparkContext available as 'sc'.")

/opt/conda/lib/python3.5/site-packages/sparkmagic/livyclientlib/command.py in execute(self, session)
     29         statement_id = -1
     30         try:
---> 31             session.wait_for_idle()
     32             data = {u"code": self.code}
     33             response = session.http_client.post_statement(session.id, data)

/opt/conda/lib/python3.5/site-packages/sparkmagic/livyclientlib/livysession.py in wait_for_idle(self, seconds_to_wait)
    238                     .format(self.id, self.status)
    239                 self.logger.error(error)
--> 240                 raise LivyUnexpectedStatusException(u'{} See logs:\n{}'.format(error, self.get_logs()))
    241 
    242             if seconds_to_wait <= 0.0:

LivyUnexpectedStatusException: Session 0 unexpectedly reached final status 'error'. See logs:

This is the error I get from the Livy logs when creating a new session in the Manage Spark section of Jupyter:

17/02/10 13:06:08 INFO StateStore$: Using BlackholeStateStore for recovery.
17/02/10 13:06:08 INFO BatchSessionManager: Recovered 0 batch sessions. Next session id: 0
17/02/10 13:06:08 INFO InteractiveSessionManager: Recovered 0 interactive sessions. Next session id: 0
17/02/10 13:06:08 INFO InteractiveSessionManager: Heartbeat watchdog thread started.
17/02/10 13:06:08 INFO WebServer: Starting server on http://spark-master:8998
17/02/10 13:06:34 INFO InteractiveSession$: Creating LivyClient for sessionId: 0
17/02/10 13:06:34 WARN RSCConf: Your hostname, spark-master, resolves to a loopback address, but we couldn't find any external IP address!
17/02/10 13:06:34 WARN RSCConf: Set livy.rsc.rpc.server.address if you need to bind to another address.
17/02/10 13:06:35 INFO InteractiveSessionManager: Registering new session 0
17/02/10 13:06:35 INFO ContextLauncher: 17/02/10 13:06:35 INFO driver.RSCDriver: Starting RPC server...
17/02/10 13:06:35 INFO ContextLauncher: 17/02/10 13:06:35 WARN rsc.RSCConf: Set livy.rsc.rpc.server.address if you need to bind to another address.
17/02/10 13:06:35 INFO ContextLauncher: 17/02/10 13:06:35 INFO driver.RSCDriver: Received job request 3ca8a52b-8dd5-41f0-8151-a8201d72d422
17/02/10 13:06:35 INFO ContextLauncher: 17/02/10 13:06:35 INFO driver.RSCDriver: SparkContext not yet up, queueing job request.
17/02/10 13:06:36 INFO ContextLauncher: Setting default log level to "WARN".
17/02/10 13:06:36 INFO ContextLauncher: To adjust logging level use sc.setLogLevel(newLevel).
17/02/10 13:06:36 INFO ContextLauncher: 17/02/10 13:06:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/02/10 13:06:37 INFO ContextLauncher: 17/02/10 13:06:37 ERROR repl.PythonInterpreter: Process has died with 1
17/02/10 13:06:37 INFO RSCClient: Received result for 3ca8a52b-8dd5-41f0-8151-a8201d72d422

That is all the output that appears in the Livy logs.

I cannot pinpoint the exact problem or a fix. If I set the session to use the Scala language instead of Python, I can create a connection successfully; but if I set the session language to Python, I only get the error. If anyone knows a solution for connecting a livy-repl pyspark session from Jupyter, please let me know!

UPDATE

Livy is still failing to create a PySpark session:

curl -v -X POST --data '{"kind": "pyspark"}' -H "Content-Type: application/json" example.com/sessions
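To watch the state transition I poll the session with the standard Livy REST API (session id 0 from the response above, host elided as before):

curl example.com/sessions/0
# response JSON includes a "state" field: "starting", "idle", "error", ...
curl example.com/sessions/0/log
# returns the session's log lines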

The session state goes straight from "starting" to "error". The YARN log on the Resource Manager shows the following right before the Livy session fails.

To adjust logging level use sc.setLogLevel(newLevel).
17/02/26 05:02:25 WARN rsc.RSCConf: Your hostname, yarn-slave1, resolves to a loopback address, but we couldn't find any external IP address!
17/02/26 05:02:25 WARN rsc.RSCConf: Set livy.rsc.rpc.server.address if you need to bind to another address.
17/02/26 05:02:32 ERROR repl.PythonInterpreter: Process has died with 1
17/02/26 05:02:33 WARN yarn.YarnAllocator: Container marked as failed: container_1488085279373_0001_01_000002 on host: yarn-slave1. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1488085279373_0001_01_000002
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
    at org.apache.hadoop.util.Shell.run(Shell.java:479)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
17/02/26 05:02:33 WARN yarn.ApplicationMaster: Reporter thread fails 1 time(s) in a row.
java.lang.IllegalStateException: RpcEnv already stopped.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
    at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:185)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:508)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:531)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:512)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:512)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:442)
    at scala.collection.Iterator$class.foreach(Iterator.scala:742)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at org.apache.spark.deploy.yarn.YarnAllocator.processCompletedContainers(YarnAllocator.scala:442)
    at org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:242)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$1.run(ApplicationMaster.scala:372)
17/02/26 05:02:40 WARN yarn.YarnAllocator: Container marked as failed: container_1488085279373_0001_01_000005 on host: yarn-slave1. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1488085279373_0001_01_000005
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
    at org.apache.hadoop.util.Shell.run(Shell.java:479)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
17/02/26 05:02:40 WARN yarn.ApplicationMaster: Reporter thread fails 1 time(s) in a row.
java.lang.IllegalStateException: RpcEnv already stopped.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
    at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:185)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:508)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:531)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:512)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:512)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:442)
    at scala.collection.Iterator$class.foreach(Iterator.scala:742)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at org.apache.spark.deploy.yarn.YarnAllocator.processCompletedContainers(YarnAllocator.scala:442)
    at org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:242)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$1.run(ApplicationMaster.scala:372)
17/02/26 05:02:47 WARN yarn.YarnAllocator: Container marked as failed: container_1488085279373_0001_01_000006 on host: yarn-slave1. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1488085279373_0001_01_000006
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
    at org.apache.hadoop.util.Shell.run(Shell.java:479)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
17/02/26 05:02:47 WARN yarn.ApplicationMaster: Reporter thread fails 1 time(s) in a row.
java.lang.IllegalStateException: RpcEnv already stopped.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
    at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:185)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:508)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:531)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:512)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:512)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:442)
    at scala.collection.Iterator$class.foreach(Iterator.scala:742)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at org.apache.spark.deploy.yarn.YarnAllocator.processCompletedContainers(YarnAllocator.scala:442)
    at org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:242)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$1.run(ApplicationMaster.scala:372)
17/02/26 05:02:53 WARN yarn.YarnAllocator: Container marked as failed: container_1488085279373_0001_01_000007 on host: yarn-slave1. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1488085279373_0001_01_000007
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
    at org.apache.hadoop.util.Shell.run(Shell.java:479)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
17/02/26 05:02:53 WARN yarn.ApplicationMaster: Reporter thread fails 1 time(s) in a row.
java.lang.IllegalStateException: RpcEnv already stopped.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
    at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:185)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:508)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:531)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1$$anonfun$apply$7.apply(YarnAllocator.scala:512)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:512)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$processCompletedContainers$1.apply(YarnAllocator.scala:442)
    at scala.collection.Iterator$class.foreach(Iterator.scala:742)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at org.apache.spark.deploy.yarn.YarnAllocator.processCompletedContainers(YarnAllocator.scala:442)
    at org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:242)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$1.run(ApplicationMaster.scala:372)

spark-defaults.conf

spark.yarn.appMasterEnv.PYSPARK_PYTHON python2
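Since spark.yarn.appMasterEnv.PYSPARK_PYTHON points at python2, one thing worth verifying (a diagnostic sketch, nothing more) is that the interpreter actually exists on the node that ran the failed containers:

# yarn-slave1 is the host named in the container failures above
ssh yarn-slave1 'command -v python2 && python2 --version'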

core-site.xml

<property>
  <name>hadoop.proxyuser.livy.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.livy.hosts</name>
  <value>*</value>
</property>
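With those proxy-user rules in place, Livy may impersonate the notebook user; the Livy session API also accepts an explicit proxyUser field in the request body (the username below is purely illustrative):

curl -X POST -H "Content-Type: application/json" \
    --data '{"kind": "pyspark", "proxyUser": "notebook-user"}' \
    example.com/sessions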

livy.conf

livy.server.host = 0.0.0.0
livy.server.port = 8998
livy.spark.master = yarn
livy.spark.deployMode = cluster

3 Answers:

Answer 0 (score: 4)

I was able to reproduce the issue.

The problem seems to be that Spark 2.0.0 and Livy ship incompatible PySpark versions. If you upgrade Spark to the latest release (currently 2.1.0), the two PySpark versions can communicate and the Spark session is created without any trouble.
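After upgrading, make sure Livy picks up the new installation; a sketch of the relevant setting in conf/livy-env.sh (the install path is an example):

# point Livy at the upgraded Spark; path is an example
export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7

Then restart the Livy server and re-create the session from the notebook.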

Answer 1 (score: 1)

I ran into a similar problem even with Spark 2.1.1 and Livy: the Livy session state went from "starting" to "error". It turned out I was using Java 7, while Livy and Spark need Java 8. Switching to Java 8 solved my problem.
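A quick check, plus one way to pin Livy to Java 8 (the JVM path is an example for Debian-style systems):

java -version   # should report 1.8.x, not 1.7.x

# in conf/livy-env.sh
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64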

Answer 2 (score: 0)

I faced a similar issue. It turned out the culprit was the Livy build: after replacing the Cloudera Livy with the Apache livy-0.6.0-incubating release, the problem was resolved and I was able to create pyspark-kind sessions on Livy.
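For anyone wanting to reproduce the swap, a sketch assuming the binary release from the Apache archive (verify the URL against the current archive layout before relying on it):

wget https://archive.apache.org/dist/incubator/livy/0.6.0-incubating/apache-livy-0.6.0-incubating-bin.zip
unzip apache-livy-0.6.0-incubating-bin.zip
cd apache-livy-0.6.0-incubating-bin
./bin/livy-server start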