Can anyone give me a clue why I get 'Cannot call methods on a stopped SparkContext'?

Time: 2016-03-18 12:37:05

Tags: hadoop apache-spark

When I run 'val lines = sc.textFile("hdfs:///input")' in yarn-client mode, the error 'Cannot call methods on a stopped SparkContext' occurs. I have been searching for two days, but I cannot find the cause. The path "hdfs:///input" is correct, because the same line works fine when I run it in standalone mode.

Could you give me any ideas? I am using Spark 1.5.2 and Hadoop 2.7.2.

starting org.apache.spark.deploy.master.Master, logging to /opt/spark-1.5.2-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-master.out
192.168.111.203: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark-1.5.2-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave2.out
192.168.111.202: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark-1.5.2-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave1.out
[root@master spark-1.5.2-bin-hadoop2.6]# bin/spark-shell --master yarn-client
16/03/19 05:59:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/03/19 05:59:12 INFO spark.SecurityManager: Changing view acls to: root
16/03/19 05:59:12 INFO spark.SecurityManager: Changing modify acls to: root
16/03/19 05:59:12 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/03/19 05:59:13 INFO spark.HttpServer: Starting HTTP Server
16/03/19 05:59:13 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/03/19 05:59:13 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:46780
16/03/19 05:59:13 INFO util.Utils: Successfully started service 'HTTP class server' on port 46780.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.5.2
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_73)
Type in expressions to have them evaluated.
Type :help for more information.
16/03/19 05:59:17 INFO spark.SparkContext: Running Spark version 1.5.2
16/03/19 05:59:17 WARN spark.SparkConf: 
SPARK_JAVA_OPTS was detected (set to '-Dspark.driver.port=53411').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with conf/spark-defaults.conf to set defaults for an application
 - ./spark-submit with --driver-java-options to set -X options for a driver
 - spark.executor.extraJavaOptions to set -X options for executors
 - SPARK_DAEMON_JAVA_OPTS to set java options for standalone daemons (master or worker)

16/03/19 05:59:17 WARN spark.SparkConf: Setting 'spark.executor.extraJavaOptions' to '-Dspark.driver.port=53411' as a work-around.
16/03/19 05:59:17 WARN spark.SparkConf: Setting 'spark.driver.extraJavaOptions' to '-Dspark.driver.port=53411' as a work-around.
16/03/19 05:59:17 INFO spark.SecurityManager: Changing view acls to: root
16/03/19 05:59:17 INFO spark.SecurityManager: Changing modify acls to: root
16/03/19 05:59:17 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/03/19 05:59:18 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/03/19 05:59:18 INFO Remoting: Starting remoting
16/03/19 05:59:18 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.111.201:53411]
16/03/19 05:59:18 INFO util.Utils: Successfully started service 'sparkDriver' on port 53411.
16/03/19 05:59:18 INFO spark.SparkEnv: Registering MapOutputTracker
16/03/19 05:59:18 INFO spark.SparkEnv: Registering BlockManagerMaster
16/03/19 05:59:18 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-f70b1bb6-288b-4894-bb49-22d1fc3d8d89
16/03/19 05:59:18 INFO storage.MemoryStore: MemoryStore started with capacity 534.5 MB
16/03/19 05:59:18 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-58591b6b-5b19-4bc0-a993-0b846de5ef6f/httpd-fe0c46a2-1d87-4bc7-8b4f-adfc79cb762a
16/03/19 05:59:18 INFO spark.HttpServer: Starting HTTP Server
16/03/19 05:59:18 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/03/19 05:59:18 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:40258
16/03/19 05:59:18 INFO util.Utils: Successfully started service 'HTTP file server' on port 40258.
16/03/19 05:59:18 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/03/19 05:59:18 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/03/19 05:59:18 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/03/19 05:59:18 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
16/03/19 05:59:18 INFO ui.SparkUI: Started SparkUI at http://192.168.111.201:4040
16/03/19 05:59:19 WARN metrics.MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/03/19 05:59:19 INFO client.RMProxy: Connecting to ResourceManager at /192.168.111.201:8032
16/03/19 05:59:19 INFO yarn.Client: Requesting a new application from cluster with 2 NodeManagers
16/03/19 05:59:19 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
16/03/19 05:59:19 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
16/03/19 05:59:19 INFO yarn.Client: Setting up container launch context for our AM
16/03/19 05:59:19 INFO yarn.Client: Setting up the launch environment for our AM container
16/03/19 05:59:19 INFO yarn.Client: Preparing resources for our AM container
16/03/19 05:59:21 INFO yarn.Client: Uploading resource file:/opt/spark-1.5.2-bin-hadoop2.6/lib/spark-assembly-1.5.2-hadoop2.6.0.jar -> hdfs://192.168.111.201:9000/user/root/.sparkStaging/application_1458334003417_0002/spark-assembly-1.5.2-hadoop2.6.0.jar
16/03/19 05:59:25 INFO yarn.Client: Uploading resource file:/tmp/spark-58591b6b-5b19-4bc0-a993-0b846de5ef6f/__spark_conf__2052137095112870542.zip -> hdfs://192.168.111.201:9000/user/root/.sparkStaging/application_1458334003417_0002/__spark_conf__2052137095112870542.zip
16/03/19 05:59:25 INFO spark.SecurityManager: Changing view acls to: root
16/03/19 05:59:25 INFO spark.SecurityManager: Changing modify acls to: root
16/03/19 05:59:25 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/03/19 05:59:25 INFO yarn.Client: Submitting application 2 to ResourceManager
16/03/19 05:59:25 INFO impl.YarnClientImpl: Submitted application application_1458334003417_0002
16/03/19 05:59:26 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:26 INFO yarn.Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1458334765746
     final status: UNDEFINED
     tracking URL: http://master:8088/proxy/application_1458334003417_0002/
     user: root
16/03/19 05:59:27 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:28 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:29 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:30 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:31 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:32 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:33 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:34 INFO yarn.Client: Application report for application_1458334003417_0002 (state: ACCEPTED)
16/03/19 05:59:35 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as AkkaRpcEndpointRef(Actor[akka.tcp://sparkYarnAM@192.168.111.203:46505/user/YarnAM#149895142])
16/03/19 05:59:35 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> master, PROXY_URI_BASES -> http://master:8088/proxy/application_1458334003417_0002), /proxy/application_1458334003417_0002
16/03/19 05:59:35 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/03/19 05:59:35 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster has disassociated: 192.168.111.203:46505
16/03/19 05:59:35 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkYarnAM@192.168.111.203:46505] has failed, address is now gated for [5000] ms. Reason: [Disassociated] 
16/03/19 05:59:35 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster has disassociated: 192.168.111.203:46505
16/03/19 05:59:35 INFO yarn.Client: Application report for application_1458334003417_0002 (state: RUNNING)
16/03/19 05:59:35 INFO yarn.Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 192.168.111.203
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1458334765746
     final status: UNDEFINED
     tracking URL: http://master:8088/proxy/application_1458334003417_0002/
     user: root
16/03/19 05:59:35 INFO cluster.YarnClientSchedulerBackend: Application application_1458334003417_0002 has started running.
16/03/19 05:59:36 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 42938.
16/03/19 05:59:36 INFO netty.NettyBlockTransferService: Server created on 42938
16/03/19 05:59:36 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/03/19 05:59:36 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.111.201:42938 with 534.5 MB RAM, BlockManagerId(driver, 192.168.111.201, 42938)
16/03/19 05:59:36 INFO storage.BlockManagerMaster: Registered BlockManager
16/03/19 05:59:40 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as AkkaRpcEndpointRef(Actor[akka.tcp://sparkYarnAM@192.168.111.203:34633/user/YarnAM#-40449267])
16/03/19 05:59:40 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> master, PROXY_URI_BASES -> http://master:8088/proxy/application_1458334003417_0002), /proxy/application_1458334003417_0002
16/03/19 05:59:40 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/03/19 05:59:41 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster has disassociated: 192.168.111.203:34633
16/03/19 05:59:41 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster has disassociated: 192.168.111.203:34633
16/03/19 05:59:41 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkYarnAM@192.168.111.203:34633] has failed, address is now gated for [5000] ms. Reason: [Disassociated] 
16/03/19 05:59:41 ERROR cluster.YarnClientSchedulerBackend: Yarn application has already exited with state FINISHED!
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/03/19 05:59:41 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/03/19 05:59:41 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.111.201:4040
16/03/19 05:59:41 INFO scheduler.DAGScheduler: Stopping DAGScheduler
16/03/19 05:59:41 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
16/03/19 05:59:41 INFO cluster.YarnClientSchedulerBackend: Asking each executor to shut down
16/03/19 05:59:41 INFO cluster.YarnClientSchedulerBackend: Stopped
16/03/19 05:59:42 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/03/19 05:59:42 INFO storage.MemoryStore: MemoryStore cleared
16/03/19 05:59:42 INFO storage.BlockManager: BlockManager stopped
16/03/19 05:59:42 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
16/03/19 05:59:42 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/03/19 05:59:42 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/03/19 05:59:42 INFO spark.SparkContext: Successfully stopped SparkContext
16/03/19 05:59:42 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/03/19 05:59:49 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
16/03/19 05:59:49 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.
16/03/19 05:59:49 INFO hive.HiveContext: Initializing execution hive, version 1.2.1
16/03/19 05:59:49 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0
16/03/19 05:59:49 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
16/03/19 05:59:50 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/03/19 05:59:50 INFO metastore.ObjectStore: ObjectStore, initialize called
16/03/19 05:59:50 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
16/03/19 05:59:50 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
16/03/19 05:59:50 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/03/19 05:59:51 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/03/19 05:59:53 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
16/03/19 05:59:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 05:59:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 05:59:56 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 05:59:56 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 05:59:56 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/03/19 05:59:56 INFO metastore.ObjectStore: Initialized ObjectStore
16/03/19 05:59:57 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
16/03/19 05:59:57 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
16/03/19 05:59:57 INFO metastore.HiveMetaStore: Added admin role in metastore
16/03/19 05:59:57 INFO metastore.HiveMetaStore: Added public role in metastore
16/03/19 05:59:58 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
16/03/19 05:59:58 INFO metastore.HiveMetaStore: 0: get_all_databases
16/03/19 05:59:58 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr  cmd=get_all_databases   
16/03/19 05:59:58 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
16/03/19 05:59:58 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr  cmd=get_functions: db=default pat=* 
16/03/19 05:59:58 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 05:59:58 INFO session.SessionState: Created HDFS directory: /tmp/hive/root
16/03/19 05:59:58 INFO session.SessionState: Created local directory: /tmp/root
16/03/19 05:59:58 INFO session.SessionState: Created local directory: /tmp/e16dc45f-de41-4e69-9f73-c976cc3358c9_resources
16/03/19 05:59:58 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/e16dc45f-de41-4e69-9f73-c976cc3358c9
16/03/19 05:59:58 INFO session.SessionState: Created local directory: /tmp/root/e16dc45f-de41-4e69-9f73-c976cc3358c9
16/03/19 05:59:58 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/e16dc45f-de41-4e69-9f73-c976cc3358c9/_tmp_space.db
16/03/19 05:59:58 INFO hive.HiveContext: default warehouse location is /user/hive/warehouse
16/03/19 05:59:58 INFO hive.HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
16/03/19 05:59:58 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0
16/03/19 05:59:59 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
16/03/19 06:00:00 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/03/19 06:00:00 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/03/19 06:00:00 INFO metastore.ObjectStore: ObjectStore, initialize called
16/03/19 06:00:00 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
16/03/19 06:00:00 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
16/03/19 06:00:00 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/03/19 06:00:00 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/03/19 06:00:01 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
16/03/19 06:00:02 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 06:00:02 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 06:00:04 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 06:00:04 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 06:00:04 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/03/19 06:00:04 INFO metastore.ObjectStore: Initialized ObjectStore
16/03/19 06:00:04 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
16/03/19 06:00:05 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
16/03/19 06:00:05 INFO metastore.HiveMetaStore: Added admin role in metastore
16/03/19 06:00:05 INFO metastore.HiveMetaStore: Added public role in metastore
16/03/19 06:00:05 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
16/03/19 06:00:05 INFO metastore.HiveMetaStore: 0: get_all_databases
16/03/19 06:00:05 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr  cmd=get_all_databases   
16/03/19 06:00:06 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
16/03/19 06:00:06 INFO HiveMetaStore.audit: ugi=root    ip=unknown-ip-addr  cmd=get_functions: db=default pat=* 
16/03/19 06:00:06 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
16/03/19 06:00:06 INFO session.SessionState: Created local directory: /tmp/b046e212-ccbd-4415-aec3-5b207f147fda_resources
16/03/19 06:00:06 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/b046e212-ccbd-4415-aec3-5b207f147fda
16/03/19 06:00:06 INFO session.SessionState: Created local directory: /tmp/root/b046e212-ccbd-4415-aec3-5b207f147fda
16/03/19 06:00:06 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/b046e212-ccbd-4415-aec3-5b207f147fda/_tmp_space.db
16/03/19 06:00:06 INFO repl.SparkILoop: Created sql context (with Hive support)..
SQL context available as sqlContext.

scala> val lines = sc.textFile("hdfs:///input")
java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext
    at org.apache.spark.SparkContext.org$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:104)
    at org.apache.spark.SparkContext.defaultParallelism(SparkContext.scala:2063)
    at org.apache.spark.SparkContext.defaultMinPartitions(SparkContext.scala:2076)
    at org.apache.spark.SparkContext.textFile$default$2(SparkContext.scala:825)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:21)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:26)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:28)
    at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
    at $iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
    at $iwC$$iwC$$iwC.<init>(<console>:34)
    at $iwC$$iwC.<init>(<console>:36)
    at $iwC.<init>(<console>:38)
    at <init>(<console>:40)
    at .<init>(<console>:44)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1340)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
    at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
    at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

2 Answers:

Answer 0 (score: 6)

Your YARN application exits immediately after it starts:

16/03/19 05:59:41 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster has disassociated: 192.168.111.203:34633
16/03/19 05:59:41 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster has disassociated: 192.168.111.203:34633
16/03/19 05:59:41 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkYarnAM@192.168.111.203:34633] has failed, address is now gated for [5000] ms. Reason: [Disassociated] 
16/03/19 05:59:41 ERROR cluster.YarnClientSchedulerBackend: Yarn application has already exited with state FINISHED!

The SparkContext is then shut down, so any subsequent operation on it throws the exception you are seeing.
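
You can reproduce the same failure mode by hand in any spark-shell (this is only an illustration of the mechanism, not your exact situation):

    sc.stop()                                  // the context is now stopped
    val lines = sc.textFile("hdfs:///input")   // throws java.lang.IllegalStateException:
                                               // Cannot call methods on a stopped SparkContext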

Check the Application Master logs (accessible through YARN's web UI) to see why it failed. It could be a memory configuration problem, a network issue (for example, an unreachable host), and so on; the driver-side log (which is what you pasted) will not tell you which one it is.
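
For example, assuming log aggregation is enabled in your YARN setup, you can also fetch the aggregated container logs for the failed application from the command line, using the application id shown in your output:

    yarn logs -applicationId application_1458334003417_0002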

Answer 1 (score: 4)

I ran into this problem in my Spark Structured Streaming application when I forgot to include the following:

spark.streams.awaitAnyTermination()
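
For context, here is a minimal sketch of where that call belongs (Spark 2.x Structured Streaming; the socket source, console sink, host, and port are illustrative placeholders, not from the question above):

    import org.apache.spark.sql.SparkSession

    object StreamingApp {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("StreamingApp")
          .getOrCreate()

        // Illustrative source and sink; any streaming query behaves the same way.
        val lines = spark.readStream
          .format("socket")
          .option("host", "localhost")
          .option("port", 9999)
          .load()

        val query = lines.writeStream
          .format("console")
          .start()

        // Without this call, main() returns as soon as the query starts, the
        // driver shuts the SparkContext down, and any later use of it fails
        // with "Cannot call methods on a stopped SparkContext".
        spark.streams.awaitAnyTermination()
      }
    }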