How can we set up a shared Spark installation for multiple users (by default, db.lck blocks other users from opening it)?

Asked: 2017-06-28 16:23:32

Tags: apache-spark

We would like students to be able to start spark-shell or pyspark as their own users. However, the Derby database locks the process to one user:

-rw-r--r-- 1 myuser staff   38 Jun 28 10:40 db.lck

These errors appear:

ERROR PoolWatchThread: Error in trying to obtain a connection. Retrying in 7000ms
java.sql.SQLException: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
    at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.setReadOnly(Unknown Source)
    at com.jolbox.bonecp.ConnectionHandle.setReadOnly(ConnectionHandle.java:1324)
    at com.jolbox.bonecp.ConnectionHandle.<init>(ConnectionHandle.java:262)
    at com.jolbox.bonecp.PoolWatchThread.fillConnections(PoolWatchThread.java:115)
    at com.jolbox.bonecp.PoolWatchThread.run(PoolWatchThread.java:82)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Caused by: ERROR 25505: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
    at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
    at org.apache.derby.impl.sql.conn.GenericAuthorizer.setReadOnlyConnection(Unknown Source)
    at org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.setReadOnly(Unknown Source)

Is there a workaround or best practice for this situation?

I then tried configuring MySQL using these instructions, but this happens:

[Fatal Error] hive-site.xml:7:2: The markup in the document following the root element must be well-formed.
17/06/28 12:14:13 ERROR Configuration: error parsing conf file:/usr/local/bin/spark-2.1.1-bin-hadoop2.7/conf/hive-site.xml
org.xml.sax.SAXParseException; systemId: file:/usr/local/bin/spark-2.1.1-bin-hadoop2.7/conf/hive-site.xml; lineNumber: 7; columnNumber: 2; The markup in the document following the root element must be well-formed.
    ... 74 more
<console>:14: error: not found: value spark
       import spark.implicits._
              ^
<console>:14: error: not found: value spark
       import spark.sql
              ^

Here are the contents of the XML file:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost/metastore</value>
  <description>the URL of the MySQL database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>ourpassword</value>
</property>

<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>false</value>
</property>

<property>
  <name>datanucleus.fixedDatastore</name>
  <value>true</value>
</property>

<property>
  <name>hive.metastore.uris</name>
  <value>thrift://ourip:9083</value>
  <description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>

Edit: after adding opening and closing <configuration> tags, I get this:

17/06/28 12:28:50 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/28 12:28:52 WARN metastore: Failed to connect to the MetaStore Server...
17/06/28 12:28:53 WARN metastore: Failed to connect to the MetaStore Server...
17/06/28 12:28:54 WARN metastore: Failed to connect to the MetaStore Server...
17/06/28 12:28:55 WARN Hive: Failed to access metastore. This class should not accessed in runtime.
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1236)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:466)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
    ... 96 more
<console>:14: error: not found: value spark
       import spark.implicits._
              ^
<console>:14: error: not found: value spark
       import spark.sql
              ^
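
The repeated "Failed to connect to the MetaStore Server" warnings suggest nothing is listening at the thrift://ourip:9083 address set in hive-site.xml. Assuming Hive itself is installed on that host (an assumption, not shown above), a standalone metastore service can typically be started with:

# Assumed prerequisite: Hive is installed on the host named in hive.metastore.uris.
# Starts a standalone metastore service listening on the default port 9083.
hive --service metastore &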

2 Answers:

Answer 0 (score: 1)

> Is there a workaround or best practice for this situation?

Yes. Have the students use their own Spark installations (don't use a shared installation, as it buys you nothing).

After all, Spark is just a library for developing applications for distributed data processing, and spark-shell is merely there to help people get started with Spark on the command line.

The cause of the issue is that spark-shell (and Spark by default) uses a Derby database for the catalog and Hive metastore, which can only be used by a single user at a time. Setting it up differently takes more effort than simply giving each user a separate Spark installation.
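
If you still want to keep a single shared installation, a minimal sketch of a workaround is to give each user their own metastore and warehouse directories (spark.sql.warehouse.dir and derby.system.home are standard Spark and Derby settings; the paths under $HOME are placeholders):

# Sketch: per-user Derby metastore and warehouse locations, so that
# users' metastore_db directories and db.lck files never collide.
mkdir -p "$HOME/.spark-derby"   # Derby expects its system directory to exist
spark-shell \
  --conf spark.sql.warehouse.dir="file://$HOME/spark-warehouse" \
  --conf spark.driver.extraJavaOptions="-Dderby.system.home=$HOME/.spark-derby"

Each user then gets a private metastore_db under their own home directory instead of sharing the one in the installation directory.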

Side note: have you considered using Databricks Cloud, so the students wouldn't even have to care about the command line?

Answer 1 (score: 0)

Dziekuje, Jacek, for the suggestion. I was able to configure it to use MySQL instead of Derby. I had to launch it with the --jars /usr/share/java/mysql-connector-java.jar option. Is there a way to add that option to the spark-shell script?
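
One way to make the flag permanent, as a sketch (spark.jars is a standard Spark property; the JAR path is the one above), is to set it in conf/spark-defaults.conf rather than editing the spark-shell script itself:

# Sketch: register the MySQL JDBC driver in spark-defaults.conf so every
# spark-shell launch picks it up without an explicit --jars flag.
echo "spark.jars /usr/share/java/mysql-connector-java.jar" \
  >> "$SPARK_HOME/conf/spark-defaults.conf"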

I tested it on another workstation, and PostgreSQL following this tip also seems to work well. It was a bit trickier on Fedora, but once I ran the correct init command and configured the pg_hba.conf, it didn't seem to need the --jars option.
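
For reference, the Fedora setup typically looks something like this (an assumption based on standard Fedora PostgreSQL packaging, not the exact commands from the linked tip):

# Assumed Fedora bootstrap; adjust to your distro and PostgreSQL version.
sudo postgresql-setup --initdb          # initialize the data directory
sudo systemctl enable --now postgresql  # start the server now and at boot
# Then allow password (md5) auth for the metastore user in
# /var/lib/pgsql/data/pg_hba.conf and restart the service.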