Error when trying to save a pyspark dataframe

Time: 2018-08-03 15:55:57

Tags: apache-spark pyspark apache-spark-sql jupyter

I have a python script that uses pyspark and runs fine through jupyter. When run with spark-submit, for some reason it crashes when it tries to save the results with the following lines:

df.write.format('jdbc').options(
    url='jdbc:mysql://{0}/{1}?useServerPrepStmts=false&rewriteBatchedStatements=true'.format(
        output_server, output_db),
    driver='com.mysql.jdbc.Driver',
    dbtable=output_table,
    user='user',
    password='xxxx').mode('overwrite').save()

The error is:

Traceback (most recent call last):
  File "/opt/spark-2.1.0-bin-hadoop2.7/sbin/test.py", line 381, in <module>
    password='xxxx').mode('overwrite').save()
  File "/opt/spark-2.1.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 548, in save
  File "/opt/spark-2.1.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/opt/spark-2.1.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
    if records_acum:
  File "/opt/spark-2.1.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o55.save.
: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:38)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:78)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:78)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:78)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:34)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:53)
    at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:426)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)

If I try to run it with

/opt/Spark/spark-2.2.0_hadoop-2.7/bin/spark-submit --packages mysql:mysql-connector-java:5.1.40 test.py

then the crash is avoided, but the script never finishes; it just hangs on the same df.save line. In case it isn't clear, I want the script to run to completion and save the data successfully.

2 Answers:

Answer 0: (score: 0)

Try adding the driver class path to your spark-submit command:

/opt/Spark/spark-2.2.0_hadoop-2.7/bin/spark-submit --driver-class-path=path/to/mysqlconnector.jar test.py
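
Note that --driver-class-path only puts the jar on the driver's classpath. To ship the driver to the executors as well, you can also pass the jar with --jars. A sketch of the combined command, assuming a hypothetical jar location /path/to/mysql-connector-java-5.1.40.jar:

/opt/Spark/spark-2.2.0_hadoop-2.7/bin/spark-submit \
    --driver-class-path /path/to/mysql-connector-java-5.1.40.jar \
    --jars /path/to/mysql-connector-java-5.1.40.jar \
    test.py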

Answer 1: (score: 0)

I found the following: Add jars to a Spark Job - spark-submit, which should help resolve your loading problem. It looks like the executors cannot get hold of the MySQL driver.
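
If changing the submit command is not an option, the jar can also be attached when the SparkSession is built, which distributes it to the executors at context startup. A minimal sketch, assuming the same hypothetical jar path; note that the driver's own classpath generally still has to be set at launch time (e.g. via --driver-class-path), because the driver JVM is already running when this code executes:

from pyspark.sql import SparkSession

# Hypothetical path -- point this at the real MySQL connector jar.
mysql_jar = '/path/to/mysql-connector-java-5.1.40.jar'

# spark.jars ships the jar to the executors when the context starts.
spark = SparkSession.builder \
    .appName('jdbc-write') \
    .config('spark.jars', mysql_jar) \
    .getOrCreate()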