I have been trying to connect PySpark to a Redshift data source on EMR, but I can't get it to work. Here is what I have tried:
On EMR, Spark lives in /usr/lib/spark and the jar files in /usr/lib/spark/jars.
1. The first approach: I downloaded the dependencies and put them in /usr/lib/spark/jars:
sudo wget -P /usr/lib/spark/jars/ http://repo1.maven.org/maven2/com/databricks/spark-redshift_2.10/2.0.0/spark-redshift_2.10-2.0.0.jar
sudo wget -P /usr/lib/spark/jars/ http://repo1.maven.org/maven2/com/databricks/spark-avro_2.11/3.0.0/spark-avro_2.11-3.0.0.jar
sudo wget -P /usr/lib/spark/jars/ https://github.com/ralfstx/minimal-json/releases/download/0.9.4/minimal-json-0.9.4.jar
sudo wget -P /usr/lib/spark/jars/ https://s3.amazonaws.com/redshift-downloads/drivers/RedshiftJDBC42-1.2.1.1001.jar
Then I started pyspark, passing the jars:
pyspark --jars /usr/lib/spark/jars/spark-redshift_2.10-2.0.0.jar,/usr/lib/spark/jars/spark-avro_2.11-3.0.0.jar,/usr/lib/spark/jars/minimal-json-0.9.4.jar,/usr/lib/spark/jars/RedshiftJDBC42-1.2.1.1001.jar
After starting pyspark with the jar files, I ran:
from pyspark.sql import SQLContext
sc
sql_context = SQLContext(sc)
# Read data from a query
df_users = sql_context.read \
    .format("com.databricks.spark.redshift") \
    .option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
    .option("query", "select * from table limit 200;") \
    .option("tempdir", "s3n://path/for/temp/data") \
    .load()
The error message looks like this:
Traceback (most recent call last):
File "<stdin>", line 6, in <module>
File "/usr/lib/spark/python/pyspark/sql/readwriter.py", line 155, in load
return self._df(self._jreader.load())
File "/usr/lib/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
File "/usr/lib/spark/python/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/usr/lib/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o55.load.
: java.lang.ClassNotFoundException: Could not load an Amazon Redshift JDBC driver; see the README for instructions on downloading and configuring the official Amazon driver.
at com.databricks.spark.redshift.JDBCWrapper$$anonfun$getDriverClass$1.apply(RedshiftJDBCWrapper.scala:81)
at com.databricks.spark.redshift.JDBCWrapper$$anonfun$getDriverClass$1.apply(RedshiftJDBCWrapper.scala:71)
at scala.Option.getOrElse(Option.scala:121)
at com.databricks.spark.redshift.JDBCWrapper.getDriverClass(RedshiftJDBCWrapper.scala:70)
at com.databricks.spark.redshift.JDBCWrapper.getConnector(RedshiftJDBCWrapper.scala:183)
at com.databricks.spark.redshift.RedshiftRelation$$anonfun$schema$1.apply(RedshiftRelation.scala:63)
at com.databricks.spark.redshift.RedshiftRelation$$anonfun$schema$1.apply(RedshiftRelation.scala:60)
at scala.Option.getOrElse(Option.scala:121)
at com.databricks.spark.redshift.RedshiftRelation.schema$lzycompute(RedshiftRelation.scala:60)
at com.databricks.spark.redshift.RedshiftRelation.schema(RedshiftRelation.scala:59)
at org.apache.spark.sql.execution.datasources.LogicalRelation.<init>(LogicalRelation.scala:40)
at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:389)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:146)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: com.amazon.redshift.jdbc4.Driver
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at com.databricks.spark.redshift.Utils$.classForName(Utils.scala:42)
at com.databricks.spark.redshift.JDBCWrapper$$anonfun$getDriverClass$1.apply(RedshiftJDBCWrapper.scala:78)
... 24 more
2. The second approach I tried was using package names instead. I started pyspark with:
export SPARK_HOME='/usr/lib/spark'
$SPARK_HOME/bin/pyspark --packages databricks:spark-redshift:0.4.0-hadoop2,com.databricks:spark-avro_2.11:3.2.0
This gives me the same error as above. Has anyone run into the same problem and figured out how to deal with it?
Thanks in advance.
Answer 0 (score: 2)
For some reason Spark is not finding the JDBC driver you downloaded - possibly a file-permissions problem.
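A quick way to rule that out is to confirm the jar actually landed where you expect and is world-readable (paths copied from the question; adjust to whatever you downloaded):
ls -l /usr/lib/spark/jars/RedshiftJDBC42-1.2.1.1001.jar
sudo chmod 644 /usr/lib/spark/jars/*.jar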
On EMR, though, the driver is already in place, so you can just reference it like this:
pyspark --jars … /usr/share/aws/redshift/jdbc/RedshiftJDBC41.jar
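A minimal end-to-end sketch of that suggestion (the spark-redshift jar versions are taken from the question; the host, credentials, and S3 bucket are placeholders):
pyspark --jars /usr/lib/spark/jars/spark-redshift_2.10-2.0.0.jar,/usr/lib/spark/jars/spark-avro_2.11-3.0.0.jar,/usr/lib/spark/jars/minimal-json-0.9.4.jar,/usr/share/aws/redshift/jdbc/RedshiftJDBC41.jar

# then, in the pyspark shell:
from pyspark.sql import SQLContext

sql_context = SQLContext(sc)  # sc is the SparkContext the shell creates for you
df_users = sql_context.read \
    .format("com.databricks.spark.redshift") \
    .option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass") \
    .option("query", "select * from table limit 200;") \
    .option("tempdir", "s3n://path/for/temp/data") \
    .load()

If the driver class still cannot be resolved, the spark-redshift README also documents a jdbcdriver option for naming the driver class explicitly; with the bundled jar above that would be com.amazon.redshift.jdbc41.Driver.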