Error with Spark 1.3.1 (PySpark) and MongoDB 3.4

Date: 2017-03-30 15:44:44

Tags: mongodb apache-spark pyspark

I have a very simple script that saves a DataFrame with two columns to MongoDB:

from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
from pyspark.sql.types import *
from pyspark.sql.functions import col, udf
from datetime import datetime


sparkConf = SparkConf().setMaster("local").setAppName("Wiki-Analyzer").set("spark.app.id", "Wiki-Analyzer")
sparkConf.set("spark.mongodb.input.uri", "...")
sparkConf.set("spark.mongodb.output.uri", "...")

sc = SparkContext(conf=sparkConf)
sqlContext = SQLContext(sc)    

charactersRdd = sc.parallelize([("Bilbo Baggins",  50), ("Gandalf", 1000), ("Thorin", 195), ("Balin", 178), ("Kili", 77), ("Dwalin", 169), ("Oin", 167), ("Gloin", 158), ("Fili", 82), ("Bombur", None)])
characters = sqlContext.createDataFrame(charactersRdd, ["name", "age"])
characters.write.format("com.mongodb.spark.sql.DefaultSource").mode("overwrite").save()

But I get the following error:

py4j.protocol.Py4JJavaError: An error occurred while calling o91.apply.
: org.apache.spark.sql.AnalysisException: Cannot resolve column name "write" among (name, age);
        at org.apache.spark.sql.DataFrame$$anonfun$resolve$1.apply(DataFrame.scala:162)
        at org.apache.spark.sql.DataFrame$$anonfun$resolve$1.apply(DataFrame.scala:162)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.sql.DataFrame.resolve(DataFrame.scala:161)
        at org.apache.spark.sql.DataFrame.col(DataFrame.scala:447)
        at org.apache.spark.sql.DataFrame.apply(DataFrame.scala:437)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:207)
        at java.lang.Thread.run(Thread.java:745)

I am running the script with:

spark-submit --packages org.mongodb.spark:mongo-spark-connector_2.10:1.1.0 wiki-analyzer.py

Thanks in advance!

2 Answers:

Answer 0 (score: 2):

The problem is with this line:

characters.write.format("com.mongodb.spark.sql.DefaultSource").mode("overwrite").save()

.write is being interpreted as selecting a column named "write". This happens because you are using Spark 1.3.1, whose generic load/save functions do not support the .write syntax (see Spark 1.3.1 docs); that syntax is only supported in Spark 1.4.0+ (see Spark 1.4.0 docs).
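To see why the stack trace ends in column resolution, here is a minimal sketch (assuming a local Spark 1.3.x session; the data is trimmed from your script): in PySpark 1.3.x, accessing an unknown attribute on a DataFrame falls back to column lookup.

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext("local", "write-attr-demo")
sqlContext = SQLContext(sc)
df = sqlContext.createDataFrame([("Bilbo Baggins", 50)], ["name", "age"])

# On 1.3.x, DataFrame has no write property, so attribute access falls
# back to column lookup; both lines raise the same AnalysisException
# ('Cannot resolve column name "write" among (name, age)'):
df.write
df["write"]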

If you must stay on Spark 1.3.x, try:

characters.save(source="com.mongodb.spark.sql.DefaultSource", mode="overwrite")

(based on the DataFrame.save() Python API docs for Spark 1.3.x).
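To sanity-check the written collection on 1.3.x, the matching old-style load call would be something like the sketch below (it assumes the spark.mongodb.input.uri already set in the SparkConf above):

# Spark 1.3.x counterpart of the later sqlContext.read.format(...).load():
loaded = sqlContext.load(source="com.mongodb.spark.sql.DefaultSource")
loaded.show()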

That said, I would recommend upgrading to a newer Spark version (1.6.x or 2.1.x).
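After such an upgrade, the DataFrameWriter chain from your script works as written. A sketch for Spark 2.x with connector 2.0.0 (the database and collection names here are placeholders):

characters.write \
    .format("com.mongodb.spark.sql.DefaultSource") \
    .mode("overwrite") \
    .option("database", "test") \
    .option("collection", "characters") \
    .save()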

Answer 1 (score: 1):

Spark 1.3.x is not supported by the MongoDB Spark Connector.

See the documentation:

+-----------------------------+---------------+-----------------+
| MongoDB Connector for Spark | Spark Version | MongoDB Version |
+-----------------------------+---------------+-----------------+
|                       2.0.0 | 2.0.x         | 2.6 or later    |
|                       1.1.0 | 1.6.x         | 2.6 or later    |
+-----------------------------+---------------+-----------------+

I strongly recommend upgrading your Spark installation, as there have been many improvements since 1.3.
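For example, moving to Spark 2.0.x would mean pulling the Scala 2.11 build of the 2.0.0 connector in the submit command (script name taken from the question):

spark-submit --packages org.mongodb.spark:mongo-spark-connector_2.11:2.0.0 wiki-analyzer.py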