I want to save Spark DataFrames into a Hive table so that I can query them and extract the latitude and longitude from them, since a Spark DataFrame is not iterable.
Using PySpark in Jupyter, I wrote this code to create a Spark session:
import findspark
findspark.init()
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
# read multiple csv with pyspark
spark = SparkSession \
.builder \
.appName("Python Spark SQL basic example") \
.config("spark.sql.catalogImplementation=hive").enableHiveSupport() \
.getOrCreate()
df = spark.read.csv("Desktop/train/train.csv",header=True);
Pickup_locations=df.select("pickup_datetime","Pickup_latitude",
"Pickup_longitude")
print(Pickup_locations.count())
Then I run the HiveQL:
df.createOrReplaceTempView("mytempTable")
spark.sql("create table hive_table as select * from mytempTable");
And I get this error:
Py4JJavaError: An error occurred while calling o24.sql.
: org.apache.spark.sql.AnalysisException: Hive support is required to CREATE Hive TABLE (AS SELECT);;
'CreateTable `hive_table`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, ErrorIfExists
+- Project [id#311, vendor_id#312, pickup_datetime#313, dropoff_datetime#314, passenger_count#315, pickup_longitude#316, pickup_latitude#317, dropoff_longitude#318, dropoff_latitude#319, store_and_fwd_flag#320, trip_duration#321]
Answer 0 (score: 3)
I have been in this situation before. You need to pass a configuration parameter to the spark-submit command so that it considers Hive as the catalog implementation for Spark SQL.
Here is what the spark-submit looks like:
spark-submit --deploy-mode cluster --master yarn --conf spark.sql.catalogImplementation=hive --class harri_sparkStreaming.com_spark_streaming.App ./target/com-spark-streaming-2.3.0-jar-with-dependencies.jar
The trick is in: --conf spark.sql.catalogImplementation=hive
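Since you are building the session from a Jupyter notebook rather than through spark-submit, you can also set the same property directly on the builder. Here is a minimal sketch, reusing the CSV path and table name from your question; note that config() takes the key and the value as two separate arguments, and that enableHiveSupport() already sets spark.sql.catalogImplementation to hive, so either alone should work. This assumes your Hive metastore is reachable from the notebook (e.g., hive-site.xml on the classpath or a local embedded metastore):

from pyspark.sql import SparkSession

# Enable Hive as the catalog implementation before the session is created.
# The key and the value are passed as two separate arguments to config().
spark = SparkSession \
    .builder \
    .appName("Python Spark SQL basic example") \
    .config("spark.sql.catalogImplementation", "hive") \
    .enableHiveSupport() \
    .getOrCreate()

df = spark.read.csv("Desktop/train/train.csv", header=True)
df.createOrReplaceTempView("mytempTable")

# With Hive support enabled, the CTAS statement should now succeed.
spark.sql("create table hive_table as select * from mytempTable")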
Hope this helps.