I'm running Spark 2.1.0 on an AWS EMR cluster (set up following https://aws.amazon.com/blogs/big-data/running-jupyter-notebook-and-jupyterhub-on-amazon-emr/).
I'm trying to query a table that exists and contains data in a remote Hive instance. Spark infers the schema correctly, but the table contents come back empty. Any ideas?
import os
import findspark
findspark.init('/usr/lib/spark/')
# Spark related imports
from pyspark.sql import SparkSession
from pyspark import SparkContext
sc = SparkContext.getOrCreate()
spark = SparkSession.builder.config(conf=sc.getConf()).getOrCreate()
remote_hive = "jdbc:hive2://myhost:10000/mydb"
driver = "org.apache.hive.jdbc.HiveDriver"
user="user"
password = "password"
df = spark.read.format("jdbc") \
    .options(url=remote_hive,
             driver=driver,
             user=user,
             password=password,
             dbtable="mytable") \
    .load()
df.printSchema()
# returns the right schema
df.count()
# returns 0
Answer 0 (score: 0)
You can try:
df = spark \
    .read.format("jdbc") \
    .option("driver", driver) \
    .option("url", remote_hive) \
    .option("dbtable", "mytable") \
    .option("user", "user") \
    .option("password", "password") \
    .load()
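As a quick sanity check, you could verify whether the load actually fetched rows and not just the schema. A minimal sketch, assuming the same `remote_hive`, `driver`, and credentials defined in the question and that the result above was assigned to `df`:

# Inspect the schema inferred over JDBC
df.printSchema()
# Count the rows actually fetched; should be > 0 if data came back
print(df.count())
# Preview the first few rows to confirm the values look right
df.show(5)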