I am running Spark locally and want to access Hive tables located in a remote Hadoop cluster.
I can access Hive by launching beeline under SPARK_HOME:
[ml@master spark-2.0.0]$./bin/beeline
Beeline version 1.2.1.spark2 by Apache Hive
beeline> !connect jdbc:hive2://remote_hive:10000
Connecting to jdbc:hive2://remote_hive:10000
Enter username for jdbc:hive2://remote_hive:10000: root
Enter password for jdbc:hive2://remote_hive:10000: ******
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/ml/spark/spark-2.0.0/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/10/12 19:06:39 INFO jdbc.Utils: Supplied authorities: remote_hive:10000
16/10/12 19:06:39 INFO jdbc.Utils: Resolved authority: remote_hive:10000
16/10/12 19:06:39 INFO jdbc.HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://remote_hive:10000
Connected to: Apache Hive (version 1.2.1000.2.4.2.0-258)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://remote_hive:10000>
How can I access remote Hive tables programmatically from Spark?
Answer 0 (score: 15)
Spark connects directly to the Hive metastore, not through HiveServer2. To configure this, put hive-site.xml on your classpath and set hive.metastore.uris to where your Hive metastore is hosted. Also see How to connect to a Hive metastore programmatically in SparkSQL?
Import org.apache.spark.sql.hive.HiveContext, as it can perform SQL queries over Hive tables. Define val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc) and verify that sqlContext.sql("show tables") works; a combined sketch follows.
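Putting these steps together in Spark 2.0 terms (where SparkSession supersedes HiveContext), here is a minimal sketch. The URI thrift://remote_hive:9083 is an assumption: the host comes from the question and 9083 is the Hive metastore's default thrift port; adjust both to your cluster.

import org.apache.spark.sql.SparkSession

// Pass the metastore URI directly instead of shipping hive-site.xml on the classpath.
// "thrift://remote_hive:9083" is an assumed address -- adjust host and port as needed.
val spark = SparkSession.builder()
  .appName("RemoteHiveExample")
  .config("hive.metastore.uris", "thrift://remote_hive:9083")
  .enableHiveSupport()              // requires a Spark build with Hive support
  .getOrCreate()

spark.sql("show tables").show()     // should list the remote cluster's tables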
Also have a look at connecting apache spark with apache hive remotely. Note that beeline also connects through JDBC, as is evident from your log:
[ml@master spark-2.0.0]$ ./bin/beeline
Beeline version 1.2.1.spark2 by Apache Hive
beeline> !connect jdbc:hive2://remote_hive:10000
Connecting to jdbc:hive2://remote_hive:10000
So, please have a look at this interesting article. Currently the HiveServer2 driver doesn't allow us to use the "Sparkling" Method 1 and 2, so we can only rely on Method 3.
Below is an example code snippet through which this can be achieved: loading data from one Hadoop cluster (aka "remote") into another one (where my Spark lives, aka "domestic") through a HiveServer2 JDBC connection.
import java.sql.{Connection, DriverManager, ResultSet, Timestamp}
import scala.collection.mutable.MutableList

case class StatsRec (
  first_name: String,
  last_name: String,
  action_dtm: Timestamp,
  size: Long,
  size_p: Long,
  size_d: Long
)

// Register the Hive JDBC driver and fill in the remote HiveServer2 details
// (URL and user taken from the question; the password is a placeholder).
Class.forName("org.apache.hive.jdbc.HiveDriver")
val url = "jdbc:hive2://remote_hive:10000"
val user = "root"
val password = "******"

val conn: Connection = DriverManager.getConnection(url, user, password)
val res: ResultSet = conn.createStatement
                         .executeQuery("SELECT * FROM stats_201512301914")

// Materialize the JDBC result set into a local collection of case-class records
val fetchedRes = MutableList[StatsRec]()
while (res.next()) {
  val rec = StatsRec(res.getString("first_name"),
     res.getString("last_name"),
     Timestamp.valueOf(res.getString("action_dtm")),
     res.getLong("size"),
     res.getLong("size_p"),
     res.getLong("size_d"))
  fetchedRes += rec
}
conn.close()

// Parallelize the fetched rows into an RDD on the "domestic" cluster
val rddStatsDelta = sc.parallelize(fetchedRes)
rddStatsDelta.cache()

// Basically we are done. To check loaded data:
println(rddStatsDelta.count)
rddStatsDelta.collect.take(10).foreach(println)
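If the goal is to query the fetched rows with Spark SQL on the "domestic" cluster, a possible follow-up (a sketch; stats_delta is a hypothetical view name) turns the RDD into a DataFrame:

import spark.implicits._   // `spark` is the SparkSession provided by the Spark 2.0 shell

val dfStatsDelta = rddStatsDelta.toDF()
dfStatsDelta.createOrReplaceTempView("stats_delta")    // hypothetical view name
spark.sql("SELECT COUNT(*) FROM stats_delta").show()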
Answer 1 (score: 0)
After providing the hive-site.xml configuration to Spark, and after starting the Hive metastore service, two things need to be configured in the Spark session when connecting to Hive.
Something like:
SparkSession spark = SparkSession.builder()
    .appName("Spark_SQL_5_Save To Hive")
    .enableHiveSupport()
    .getOrCreate();
spark.sparkContext().conf().set("spark.sql.warehouse.dir", "/user/hive/warehouse");
spark.sparkContext().conf().set("hive.metastore.uris", "thrift://localhost:9083");
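Whether settings applied to the SparkContext conf after getOrCreate() are picked up depends on when the Hive metastore client is first initialized; a variant that avoids that question (sketched in Scala, with the same assumed values) supplies both settings to the builder instead:

import org.apache.spark.sql.SparkSession

// Same settings as above, but supplied at session-build time
val spark = SparkSession.builder()
  .appName("Spark_SQL_5_Save To Hive")
  .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
  .config("hive.metastore.uris", "thrift://localhost:9083")
  .enableHiveSupport()
  .getOrCreate()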
Hope this helps!
Answer 2 (score: 0)
According to the documentation:
Note that the hive.metastore.warehouse.dir property in hive-site.xml is deprecated since Spark 2.0.0. Instead, use spark.sql.warehouse.dir to specify the default location of databases in a warehouse.
So in your SparkSession the warehouse location is specified with spark.sql.warehouse.dir, while the remote metastore is still addressed through hive.metastore.uris (there is no spark.sql.uris setting):
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("Python Spark SQL Hive integration example") \
    .config("hive.metastore.uris", "thrift://<remote_ip>:9083") \
    .enableHiveSupport() \
    .getOrCreate()

spark.sql("show tables").show()