While iterating over a Dataset, I want to run some SQL through the SparkSession, but it throws an exception:
val session = SparkSession
  .builder()
  .master("local")
  .appName("Spark Hive Example")
  .config("spark.sql.warehouse.dir", warehouseLocation)
  .enableHiveSupport()
  .getOrCreate()
import session.implicits._

session.sql("use test")
val dataFrame = session.sql("select * from t1")
dataFrame.map(row => {
  // session.sql("select * from tab2").show()
  // How can I run this here?
  // It throws a NullPointerException.
}).show()
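For context, the function passed to `map` runs on the executors, where the driver-side `SparkSession` is not available (it is transient in the closure, so the reference is null there), which is consistent with the NullPointerException. Below is a minimal, self-contained sketch of one possible workaround: expressing the per-row lookup against the second table as a join instead of calling `session.sql` inside `map`. The tables and data here are made-up in-memory stand-ins for t1 and tab2, since I don't have the real Hive schemas:

```scala
import org.apache.spark.sql.SparkSession

val session = SparkSession
  .builder()
  .master("local[*]")
  .appName("Map SQL workaround sketch")
  .getOrCreate()
import session.implicits._

// Hypothetical in-memory stand-ins for the Hive tables t1 and tab2.
val t1 = Seq((1, "a"), (2, "b")).toDF("id", "v1")
val tab2 = Seq((1, "x"), (2, "y")).toDF("id", "v2")

// session only exists on the driver; closing over it inside map()
// leaves executors with a null reference, hence the NPE. The same
// per-row lookup can be expressed as a driver-side join instead:
val joined = t1.join(tab2, "id")
joined.show()
```

If tab2 is small, another common pattern is to `collect()` it on the driver and broadcast the result, then reference the broadcast value (plain Scala data, not a SparkSession) inside `map`.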
Can anyone help me? Thanks!