java.lang.NullPointerException when reading data from an MSSQL server with Spark

Asked: 2017-05-24 08:44:50

Tags: scala apache-spark apache-spark-sql spark-streaming spark-dataframe

I am running into a problem reading data from an MSSQL server using Cloudera Spark. I am not sure where the problem lies or what is causing it.

Here is my build.sbt:

val sparkversion = "1.6.0-cdh5.10.1"
name := "SimpleSpark"
organization := "com.huff.spark"
version := "1.0"
scalaVersion := "2.10.5"
mainClass in Compile := Some("com.huff.spark.example.SimpleSpark")
assemblyJarName in assembly := "mssql.jar"


libraryDependencies ++= Seq(
    "org.apache.spark" %% "spark-streaming-kafka" % "1.6.0" % "provided",
    "org.apache.spark" %% "spark-streaming" % "1.6.0" % "provided",
    "org.apache.spark" % "spark-core_2.10" % sparkversion  % "provided", // to test in cluseter
    "org.apache.spark" % "spark-sql_2.10" % sparkversion % "provided" // to test in cluseter
)

resolvers += "Confluent IO" at "http://packages.confluent.io/maven"
resolvers += "Cloudera Repository" at "https://repository.cloudera.com/artifactory/cloudera-repos"

Here is my Scala source:

package com.huff.spark.example

import org.apache.spark.sql._
import java.sql.{Connection, DriverManager}
import java.util.Properties
import org.apache.spark.{SparkContext, SparkConf}

object SimpleSpark {
    def main(args: Array[String]) {
        val sourceProp = new java.util.Properties
        val conf = new SparkConf().setAppName("SimpleSpark").setMaster("yarn-cluster")  //to test in cluster
        val sc = new SparkContext(conf)
        var SqlContext = new SQLContext(sc)
        val driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"

        val jdbcDF = SqlContext.read.format("jdbc").options(Map("url" -> "jdbc:sqlserver://sqltestsrver;databaseName=LEh;user=sparkaetl;password=sparkaetl","driver" -> driver,"dbtable" -> "StgS")).load()

        jdbcDF.show(5)
    }
}

Here is the error I see:

17/05/24 04:35:20 ERROR ApplicationMaster: User class threw exception: java.lang.NullPointerException
java.lang.NullPointerException
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:155)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
    at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:222)
    at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:146)
    at com.huff.spark.example.SimpleSpark$.main(SimpleSpark.scala:16)
    at com.huff.spark.example.SimpleSpark.main(SimpleSpark.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:552)
17/05/24 04:35:20 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.lang.NullPointerException)

I know the problem is at line 16:

val jdbcDF = SqlContext.read.format("jdbc").options(Map("url" -> "jdbc:sqlserver://sqltestsrver;databaseName=LEh;user=sparkaetl;password=sparkaetl","driver" -> driver,"dbtable" -> "StgS")).load()

but I cannot pin down exactly what the problem is. Is it access related (doubtful)? A problem with the connection parameters (the error message would say so)? Or something else I am not aware of? Thanks in advance :-)

2 Answers:

Answer 0 (score: 1)

If you are using Azure SQL Server, copy the JDBC connection string from the Azure portal. I tried it and it worked for me.

Azure Databricks, using Scala:

import com.microsoft.sqlserver.jdbc.SQLServerDriver
import java.sql.DriverManager
import org.apache.spark.sql.SQLContext
// sqlContext is predefined in a Databricks notebook, which makes this import valid
import sqlContext.implicits._

// MS SQL JDBC Connection String ... 
val jdbcSqlConn = "jdbc:sqlserver://***.database.windows.net:1433;database=**;user=***;password=****;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"

// Load the MS SQL table via the SQL context into a DataFrame
val jdbcDF = sqlContext.read.format("jdbc").options(
  Map("url" -> jdbcSqlConn,
      "driver" -> "com.microsoft.sqlserver.jdbc.SQLServerDriver",
      "dbtable" -> "***")).load()

// Registering the temp table so that we can SQL like query against the table 
jdbcDF.registerTempTable("yourtablename")
// selecting only top 10 rows here but you can use any sql statement
val yourdata = sqlContext.sql("SELECT * FROM yourtablename LIMIT 10")
// display the data 
yourdata.show()
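
On Spark 1.x you can express the same load by passing the credentials through a java.util.Properties object to DataFrameReader.jdbc instead of the options Map. A minimal sketch, reusing jdbcSqlConn and the placeholder table name "yourtablename" from above:

import java.util.Properties

// Minimal sketch: the same load via read.jdbc with a Properties object.
// jdbcSqlConn and "yourtablename" are the placeholders from the snippet above;
// sqlContext is the SQLContext provided by the notebook.
val connProps = new Properties()
connProps.setProperty("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")

val df = sqlContext.read.jdbc(jdbcSqlConn, "yourtablename", connProps)
df.show(10) // first 10 rows, like the LIMIT 10 query above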

Answer 1 (score: 0)

The NPE occurs when Spark tries to close the database Connection, which indicates that it could not obtain a working connection through JdbcUtils.createConnectionFactory. You should check your connection URL and the failure logs.
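
Because the NPE masks the underlying connection failure, it can help to open the connection with plain JDBC first so that the real SQLException surfaces. A minimal sketch, assuming the Microsoft JDBC driver jar is on the classpath and reusing the placeholder URL from the question:

import java.sql.DriverManager

// Minimal sketch: connect directly so the real SQLException surfaces
// instead of being swallowed by Spark's resolveTable NPE.
// Host, database, and credentials are the placeholders from the question.
Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver")
val url = "jdbc:sqlserver://sqltestsrver;databaseName=LEh;user=sparkaetl;password=sparkaetl"
val conn = DriverManager.getConnection(url)
try {
  val rs = conn.createStatement().executeQuery("SELECT 1")
  while (rs.next()) println(rs.getInt(1))
} finally {
  conn.close()
}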