Connecting to MS SQL Server with Spark

Date: 2015-01-30 20:43:09

Tags: apache-spark

I am trying to load data from a SQL Server database using Spark's JdbcRDD. I am using version 4.0 of the Microsoft JDBC driver. Here is a piece of the code:

public static JdbcRDD<Object[]> load() {
    SparkConf conf = new SparkConf().setMaster("local").setAppName("myapp");
    JavaSparkContext context = new JavaSparkContext(conf);
    // DbConnection is a custom Function0<Connection> that opens the JDBC connection (definition not shown)
    DbConnection connection = new DbConnection(
            "com.microsoft.sqlserver.jdbc.SQLServerDriver",
            "my-connection-string", "test", "test");
    JdbcRDD<Object[]> jdbcRDD = new JdbcRDD<Object[]>(
            context.sc(), connection, "select * from <table>",
            1, 1000, 1, new JobMapper(),
            ClassManifestFactory$.MODULE$.fromClass(Object[].class));
    return jdbcRDD;
}

public static void main(String[] args) {
    JdbcRDD<Object[]> jdbcRDD = load();
    JavaRDD<Object[]> javaRDD = JavaRDD.fromRDD(jdbcRDD, ClassManifestFactory$.MODULE$.fromClass(Object[].class));
    List<String> ids = javaRDD.map(new Function<Object[],String>(){
       public String call(final Object[] record){
           return (String)record[0];
       }
    }).collect();
    System.out.println(ids);
}

I get the following exception:

java.lang.AbstractMethodError: com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.isClosed()Z
at org.apache.spark.rdd.JdbcRDD$$anon$1.close(JdbcRDD.scala:109)
at org.apache.spark.util.NextIterator.closeIfNeeded(NextIterator.scala:63)
at org.apache.spark.rdd.JdbcRDD$$anon$1$$anonfun$1.apply(JdbcRDD.scala:74)
at org.apache.spark.rdd.JdbcRDD$$anon$1$$anonfun$1.apply(JdbcRDD.scala:74)
at org.apache.spark.TaskContextImpl$$anon$1.onTaskCompletion(TaskContextImpl.scala:49)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:68)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:66)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:58)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:695)

Here is the definition of JobMapper:

import java.io.Serializable;
import java.sql.ResultSet;

import org.apache.log4j.Logger;
import org.apache.spark.rdd.JdbcRDD;

import scala.runtime.AbstractFunction1;

public class JobMapper extends AbstractFunction1<ResultSet, Object[]> implements Serializable {

    private static final Logger logger = Logger.getLogger(JobMapper.class);

    public Object[] apply(ResultSet row) {
        return JdbcRDD.resultSetToObjectArray(row);
    }
}

2 Answers:

Answer 0 (score: 1)

I figured out what I was doing wrong. A couple of things:

  1. It does not seem to work with version 4.0 of the driver, so I changed it to use version 3.0.
  2. The JdbcRDD documentation states that the SQL string must contain two parameters that mark the range of the query, so I had to change the query as shown below:

JdbcRDD<Object[]> jdbcRDD = new JdbcRDD<Object[]>(
        context.sc(), connection,
        "SELECT * FROM <table> WHERE Id >= ? AND Id <= ?",
        1, 20, 1,
        new JobMapper(),
        ClassManifestFactory$.MODULE$.fromClass(Object[].class));

    The parameters 1 and 20 define the range of the query.
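
For what it's worth, here is a rough sketch (my own addition, not part of the original answer) of how those bounds interact with the placeholders: JdbcRDD splits the [lowerBound, upperBound] range across numPartitions and binds each partition's sub-range to the two "?" markers.

// Sketch only: with numPartitions = 2, Spark splits the range [1, 20] into
// sub-ranges and binds them to the two "?" placeholders, so one partition
// runs the query with Id >= 1 AND Id <= 10 and the other with Id >= 11 AND Id <= 20.
JdbcRDD<Object[]> partitionedRDD = new JdbcRDD<Object[]>(
        context.sc(),
        connection,
        "SELECT * FROM <table> WHERE Id >= ? AND Id <= ?",
        1,   // lowerBound, bound to the first "?"
        20,  // upperBound, bound to the second "?"
        2,   // numPartitions: the range is split evenly across partitions
        new JobMapper(),
        ClassManifestFactory$.MODULE$.fromClass(Object[].class));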

Answer 1 (score: 1)

Note: this solution assumes you have the latest version of Spark (1.3.0). I have only tried this in standalone mode.

I ran into a similar issue, but here is how I got it working. First, make sure the SQL Server driver jar (sqljdbc40.jar) is placed in the following directory:

YOUR_SPARK_HOME/core/target/jars

This ensures that the driver is loaded when Spark computes its classpath.
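
If you would rather not modify the Spark build tree, an alternative (my own sketch, with a placeholder jar path) is to ship the driver jar with the application via SparkConf.setJars:

// Alternative sketch: distribute the driver jar with the application itself.
// "/path/to/sqljdbc40.jar" is a placeholder for wherever the jar lives locally.
SparkConf conf = new SparkConf()
        .setMaster("local")
        .setAppName("myapp")
        .setJars(new String[] { "/path/to/sqljdbc40.jar" });
JavaSparkContext sc = new JavaSparkContext(conf);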

Next, include the following in your code:

JavaSparkContext sc = new JavaSparkContext("local", appName); //master is set to local
SQLContext sqlContext = new SQLContext(sc);

//This url connection string is not complete (include your credentials or integrated security options)
String url = "jdbc:sqlserver://" + host + ":1433;DatabaseName=" + database;
String driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver";

//Settings for SQL Server jdbc connection
Map<String, String> options = new HashMap<>();
options.put("driver", driver);
options.put("url", url);
options.put("dbtable", tablename);

//Get table from SQL Server and save data in a DataFrame using JDBC
DataFrame jdbcDF = sqlContext.load("jdbc", options);
jdbcDF.printSchema();
long numRecords = jdbcDF.count();
System.out.println("Number of records in jdbcDF: " + numRecords);
jdbcDF.show();
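
From there the DataFrame can also be queried with Spark SQL; here is a minimal follow-up sketch (the temp-table name "mytable" is an arbitrary alias I chose):

// Register the DataFrame under a temporary name and query it with Spark SQL.
jdbcDF.registerTempTable("mytable");
DataFrame firstTen = sqlContext.sql("SELECT * FROM mytable LIMIT 10");
firstTen.show();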