Creating an RDD after retrieving data from a Cassandra DB

Asked: 2015-07-30 07:36:28

Tags: java cassandra apache-spark rdd

I am using Cassandra and Spark for my project, and this is what I currently write to retrieve data from the database:

 ResultSet results = session.execute("SELECT * FROM foo.test");

 ArrayList<String> supportList = new ArrayList<String>();
 // Flatten each Cassandra row into a comma-separated string.
 for (Row row : results) {
     supportList.add(row.getString("firstColumn") + "," + row.getString("secondColumn"));
 }
 JavaRDD<String> input = sparkContext.parallelize(supportList);
 JavaPairRDD<String, Double> tuple = input.mapToPair(new PairFunction<String, String, Double>() {
     public Tuple2<String, Double> call(String x) {
         String[] parts = x.split(",");
         // Pair the first column with a random value in [1, 30].
         return new Tuple2<String, Double>(parts[0], (double) (new Random().nextInt(30) + 1));
     }
 });

It works, but I would like to know whether there is a nicer way to write the code above. What I want to achieve is:

  • In Scala, I can retrieve and fill an RDD this way:

    val dataRDD = sc.cassandraTable[TableColumnNames]("keySpace", "table")

  • How can I write the same thing in Java, without using a support list or other "nasty" workarounds? (One possible sketch follows right after this list.)
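For reference, a minimal sketch of what a typed Java counterpart could look like with the connector's japi helpers. This assumes that CassandraJavaUtil.mapRowTo is available in the connector version used and that TableColumnNames is a serializable JavaBean whose properties match the column names; both are assumptions, not taken from the original post.

import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;
import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapRowTo;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class CassandraReadSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("cassandra-read-sketch")
                .set("spark.cassandra.connection.host", "127.0.0.1"); // assumed local Cassandra
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Maps each row directly to a TableColumnNames bean, no intermediate support list.
        JavaRDD<TableColumnNames> dataRDD =
                javaFunctions(sc).cassandraTable("keySpace", "table", mapRowTo(TableColumnNames.class));

        System.out.println(dataRDD.count());
        sc.stop();
    }
}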

UPDATE

// javaFunctions is the static import from com.datastax.spark.connector.japi.CassandraJavaUtil
JavaRDD<String> cassandraRowsRDD = javaFunctions(javaSparkContext).cassandraTable("keyspace", "table")
                .map(new Function<CassandraRow, String>() {
                    @Override
                    public String call(CassandraRow cassandraRow) throws Exception {
                        return cassandraRow.toString();
                    }
                });

On the line -> public String call(CassandraRow cassandraRow) I get this exception:

Exception in thread "main" org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:1623)
    at org.apache.spark.rdd.RDD.map(RDD.scala:286)
    at org.apache.spark.api.java.JavaRDDLike$class.map(JavaRDDLike.scala:89)
    at org.apache.spark.api.java.AbstractJavaRDDLike.map(JavaRDDLike.scala:46)
    at org.sparkexamples.cassandraExample.main.KMeans.executeQuery(KMeans.java:271)
    at org.sparkexamples.cassandraExample.main.KMeans.main(KMeans.java:67)
Caused by: java.io.NotSerializableException: org.sparkexamples.cassandraExample.main.KMeans
Serialization stack:
    - object not serializable (class: org.sparkexamples.cassandraExample.main.KMeans, value: org.sparkexamples.cassandraExample.main.KMeans@3015db78)
    - field (class: org.sparkexamples.cassandraExample.main.KMeans$2, name: this$0, type: class org.sparkexamples.cassandraExample.main.KMeans)
    - object (class org.sparkexamples.cassandraExample.main.KMeans$2, org.sparkexamples.cassandraExample.main.KMeans$2@5dbf5634)
    - field (class: org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1, name: fun$1, type: interface org.apache.spark.api.java.function.Function)
    - object (class org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1, <function1>)
    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:38)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:80)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:164)
    ... 7 more

Thanks in advance.

2 Answers:

Answer 0 (score: 4)

Take a look at this answer: RDD not serializable Cassandra/Spark connector java API

The problem is likely that the class enclosing the code block you showed is not serializable.
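A hedged sketch of what that means in practice (the class and method names below are illustrative, not taken from the post): an anonymous Function keeps a reference to its enclosing instance, so when Spark serializes the map closure it also tries to serialize the outer class. Making that outer class implement java.io.Serializable is the minimal change; moving the function out of the enclosing class avoids the capture entirely.

import java.io.Serializable;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;

import com.datastax.spark.connector.japi.CassandraRow;
import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;

// Illustrative name; the relevant part is "implements Serializable".
public class CassandraJob implements Serializable {

    public JavaRDD<String> executeQuery(JavaSparkContext sc) {
        // The anonymous Function captures "this", so the enclosing class
        // must itself be serializable for the closure to ship to executors.
        return javaFunctions(sc).cassandraTable("keyspace", "table")
                .map(new Function<CassandraRow, String>() {
                    @Override
                    public String call(CassandraRow cassandraRow) throws Exception {
                        return cassandraRow.toString();
                    }
                });
    }
}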

Answer 1 (score: 0)

I ran into the same problem. I implemented the Spark function interface in a separate class and passed that to the map call, and it worked.

Sample:

public class a implements Function { .... }

Use it in the map call:

..... map(new a())

That fixed it. There seems to be some issue with how Spark serializes anonymous classes.
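Applied to the code from the UPDATE above, a minimal sketch of that approach could look like the following; the class name RowToString is made up for illustration, and only the idea of a standalone function class comes from this answer.

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;

import com.datastax.spark.connector.japi.CassandraRow;

// A top-level (or static nested) class keeps no hidden reference to an outer
// instance, so Spark only has to serialize this small object. Note that
// org.apache.spark.api.java.function.Function already extends Serializable.
class RowToString implements Function<CassandraRow, String> {
    @Override
    public String call(CassandraRow cassandraRow) throws Exception {
        return cassandraRow.toString();
    }
}

and then use it from the map call:

JavaRDD<String> cassandraRowsRDD =
        javaFunctions(javaSparkContext).cassandraTable("keyspace", "table")
                .map(new RowToString());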