Cassandra UDTs not working

Time: 2015-06-16 15:28:42

Tags: cassandra apache-spark apache-spark-sql datastax-java-driver spark-cassandra-connector

I have a Java application that uses Spark-1.4.0, Cassandra-2.1.5 and spark-cassandra-connector-1.4.0-M1.

In this application, I am trying to store a Java Message class that contains UDTs into a Cassandra table, either with the javaFunctions class or through a DataFrame.

First, with javaFunctions:

messages.foreachRDD(new Function2<JavaRDD<Message>, Time, Void>() {
    @Override
    public Void call(JavaRDD<Message> arg0, Time arg1) throws Exception {
        javaFunctions(arg0).writerBuilder(
                Properties.getString("spark.cassandra.keyspace"),
                Properties.getString("spark.cassandra.table"),
                mapToRow(Message.class)).saveToCassandra();
        return null;
    }
});
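The javaFunctions and mapToRow calls above come from the connector's Java API; as a reminder (these static imports are standard for the connector but were not shown in the original snippet), the code assumes:

    import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;
    import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapToRow;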

and then with a DataFrame:

messages.foreachRDD(new Function2<JavaRDD<Message>, Time, Void>() {
    @Override
    public Void call(JavaRDD<Message> arg0, Time arg1) throws Exception {

        SQLContext sqlContext = SparkConnection.getSqlContext();
        DataFrame df = sqlContext.createDataFrame(arg0, Message.class);

        df.write()
                .mode(SaveMode.Append)
                .option("keyspace", Properties.getString("spark.cassandra.keyspace"))
                .option("table", Properties.getString("spark.cassandra.table"))
                .format("org.apache.spark.sql.cassandra")
                .save();
        return null;
    }
});

But I get this error:

15/06/16 19:51:38 INFO CassandraConnector: Disconnected from Cassandra cluster: BDI Cassandra
15/06/16 19:51:39 WARN TaskSetManager: Lost task 0.0 in stage 4.0 (TID 4, 192.168.1.19): com.datastax.spark.connector.types.TypeConversionException: Cannot convert object null to com.datastax.spark.connector.UDTValue.
    at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:44)
    at com.datastax.spark.connector.types.TypeConverter$$anonfun$convert$1.apply(TypeConverter.scala:40)
    at com.datastax.spark.connector.types.UserDefinedType$$anon$1$$anonfun$convertPF$1.applyOrElse(UserDefinedType.scala:33)
    at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:40)
    at com.datastax.spark.connector.types.UserDefinedType$$anon$1.convert(UserDefinedType.scala:31)
    at com.datastax.spark.connector.writer.SqlRowWriter$$anonfun$readColumnValues$1.apply$mcVI$sp(SqlRowWriter.scala:21)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
    at com.datastax.spark.connector.writer.SqlRowWriter.readColumnValues(SqlRowWriter.scala:20)
    at com.datastax.spark.connector.writer.SqlRowWriter.readColumnValues(SqlRowWriter.scala:8)
    at com.datastax.spark.connector.writer.BoundStatementBuilder.bind(BoundStatementBuilder.scala:35)
    at com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:106)
    at com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:31)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at com.datastax.spark.connector.writer.GroupingBatchBuilder.foreach(GroupingBatchBuilder.scala:31)
    at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:135)
    at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:119)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:102)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:101)
    at com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:130)
    at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:101)
    at com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:119)
    at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:36)
    at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:36)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
    at org.apache.spark.scheduler.Task.run(Task.scala:70)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Previously, I had successfully saved Message objects into the Cassandra table using the mapper class:

MappingManager mapping = new MappingManager(session);
Mapper<Message> mapper = mapping.mapper(Message.class);
mapper.save(message);
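The Message bean itself is not shown in the question. Purely as a minimal sketch of the shape the driver's object mapper expects for a bean with a UDT field: the Address type, the ks keyspace, and all field and column names below are illustrative assumptions, not taken from the original post.

    import com.datastax.driver.mapping.annotations.Frozen;
    import com.datastax.driver.mapping.annotations.PartitionKey;
    import com.datastax.driver.mapping.annotations.Table;
    import com.datastax.driver.mapping.annotations.UDT;

    // Hypothetical UDT class (one top-level class per file); assumes the CQL type
    // "CREATE TYPE ks.address (street text, city text)" exists.
    @UDT(keyspace = "ks", name = "address")
    public class Address {
        private String street;
        private String city;

        public String getStreet() { return street; }
        public void setStreet(String street) { this.street = street; }
        public String getCity() { return city; }
        public void setCity(String city) { this.city = city; }
    }

    // Hypothetical table class; assumes
    // "CREATE TABLE ks.messages (id text PRIMARY KEY, address frozen<address>)".
    @Table(keyspace = "ks", name = "messages")
    public class Message {
        @PartitionKey
        private String id;

        @Frozen
        private Address address;

        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
        public Address getAddress() { return address; }
        public void setAddress(Address address) { this.address = address; }
    }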

1 Answer:

Answer 0 (score: 1)

This functionality does not exist in the 1.4.0 release. I have a patch at https://github.com/datastax/spark-cassandra-connector/pull/856, so this should be fixed in a future release.
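Until then, a possible workaround, sketched on the assumption that the driver's object mapper keeps working for Message (as the question reports): write each partition through the mapper instead of the connector's writer. SparkConnection.getSession() is a hypothetical helper, mirroring the question's SparkConnection.getSqlContext().

    import java.util.Iterator;
    import org.apache.spark.api.java.function.VoidFunction;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.mapping.Mapper;
    import com.datastax.driver.mapping.MappingManager;

    // Inside the foreachRDD callback: save each partition via the driver mapper,
    // which already handles the UDT fields, instead of saveToCassandra().
    arg0.foreachPartition(new VoidFunction<Iterator<Message>>() {
        @Override
        public void call(Iterator<Message> it) throws Exception {
            Session session = SparkConnection.getSession(); // hypothetical helper
            Mapper<Message> mapper = new MappingManager(session).mapper(Message.class);
            while (it.hasNext()) {
                mapper.save(it.next());
            }
        }
    });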

If you have time, please test and comment on https://datastax-oss.atlassian.net/browse/SPARKC-271.