Error when inserting a JSON object into SnappyData via Java

Date: 2018-07-31 10:42:16

Tags: java arrays json maps snappydata

I have a table that uses a JSON object and an array as the data types of two of its fields. The table schema, created in Scala, looks like this:

snSession.sql("CREATE TABLE subscriber_new14 (ID int, skills Map<STRING,INTEGER>) USING column OPTIONS (PARTITION_BY 'ID', OVERFLOW 'true', EVICTION_BY 'LRUHEAPPERCENT')");

My Java code is:

PreparedStatement s2 = snappy.prepareStatement("insert into APP.SUBSCRIBER_NEW11(ID ,SKILLS ) values(?,?)");
JSONObject obj = new JSONObject();
obj.put(1, 1);
obj.put(2, 2);
// serialize the populated object (used later for the Blob workaround)
String str = obj.toString();
s2.setObject(26, obj);
s2.addBatch();
l1 = s2.executeBatch();

Executing this produces the error:

    SEVERE: null
java.sql.SQLException: (SQLState=XCL12 Severity=20000) An attempt was made to put a data value of type 'org.json.simple.JSONObject' into a data value of type 'Blob' for column '26'.
    at com.pivotal.gemfirexd.internal.shared.common.error.DefaultExceptionFactory30.getSQLException(DefaultExceptionFactory30.java:44)
    at com.pivotal.gemfirexd.internal.shared.common.error.DefaultExceptionFactory30.getSQLException(DefaultExceptionFactory30.java:63)
    at com.pivotal.gemfirexd.internal.shared.common.error.ExceptionUtil.newSQLException(ExceptionUtil.java:158)
    at io.snappydata.thrift.common.Converters.newTypeSetConversionException(Converters.java:3014)
    at io.snappydata.thrift.common.Converters.newTypeSetConversionException(Converters.java:3021)
    at io.snappydata.thrift.common.Converters$14.setObject(Converters.java:2126)
    at io.snappydata.thrift.common.Converters$21.setObject(Converters.java:2874)
    at io.snappydata.thrift.internal.ClientPreparedStatement.setObject(ClientPreparedStatement.java:611)
    at snappy.SnappyOps.upsert(SnappyOps.java:117)
    at snappy.Mailthread.DataPush(Mailthread.java:55)
    at snappy.Mailthread.run(Mailthread.java:36)
    at java.lang.Thread.run(Thread.java:748)

So I converted the JSON object to a Blob by adding:

 Blob blob = snappy.createBlob();
 blob.setBytes(1, str.getBytes());
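
The question doesn't show how the Blob is then bound to the statement, but presumably it used the standard JDBC call, keeping parameter index 26 from the original code:

    s2.setBlob(26, blob); // bind the JSON string bytes as a Blob

With this, the insert goes through, but the column then holds raw JSON text rather than SnappyData's internal map encoding, which appears to be the root of the read failure below.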

But when I retrieve the data back from SnappyData with the query

`select skills from subscriber_new11 limit 10;`

I get this error:

ERROR 38000: (SQLState=38000 Severity=20000) (Server=host1/103.18.248.32[1529] Thread=ThriftProcessor-0) The exception 'Job aborted due to stage failure: Task 0 in stage 18.0 failed 4 times, most recent failure: Lost task 0.3 in stage 18.0 (TID 29, host1, executor 103.18.248.32(332515):52609): java.lang.AssertionError: assertion failed
    at scala.Predef$.assert(Predef.scala:156)
    at org.apache.spark.sql.catalyst.util.SerializedMap.pointTo(SerializedMap.scala:78)
    at org.apache.spark.sql.execution.row.ResultSetDecoder.readMap(ResultSetDecoder.scala:134)
    at org.apache.spark.sql.execution.row.ResultSetDecoder.readMap(ResultSetDecoder.scala:32)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:180)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenRDD$$anon$2.hasNext(WholeStageCodegenExec.scala:571)
    at org.apache.spark.sql.execution.WholeStageCodegenRDD$$anon$1.hasNext(WholeStageCodegenExec.scala:508)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:389)
    at org.apache.spark.sql.CachedDataFrame$.apply(CachedDataFrame.scala:451)
    at org.apache.spark.sql.CachedDataFrame$.apply(CachedDataFrame.scala:409)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:95)
    at org.apache.spark.scheduler.Task.run(Task.scala:126)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:326)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at org.apache.spark.executor.SnappyExecutor$$anon$2$$anon$3.run(SnappyExecutor.scala:57)
    at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:' was thrown while evaluating an expression.

1 Answer:

Answer 0 (score: 0):

You can refer to the JDBCWithComplexTypes.scala class from the examples, which illustrates how to handle complex data types over a JDBC client connection. The array/map object should be serialized with ComplexTypeSerializer before setting the value on the PreparedStatement.
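
For reference, here is a minimal Java sketch of that approach, assuming the `ComplexTypeSerializer.create(table, column, connection)` / `serialize(...)` API demonstrated in JDBCWithComplexTypes.scala. The table and column names are taken from the question's schema; a plain `java.util.Map` stands in for the JSON object, since the `SKILLS` column is declared as `Map<STRING,INTEGER>`, and the connection URL is an assumption:

    import io.snappydata.ComplexTypeSerializer;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.HashMap;
    import java.util.Map;

    public class InsertMapSketch {
        public static void main(String[] args) throws Exception {
            // Connect to the SnappyData cluster (host/port are assumptions).
            Connection snappy = DriverManager.getConnection("jdbc:snappydata://localhost:1527/");

            // The value for the Map<STRING,INTEGER> column, standing in for the JSON object.
            Map<String, Integer> skills = new HashMap<>();
            skills.put("skill1", 1);
            skills.put("skill2", 2);

            // Serializer bound to the target table and column.
            ComplexTypeSerializer serializer =
                ComplexTypeSerializer.create("APP.SUBSCRIBER_NEW14", "SKILLS", snappy);

            PreparedStatement ps = snappy.prepareStatement(
                "insert into APP.SUBSCRIBER_NEW14 (ID, SKILLS) values (?, ?)");
            ps.setInt(1, 1);
            // Serialize the map into the binary form the column decoder expects,
            // instead of writing raw JSON bytes into the Blob.
            ps.setBytes(2, serializer.serialize(skills));
            ps.executeUpdate();
        }
    }

Writing the column this way stores the bytes in the internal encoding that the column decoder expects, which is presumably why the raw-JSON Blob written in the question fails the `SerializedMap.pointTo` assertion at read time. The same serializer should also handle deserializing the bytes read back from the column; see the read path in the linked example.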