I have a custom class that I want to convert to JSON, but I'm running into a strange error here:
Exception in thread "main" scala.MatchError: (23,com.xxx.dts.dq.common.utils.DQOpsStoreProfileStatus@5f275ae4) (of class scala.Tuple2)
Here is the code:
implicit val formats = org.json4s.DefaultFormats
val A = Serialization.write(resultsMap)
println(A)
Now, if I do a foreach:
resultsMap.foreach(x => println(Serialization.write(x)))
I get some results, but they don't look right:
{"_1":23,"_2":{}}
{"_1":32,"_2":{}}
The tuples are missing the core information. I'm assuming the custom class we're using is causing some sort of problem? Is there any way around it?
If I pull the second element out of the map and convert it to JSON, it looks like this:
{"errorCode":null,"id":null,"fieldType":"STRING","fieldIndex":0,"datasetFieldName":"RECORD_ID","datasetFieldSum":0.0,"datasetFieldMin":0.0,"datasetFieldMax":0.0,"datasetFieldMean":0.0,"datasetFieldSigma":0.0,"datasetFieldNullCount":0.0,"datasetFieldObsCount":0.0,"datasetFieldKurtosis":0.0,"datasetFieldSkewness":0.0,"frequencyDistribution":"(D,4488)","runStatusId":null,"lakeHdfsPath":"/user/jvy234/20140817_011500_zoot_kohls_offer_init.dat"}
As a side note, this class is written in Java, in case that might be the culprit.
Full stack trace:
Exception in thread "main" scala.MatchError: (0,com.xxx.dts.dq.common.utils.DQOpsStoreProfileStatus@315a29f4) (of class scala.Tuple2)
at org.json4s.Extraction$.internalDecomposeWithBuilder(Extraction.scala:132)
at org.json4s.Extraction$.decomposeWithBuilder(Extraction.scala:67)
at org.json4s.Extraction$.decompose(Extraction.scala:194)
at org.json4s.jackson.Serialization$.write(Serialization.scala:22)
at com.xxx.dts.toolset.jsonWrite$.jsonClob(jsonWrite.scala:16)
at com.xxx.dts.dq.profiling.DQProfilingEngine.profile(DQProfilingEngine.scala:255)
at com.xxx.dts.dq.profiling.Profiler$.main(DQProfilingEngine.scala:64)
at com.xxx.dts.dq.profiling.Profiler.main(DQProfilingEngine.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Answer (score: 1)
I think you only have two options:
write a custom serializer for Tuple2, or wrap each tuple in a single-entry Map:
resultsMap.map(Map(_)).foreach(...)
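A minimal, self-contained sketch of the second option (assuming json4s is on the classpath; `resultsMap` here is a hypothetical stand-in with String keys and Int values, since the asker's real map holds a Java bean):

```scala
import org.json4s.DefaultFormats
import org.json4s.jackson.Serialization

object TupleWrapDemo {
  implicit val formats: DefaultFormats.type = DefaultFormats

  // Hypothetical stand-in for the real resultsMap from the question.
  val resultsMap = Map("recordId" -> 23, "errorCount" -> 32)

  // Wrapping each (key, value) pair in a single-entry Map sidesteps the
  // Tuple2 MatchError: json4s knows how to render a Map as a JSON object,
  // so each pair becomes {"recordId":23} instead of {"_1":...,"_2":...}.
  def pairsAsJson: List[String] =
    resultsMap.map(Map(_)).map(Serialization.write(_)).toList

  def main(args: Array[String]): Unit = pairsAsJson.foreach(println)
}
```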
Update: for the serializer, you can use something like this:
class Tuple2Serializer extends CustomSerializer[(String, Int)](format => (
  {
    // deserialize: a one-field object such as {"k": 1} becomes ("k", 1)
    case JObject(List(JField(k, JInt(v)))) => (k, v.toInt)
  },
  {
    // serialize: ("k", 1) must become a JValue, not another tuple
    case (s: String, t: Int) => JObject(List(JField(s, JInt(t))))
  }))
implicit val formats = org.json4s.DefaultFormats + new Tuple2Serializer
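Put together, a runnable sketch of the serializer approach (assuming json4s on the classpath; `writePair` is a hypothetical helper added here for illustration):

```scala
import org.json4s._
import org.json4s.jackson.Serialization

// Custom serializer for (String, Int) pairs: the deserializing branch
// pattern-matches the JObject's field list, and the serializing branch
// returns a JValue (not a tuple) so json4s can render it.
class Tuple2Serializer extends CustomSerializer[(String, Int)](_ => (
  { case JObject(List(JField(k, JInt(v)))) => (k, v.toInt) },
  { case (s: String, t: Int) => JObject(List(JField(s, JInt(t)))) }
))

object Tuple2SerializerDemo {
  implicit val formats: Formats = DefaultFormats + new Tuple2Serializer

  // With the serializer registered, a pair serializes as {"key":value}.
  def writePair(p: (String, Int)): String = Serialization.write(p)

  def main(args: Array[String]): Unit = println(writePair("recordId" -> 23))
}
```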