Circe fails to convert raw JSON to a case class. Error: could not find Lazy implicit value of type io.circe.generic.decoding.DerivedDecoder

Time: 2019-01-11 16:00:20

Tags: json scala kubernetes case-class

I have defined several case classes to represent this JSON, but since there are many nested case classes I'm not sure I've done it correctly. Entities such as spec and metadata are of type JSONObject, as is the custom object itself.

Here are all the classes I have defined:

  case class CustomObject(apiVersion: String, kind: String, metadata: Metadata, spec: Spec, labels: Object, version: String)

  case class Metadata(creationTimestamp: String, generation: Int, uid: String, resourceVersion: String, name: String, namespace: String, selfLink: String)

  case class Spec(mode: String, image: String, imagePullPolicy: String, mainApplicationFile: String, mainClass: String, deps: Deps, driver: Driver, executor: Executor, subresources: Subresources)

  case class Driver(cores: Double, coreLimit: String, memory: String, serviceAccount: String, labels: Labels)

  case class Executor(cores: Double, instances: Double, memory: String, labels: Labels)

  case class Labels(version: String)

  case class Subresources(status: Status)

  case class Status()

  case class Deps()

Here is the JSON structure of the custom K8s object I need to convert:

{
    "apiVersion": "sparkoperator.k8s.io/v1alpha1",
    "kind": "SparkApplication",
    "metadata": {
        "creationTimestamp": "2019-01-11T15:58:45Z",
        "generation": 1,
        "name": "spark-example",
        "namespace": "default",
        "resourceVersion": "268972",
        "selfLink": "/apis/sparkoperator.k8s.io/v1alpha1/namespaces/default/sparkapplications/spark-example",
        "uid": "uid"
    },
    "spec": {
        "deps": {},
        "driver": {
            "coreLimit": "1000m",
            "cores": 0.1,
            "labels": {
                "version": "2.4.0"
            },
            "memory": "1024m",
            "serviceAccount": "default"
        },
        "executor": {
            "cores": 1,
            "instances": 1,
            "labels": {
                "version": "2.4.0"
            },
            "memory": "1024m"
        },
        "image": "gcr.io/ynli-k8s/spark:v2.4.0,
        "imagePullPolicy": "Always",
        "mainApplicationFile": "http://localhost:8089/spark_k8s_airflow.jar",
        "mainClass": "org.apache.spark.examples.SparkExample",
        "mode": "cluster",
        "subresources": {
            "status": {}
        },
        "type": "Scala"
    }
}

Update: I want to use Circe to convert the JSON to case classes, but with these classes I get this error:

Error: could not find Lazy implicit value of type io.circe.generic.decoding.DerivedDecoder[dataModel.CustomObject]
    implicit val customObjectDecoder: Decoder[CustomObject] = deriveDecoder[CustomObject]

I have defined implicit decoders for all of the case classes:

    implicit val customObjectLabelsDecoder: Decoder[Labels] = deriveDecoder[Labels]
    implicit val customObjectSubresourcesDecoder: Decoder[Subresources] = deriveDecoder[Subresources]
    implicit val customObjectDepsDecoder: Decoder[Deps] = deriveDecoder[Deps]
    implicit val customObjectStatusDecoder: Decoder[Status] = deriveDecoder[Status]
    implicit val customObjectExecutorDecoder: Decoder[Executor] = deriveDecoder[Executor]
    implicit val customObjectDriverDecoder: Decoder[Driver] = deriveDecoder[Driver]
    implicit val customObjectSpecDecoder: Decoder[Spec] = deriveDecoder[Spec]
    implicit val customObjectMetadataDecoder: Decoder[Metadata] = deriveDecoder[Metadata]
    implicit val customObjectDecoder: Decoder[CustomObject] = deriveDecoder[CustomObject]

2 Answers:

Answer 0 (score: 2)

The reason you can't derive decoding for CustomObject is the labels: Object member.

In circe all decoding is driven by static types, and circe does not provide encoders or decoders for types like Object or Any, which carry no useful static information.
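(Side sketch, not part of the original answer: if you did need to keep a labels field, one workaround is to give it a concrete type that circe knows how to decode, e.g. Map[String, String] or io.circe.Json. The variant below is hypothetical and assumes the Metadata and Spec decoders from the question are in scope.)

    import io.circe.{Decoder, Json}
    import io.circe.generic.semiauto.deriveDecoder

    // Hypothetical variant: Option[Json] accepts any JSON shape and
    // tolerates a missing "labels" field (decoded as None).
    case class CustomObjectWithLabels(
      apiVersion: String,
      kind: String,
      metadata: Metadata,
      spec: Spec,
      labels: Option[Json]
    )

    implicit val customObjectWithLabelsDecoder: Decoder[CustomObjectWithLabels] =
      deriveDecoder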

The simpler fix here, though, is to drop the field. If you change your case class to the following:

case class CustomObject(apiVersion: String, kind: String, metadata: Metadata, spec: Spec)

...and leave the rest of your code as is, with the imports:

import io.circe.Decoder, io.circe.generic.semiauto.deriveDecoder

and define your JSON document as doc (adding the missing closing quote on the "image": "gcr.io/ynli-k8s/spark:v2.4.0, line so that it's valid JSON), the following should work just fine:

scala> io.circe.jawn.decode[CustomObject](doc)
res0: Either[io.circe.Error,CustomObject] = Right(CustomObject(sparkoperator.k8s.io/v1alpha1,SparkApplication,Metadata(2019-01-11T15:58:45Z,1,uid,268972,spark-example,default,/apis/sparkoperator.k8s.io/v1alpha1/namespaces/default/sparkapplications/spark-example),Spec(cluster,gcr.io/ynli-k8s/spark:v2.4.0,Always,http://localhost:8089/spark_k8s_airflow.jar,org.apache.spark.examples.SparkExample,Deps(),Driver(0.1,1000m,1024m,default,Labels(2.4.0)),Executor(1.0,1.0,1024m,Labels(2.4.0)),Subresources(Status()))))
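(As a usage sketch, you would then typically pattern match on the resulting Either:)

    io.circe.jawn.decode[CustomObject](doc) match {
      case Right(obj) => println(obj.spec.mainApplicationFile)
      case Left(err)  => println(s"Failed to decode: $err")
    }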

Contrary to one of the other answers, circe can definitely derive encoders and decoders for case classes with no members; that's definitely not the issue here.
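(A quick sketch demonstrating that, assumed rather than quoted from the original answer:)

    import io.circe.Decoder, io.circe.generic.semiauto.deriveDecoder

    case class Status() // no members, as in the question

    implicit val statusDecoder: Decoder[Status] = deriveDecoder

    io.circe.jawn.decode[Status]("{}") // Right(Status())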

As a side note, I wish there were better error messages than this:

Error: could not find Lazy implicit value of type io.circe.generic.decoding.DerivedDecoder[dataModel.CustomObject]

...but given the way circe-generic currently has to use Shapeless's Lazy, this is the best we can get. You can try circe-derivation, a mostly drop-in alternative to circe-generic's semi-automatic derivation that has better error messages (and some other advantages), or you can use a compiler plugin like splain that's designed to give better error messages even in the presence of things like shapeless.Lazy.
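(For illustration, the circe-derivation switch is roughly the following; the sbt coordinates and version are assumptions, so check the project's README for the current ones:)

    // build.sbt (version placeholder, not a pinned release):
    // libraryDependencies += "io.circe" %% "circe-derivation" % "<version>"

    import io.circe.Decoder
    import io.circe.derivation.deriveDecoder // instead of io.circe.generic.semiauto

    implicit val customObjectDecoder: Decoder[CustomObject] = deriveDecoder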

One last note: you can clean up the semi-automatic definitions a bit by letting the type parameter on deriveDecoder be inferred:

implicit val customObjectLabelsDecoder: Decoder[Labels] = deriveDecoder

This is entirely a matter of taste, but I find it a little less noisy to read.

Answer 1 (score: -2)

Looks correct to me. Are you running into any issues?