Optional array in an Avro schema

Date: 2012-02-23 17:09:40

Tags: arrays null optional avro

I'd like to know whether it is possible to have an optional array. Assume a schema like this:

{
    "type": "record",
    "name": "test_avro",
    "fields": [
        {"name": "test_field_1", "type": "long"},
        {"name": "subrecord", "type": [{
            "type": "record",
            "name": "subrecord_type",
            "fields": [{"name": "field_1", "type": "long"}]
        }, "null"]},
        {"name": "simple_array",
         "type": {
             "type": "array",
             "items": "string"
         }
        }
    ]
}

Trying to write an Avro record without "simple_array" results in an NPE in the datum writer. It works fine for the subrecord, but when I try to define the array as optional:

{"name": "simple_array",
 "type":[{
   "type": "array",
   "items": "string"
   }, "null"]

it no longer causes an NPE, but a runtime exception instead:

AvroRuntimeException: Not an array schema: [{"type":"array","items":"string"},"null"]

Thanks.

1 Answer:

Answer 0 (score: 17)

I think what you want is a union of null and the array. Note that "null" comes first in the union, which is what makes the "default": null on the field valid:

{
    "type":"record",
    "name":"test_avro",
    "fields":[{
            "name":"test_field_1",
            "type":"long"
        },
        {
            "name":"subrecord",
            "type":[{
                    "type":"record",
                    "name":"subrecord_type",
                    "fields":[{
                            "name":"field_1",
                            "type":"long"
                        }
                    ]
                },
                "null"
            ]
        },
        {
            "name":"simple_array",
            "type":["null",
                {
                    "type":"array",
                    "items":"string"
                }
            ],
            "default":null
        }
    ]
}

When I use the above schema with sample data in Python, the result is as follows (schema_string is the JSON string above):

>>> from avro import io, datafile, schema
>>> from json import dumps
>>> 
>>> sample_data = {'test_field_1':12L}
>>> rec_schema = schema.parse(schema_string)
>>> rec_writer = io.DatumWriter(rec_schema)
>>> rec_reader = io.DatumReader()
>>> 
>>> # write avro file
... df_writer = datafile.DataFileWriter(open("/tmp/foo", 'wb'), rec_writer, writers_schema=rec_schema)
>>> df_writer.append(sample_data)
>>> df_writer.close()
>>> 
>>> # read avro file
... df_reader = datafile.DataFileReader(open('/tmp/foo', 'rb'), rec_reader)
>>> print dumps(df_reader.next())
{"simple_array": null, "test_field_1": 12, "subrecord": null}