Is there any tool that can create an AVRO schema from a "typical" JSON document?
For example:
{
"records":[{"name":"X1","age":2},{"name":"X2","age":4}]
}
I found that http://jsonschema.net/reboot/#/ generates a 'json-schema' like this:
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "id": "http://jsonschema.net#",
  "type": "object",
  "required": false,
  "properties": {
    "records": {
      "id": "#records",
      "type": "array",
      "required": false,
      "items": {
        "id": "#1",
        "type": "object",
        "required": false,
        "properties": {
          "name": {
            "id": "#name",
            "type": "string",
            "required": false
          },
          "age": {
            "id": "#age",
            "type": "integer",
            "required": false
          }
        }
      }
    }
  }
}
But I would like an AVRO version of it.
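For reference, a hand-written Avro schema for the document above would look roughly like this (the record names here are arbitrary placeholders):
{
  "type": "record",
  "name": "TopLevelRecord",
  "fields": [
    {
      "name": "records",
      "type": {
        "type": "array",
        "items": {
          "type": "record",
          "name": "RecordItem",
          "fields": [
            {"name": "name", "type": "string"},
            {"name": "age", "type": "int"}
          ]
        }
      }
    }
  ]
}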
Answer 0 (score: 4):
This is easy to do with Apache Spark and Python. First download a Spark distribution from http://spark.apache.org/downloads.html, then install the avro package for Python with pip. Then run pyspark with the avro package:
./bin/pyspark --packages com.databricks:spark-avro_2.11:3.1.0
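For the pip step mentioned above, something like the following is enough (depending on your Python version, the library has also been published on PyPI as avro-python3):
pip install avro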
Then use the following code in the pyspark shell (assuming input.json is a file containing one or more JSON documents, one per line):
import os, avro.datafile, avro.io

# Let Spark infer the schema from the JSON lines and write a single Avro file.
spark.read.json('input.json').coalesce(1).write.format("com.databricks.spark.avro").save("output.avro")

# Spark writes a part file inside output.avro; its exact prefix depends on the Spark version.
avrofile = [f for f in os.listdir('output.avro') if f.startswith('part-') and f.endswith('.avro')][0]

with open('output.avro/' + avrofile, 'rb') as f:
    reader = avro.datafile.DataFileReader(f, avro.io.DatumReader())
    # On newer avro releases this attribute is spelled writer_schema instead.
    print(reader.datum_reader.writers_schema)
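If you want to keep the inferred schema around, a small follow-up along these lines (not part of the original answer, and reusing the avrofile variable from the snippet above) writes it out as a pretty-printed .avsc file:
import json

with open('output.avro/' + avrofile, 'rb') as f:
    reader = avro.datafile.DataFileReader(f, avro.io.DatumReader())
    # str() on the schema object yields its JSON representation.
    schema = json.loads(str(reader.datum_reader.writers_schema))

with open('schema.avsc', 'w') as out:
    json.dump(schema, out, indent=2)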
For example, for an input file with the following contents:
{'string': 'somestring', 'number': 3.14, 'structure': {'integer': 13}}
{'string': 'somestring2', 'structure': {'integer': 14}}
the script will print the following schema (every field becomes a union with "null" because Spark marks all inferred fields as nullable):
{"fields": [{"type": ["double", "null"], "name": "number"}, {"type": ["string", "null"], "name": "string"}, {"type": [{"type": "record", "namespace": "", "name": "structure", "fields": [{"type": ["long", "null"], "name": "integer"}]}, "null"], "name": "structure"}], "type": "record", "name": "topLevelRecord"}