I am trying to create a Hive table by extracting the schema from Avro data stored in S3. The data is written to S3 by the Kafka S3 sink connector, and I am publishing a simple POJO to the producer.
Code to extract the schema from the Avro data:
import os

for filename in os.listdir(temp_folder_path):
    filename = os.path.join(temp_folder_path, filename)
    if filename.endswith('.avro'):
        # avro-tools getschema prints the writer schema embedded in the container file
        os.system(
            'java -jar /path/to/avro-jar/avro-tools-1.8.2.jar getschema {0} > {1}'.format(
                filename, filename[:-len('.avro')] + '.avsc'))
The extracted schemas are then saved to an S3 bucket.
Table creation query:
CREATE EXTERNAL TABLE IF NOT EXISTS `db_name_service.table_name_change_log`
PARTITIONED BY (`year` bigint, `month` bigint, `day` bigint, `hour` bigint)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS
  INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION 's3://bucket/topics/topic_name'
TBLPROPERTIES ('avro.schema.url'='s3://bucket/schemas/topic_name.avsc');
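A hedged aside for once the DDL succeeds: partitions of an external table are not discovered automatically, so they have to be registered before queries return rows. Assuming the connector writes Hive-style `year=/month=/day=/hour=` directories (e.g. via a TimeBasedPartitioner `path.format`; the partition values and paths below are illustrative), something like:

```sql
-- Discover all partitions that follow the key=value directory layout
MSCK REPAIR TABLE db_name_service.table_name_change_log;

-- Or register a single partition explicitly when the layout is not key=value
ALTER TABLE db_name_service.table_name_change_log
  ADD IF NOT EXISTS PARTITION (`year`=2018, `month`=7, `day`=1, `hour`=0)
  LOCATION 's3://bucket/topics/topic_name/year=2018/month=7/day=1/hour=0';
```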
Error:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: MetaException(message:org.apache.hadoop.hive.serde2.avro.AvroSerdeException Schema for table must be of type RECORD. Received type: BYTES)
Schema:
{
  "type": "record",
  "name": "Employee",
  "doc": "Represents an Employee at a company",
  "fields": [
    {"name": "firstName", "type": "string", "doc": "The persons given name"},
    {"name": "nickName", "type": ["null", "string"], "default": null},
    {"name": "lastName", "type": "string"},
    {"name": "age", "type": "int", "default": -1},
    {"name": "phoneNumber", "type": "string"}
  ]
}
I can consume the data with this command:
./confluent-4.1.1/bin/kafka-avro-console-consumer --topic test2_singular --bootstrap-server localhost:9092 --from-beginning
{"firstName":"A:0","nickName":{"string":"C"},"lastName":"C","age":0,"phoneNumber":"123"}
{"firstName":"A:1","nickName":{"string":"C"},"lastName":"C","age":1,"phoneNumber":"123"}
Answer 0 (score: 1):
Schema for table must be of type RECORD. Received type: BYTES
The only way this happens is if your Connect sink is not configured to use the AvroConverter.
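For reference, a sketch of the relevant S3 sink connector settings (connector name, bucket, region, and Schema Registry URL below are placeholders, not taken from the question; the key lines are `format.class` and the Avro value converter, which needs a running Schema Registry):

```json
{
  "name": "s3-sink",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "test2_singular",
    "s3.bucket.name": "bucket",
    "s3.region": "us-east-1",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
    "flush.size": "1000",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://localhost:8081"
  }
}
```

With `ByteArrayConverter` (or a JSON converter feeding `AvroFormat`), the files land in S3 with a top-level `"bytes"` schema, which is exactly what the AvroSerDe error above is complaining about.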
You will also want to extract the schema from the S3 files themselves.
Tip: watching the bucket for Avro file creation with a Lambda function lets you grab the schema without scanning the entire bucket or picking a random file, and can also be used to notify Hive / AWS Glue of table schema updates.