Need help!!!
I am using Flume to import a Twitter feed into HDFS and load it into Hive for analysis.
The steps are as follows:
Data in HDFS:
I described the Avro schema in an avsc file and put it into Hadoop:
{"type":"record",
"name":"Doc",
"doc":"adoc",
"fields":[{"name":"id","type":"string"},
{"name":"user_friends_count","type":["int","null"]},
{"name":"user_location","type":["string","null"]},
{"name":"user_description","type":["string","null"]},
{"name":"user_statuses_count","type":["int","null"]},
{"name":"user_followers_count","type":["int","null"]},
{"name":"user_name","type":["string","null"]},
{"name":"user_screen_name","type":["string","null"]},
{"name":"created_at","type":["string","null"]},
{"name":"text","type":["string","null"]},
{"name":"retweet_count","type":["boolean","null"]},
{"name":"retweeted","type":["boolean","null"]},
{"name":"in_reply_to_user_id","type":["long","null"]},
{"name":"source","type":["string","null"]},
{"name":"in_reply_to_status_id","type":["long","null"]},
{"name":"media_url_https","type":["string","null"]},
{"name":"expanded_url","type":["string","null"]}]}
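Before loading, it can help to sanity-check the declared field types in the .avsc file, since a single mismatched type union can break reads later. A minimal sketch (the schema JSON below is abbreviated to three of the fields above for brevity):

```python
import json

# Parse the Avro schema and list each field with its declared type,
# so unions like ["int","null"] can be checked at a glance.
schema = json.loads("""
{"type":"record","name":"Doc","doc":"adoc",
 "fields":[{"name":"id","type":"string"},
           {"name":"user_friends_count","type":["int","null"]},
           {"name":"retweet_count","type":["boolean","null"]}]}
""")

for field in schema["fields"]:
    print(field["name"], "->", field["type"])
```

Listing the fields this way makes oddities easy to spot, e.g. a count column declared as a boolean union.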
I wrote a .hql file to create a table and load the data into it:
create table tweetsavro
row format serde
'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
stored as inputformat
'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
outputformat
'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
tblproperties ('avro.schema.url'='hdfs:///avro_schema/AvroSchemaFile.avsc');
load data inpath '/test/twitter_data/FlumeData.*' overwrite into table tweetsavro;
I ran the .hql file successfully, but when I run the select * from <tablename>
command in Hive, it shows the following error:
The output of desc tweetsavro is:
hive> desc tweetsavro;
OK
id string
user_friends_count int
user_location string
user_description string
user_statuses_count int
user_followers_count int
user_name string
user_screen_name string
created_at string
text string
retweet_count boolean
retweeted boolean
in_reply_to_user_id bigint
source string
in_reply_to_status_id bigint
media_url_https string
expanded_url string
Time taken: 0.697 seconds, Fetched: 17 row(s)
Answer 0 (score: 6)
I faced the same issue. The problem lies in the timestamp field (the "created_at" column in your case), which I was trying to insert into my new table as a string. My assumption was that this data would be in the ["null","string"]
format in my source. I analyzed the source Avro schema generated by the sqoop import --as-avrodatafile process. The schema generated by the import had the following signature for the timestamp column:
{
"name" : "order_date",
"type" : [ "null", "long" ],
"default" : null,
"columnName" : "order_date",
"sqlType" : "93"
},
SqlType 93 represents the Timestamp data type, so in my target table's Avro schema file I changed the data type to "long", which solved the problem. My guess is that there may be a similar data type mismatch in one of your columns.
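If the column is kept as a long in the Avro schema, it can still be turned back into a readable timestamp at query time. Sqoop typically encodes sqlType 93 (Timestamp) values as epoch milliseconds; a minimal Python sketch of that conversion (assuming the milliseconds encoding):

```python
from datetime import datetime, timezone

def epoch_millis_to_timestamp(value):
    """Convert a sqoop-style epoch-milliseconds long to a UTC datetime."""
    return datetime.fromtimestamp(value / 1000, tz=timezone.utc)

# Example: 1 Jan 2017 00:00:00 UTC expressed in milliseconds.
print(epoch_millis_to_timestamp(1483228800000))  # → 2017-01-01 00:00:00+00:00
```

In Hive the equivalent would be something like from_unixtime(order_date div 1000), since from_unixtime expects seconds; the sketch above just shows the underlying arithmetic.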