I am using a Python user-defined function in Apache Hive to change characters from lowercase to uppercase. I am getting the error "Hive Runtime Error while closing operators".
Here is the query I tried:
describe table1;
OK
item string
count int
city string
select * from table1;
aaa 1 tokyo
aaa 2 london
bbb 3 washington
ccc 4 moscow
ddd 5 bejing
From the table above, the item and city fields should be changed from lowercase to uppercase, and the count should be increased by 10.
The Python script used:
cat caseconvert.py
import sys
import string
for line in sys.stdin:
    line = line.strip()
    item,count,city=line.split('\t')
    ITEM1=item.upper()
    COUNT1=count+10
    CITY1=city.upper()
    print '\t'.join([ITEM1,str(COUNT1),FRUIT1])
Inserting the table1 data into table2:
create table table2(ITEM1 string, COUNT1 int, CITY1 string) row format delimited fields terminated by ',';
add FILE caseconvert.py
insert overwrite table table2 select TRANSFORM(item,count,city) using 'python caseconvert.py' as (ITEM1,COUNT1,CITY1) from table1;
When I execute this, I get the following error. I am not able to trace the problem. Can you tell me where it is going wrong?
Total MapReduce jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201508151858_0014, Tracking URL = http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201508151858_0014
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_201508151858_0014
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2015-08-15 22:24:06,212 Stage-1 map = 0%, reduce = 0%
2015-08-15 22:25:01,559 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201508151858_0014 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201508151858_0014
Examining task ID: task_201508151858_0014_m_000002 (and more) from job job_201508151858_0014
Task with the most failures(4):
-----
Task ID:
task_201508151858_0014_m_000000
URL:
http://localhost.localdomain:50030/taskdetails.jsp?jobid=job_201508151858_0014&tipid=task_201508151858_0014_m_000000
-----
Diagnostic Messages for this Task:
java.lang.RuntimeException: Hive Runtime Error while closing operators
at org.apache.hadoop.hive.ql.exec.ExecMapper.close(ExecMapper.java:224)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: [Error 20003]: An error occurred when trying to close the Operator running your custom script.
at org.apache.hadoop.hive.ql.exec.ScriptOperator.close(ScriptOperator.java:488)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:570)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:5
FAILED: Execution Error, return code 20003 from org.apache.hadoop.hive.ql.exec.MapRedTask. An error occurred when trying to close the Operator running your custom script.
MapReduce Jobs Launched:
Job 0: Map: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
Answer:
In the last line of your Python script, where you print the output to STDOUT, you reference FRUIT1 without ever defining it; it should be CITY1. You also import string but never use it. I wrote the script a bit differently:
import sys
import string
while True:
    line = sys.stdin.readline()
    if not line:
        break
    line = string.strip(line, '\n ')
    item,count,city=string.split(line, '\t')
    ITEM1=item.upper()
    COUNT1=int(count)+10  # count comes in as a string, so convert it before adding 10
    CITY1=city.upper()
    print '\t'.join([ITEM1,str(COUNT1),CITY1])
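As a quick sanity check before running it through Hive, you can pipe one tab-separated sample row into the script locally (assuming it is saved as caseconvert.py in your current directory):

printf 'aaa\t1\ttokyo\n' | python caseconvert.py
# expected output: AAA 11 TOKYO (tab-separated)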
Then I used a CREATE TABLE AS SELECT query (assuming both TABLE1 and your Python script exist in HDFS):
create table TABLE2
as select transform(item, count, city)
using 'hdfs:///user/username/caseconvert.py'
as (item1 string, count1 string, city1 string)
FROM TABLE1;
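If the script lives on the local filesystem instead (as in your original query), a roughly equivalent sketch would be to register it first and then call it through the Python interpreter. The local path below is only a placeholder:

-- the local path below is hypothetical
ADD FILE /home/username/caseconvert.py;
CREATE TABLE TABLE2 AS
SELECT TRANSFORM(item, count, city)
USING 'python caseconvert.py'
AS (item1 string, count1 int, city1 string)
FROM TABLE1;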
The CTAS approach worked for me. However, the transformation you need is much easier to do with Hive's built-in functions:
upper(string A): returns the string resulting from converting all characters of A to upper case. For example, upper('fOoBaR') results in 'FOOBAR'.
And for the count, you can simply do: (count + 10) AS count1.
So TABLE2 can be created as follows:
CREATE TABLE TABLE2
AS SELECT
UPPER(ITEM) AS ITEM1,
COUNT + 10 AS COUNT1,
UPPER(CITY) AS CITY1
FROM TABLE1;
Much less hassle than writing a custom UDF.