Say I have data1:
1 3
1 2
5 1
and data2:
2 3
2 4
Then I try to read them into Pig:
d1 = LOAD 'data1';
d2 = foreach d1 generate flatten(STRSPLIT($0, ' +')) as (f1:int,f2:int);
d3 = LOAD 'data2';
d4 = foreach d3 generate flatten(STRSPLIT($0, ' +')) as (f1:int,f2:int);
data = join d2 by f1, d4 by f2;
Then I get:
2013-08-04 00:48:26,032 [Thread-21] WARN org.apache.hadoop.mapred.LocalJobRunner - job_local_0005
java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Integer
at org.apache.pig.backend.hadoop.HDataType.getWritableComparableTypes(HDataType.java:85)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Map.collect(PigGenericMapReduce.java:112)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:285)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:278)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
Can anyone help me? Thanks.
Answer 0 (score: 7)
First, I'd define a simple schema for the input. From your example I assume the inputs are text files.
You get the ClassCastException because merely applying the schema (f1:int,f2:int) unfortunately doesn't perform any conversion. You need to explicitly cast the output of STRSPLIT to (tuple(int,int)) so that FLATTEN can produce f1:int and f2:int from it. I.e.:
d1 = LOAD 'data1' as (line:chararray);
d2 = foreach d1 generate flatten((tuple(int,int))(STRSPLIT($0, ' +')))
as (f1:int,f2:int);
d3 = LOAD 'data2' as (line:chararray);
d4 = foreach d3 generate flatten((tuple(int,int))(STRSPLIT($0, ' +')))
as (f1:int,f2:int);
data = join d2 by f1, d4 by f2;
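The reason the explicit cast is needed can be illustrated outside Pig with a minimal Python analogy (this is not Pig code; `re.split` and `int()` merely stand in for STRSPLIT and the `(tuple(int,int))` cast):

```python
import re

line = "1 3"
fields = re.split(r" +", line)   # ['1', '3'] -- splitting yields strings
# Merely labeling these as ints (the "as (f1:int,f2:int)" schema) would not
# convert them; joining on such a key as an integer is what blows up in Pig.
f1, f2 = (int(x) for x in fields)  # the explicit cast, like (tuple(int,int))
print(f1, f2)  # 1 3, now genuine integers usable as a join key
```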
Answer 1 (score: 0)
If you are using a UDF in Pig and hit this cast exception, then in addition to checking the Pig script, also check the UDF itself and make sure the type of the value it actually returns matches the type declared in @outputSchema.
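For a Jython UDF, that mismatch looks like the sketch below. The `outputSchema` decorator is normally supplied by Pig's scripting support; it is stubbed here only so the example is self-contained, and the function name is made up for illustration:

```python
# Stub of Pig's outputSchema decorator (provided by Pig when the script
# is registered with "REGISTER 'udf.py' USING jython AS myudf;")
def outputSchema(schema):
    def wrap(f):
        f.outputSchema = schema
        return f
    return wrap

@outputSchema("num:int")
def parse_num(s):
    # Returning s unchanged (a str) here would declare an int schema but
    # emit a string, producing the same ClassCastException at runtime.
    # The returned value must actually match the declared type:
    return int(s)
```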