Here is the code:
package PigUDFpck;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.pig.EvalFunc;
import org.apache.pig.data.BagFactory;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

public class databag extends EvalFunc<DataBag> {
    TupleFactory mTupleFactory = TupleFactory.getInstance();
    BagFactory mBagFactory = BagFactory.getInstance();
    private DataBag result;
    private String delimiterType = ": Src / dest :";

    public DataBag exec(Tuple input) throws IOException {
        try {
            result = mBagFactory.newDefaultBag(); // change here
            result.add(input);
            getLogger().info("::::::: Entered try block ::::::::::::");
            // Indexes for source and destination rows (row key -> positions).
            ConcurrentHashMap<Object, ArrayList<Integer>> srcIndexMap = new ConcurrentHashMap<Object, ArrayList<Integer>>();
            ConcurrentHashMap<Object, ArrayList<Integer>> destIndexMap = new ConcurrentHashMap<Object, ArrayList<Integer>>();
            // Rows are converted to Object[] and collected here.
            ArrayList<Object[]> source = new ArrayList<Object[]>();
            ArrayList<Object[]> destination = new ArrayList<Object[]>();
            int srcCounter = 0;
            int destCounter = 0;
            ArrayList<Integer> Sourcearray = new ArrayList<Integer>();
            ArrayList<Integer> Destinationarray = new ArrayList<Integer>();
            for (Iterator<Tuple> iter = result.iterator(); iter.hasNext();) {
                Tuple t = iter.next(); // advance the iterator
                // some code here
            }
            return result;
        } catch (Exception e) {
            throw new IOException("databag UDF failed", e);
        }
    }
}
I am trying to iterate over the tuples in the data bag with the for loop, but for every input tuple all of the collections are re-initialized; in other words, execution starts again from the try block for each tuple.
Output:
INFO PigUDFpck.databag - ::::::: Entered try block ::::::::::::
PigUDFpck.databag - srcIndexMap={}
PigUDFpck.databag - inside main if loop skey=4
PigUDFpck.databag - destIndexMap.contains(skey)=false
PigUDFpck.databag - into else loop of main method
PigUDFpck.databag - ::::::: Entered try block ::::::::::::
PigUDFpck.databag - srcIndexMap={}
PigUDFpck.databag - inside main if loop skey=4
PigUDFpck.databag - destIndexMap.contains(skey)=false
PigUDFpck.databag - into else loop of main method
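This repetition is exactly what a per-row call pattern produces. As a minimal sketch (the CallCounter class and its counter field are purely illustrative, not part of my UDF), a stripped-down EvalFunc makes it visible that exec() is invoked once per input tuple of the FOREACH, so anything assigned inside exec() is rebuilt on every call:

import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

public class CallCounter extends EvalFunc<Integer> {
    // Instance fields live as long as the task, but everything
    // assigned inside exec() is redone on every invocation.
    private int calls = 0;

    @Override
    public Integer exec(Tuple input) throws IOException {
        calls++; // logs 1, 2, 3, ... one line per input row
        getLogger().info("exec() invocation #" + calls);
        return calls;
    }
}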
UPDATE
Pig Script
REGISTER /usr/local/pig/UDF/UDFBAG.jar;
sourcenew = LOAD 'hdfs://HADOOPMASTER:54310/DVTTest/Source1.txt' USING PigStorage(',') AS (ID:int, Name:chararray, FirstName:chararray, LastName:chararray, Vertical_Name:chararray, Vertical_ID:chararray, Gender:chararray, DOB:chararray, Degree_Percentage:chararray, Salary:chararray, StateName:chararray);
destnew = LOAD 'hdfs://HADOOPMASTER:54310/DVTTest/Destination1.txt' USING PigStorage(',') AS (ID:int, Name:chararray, FirstName:chararray, LastName:chararray, Vertical_Name:chararray, Vertical_ID:chararray, Gender:chararray, DOB:chararray, Degree_Percentage:chararray, Salary:chararray, StateName:chararray);
cogroupnew = COGROUP sourcenew BY ID inner, destnew BY ID inner;
diff_data = FOREACH cogroupnew GENERATE DIFF(sourcenew,destnew);
ids = FOREACH diff_data GENERATE FLATTEN($0);
id1 = DISTINCT( FOREACH ids GENERATE $0);
src = FILTER sourcenew BY ID == id1.$0;
finalsrc = FOREACH src GENERATE *, 'Source' as Source:chararray;
dest = FILTER destnew BY ID == id1.$0;
finaldest = FOREACH dest GENERATE *, 'Destination' as Destination:chararray;
final = UNION finalsrc, finaldest;
A = FOREACH final GENERATE PigUDFpck.databag(*);
DUMP A;
The input to the UDF is as follows:
(4,JOHN Hansel,JOHN,Hansel,Banking ,4,M,20-01-1994,78.65,345000,ArkansasSrc1,Source)
(4,JOHN Hansel,JOHN,Hansel,Banking ,4,M,20-01-1994,78.65,345000,ArkansaSrc2,Source)
(4,JOHN Hansel,JOHN,Hansel,Banking ,4,M,20-01-1994,78.65,345000,Arkansasdest1,Destination)
(4,JOHN Hansel,JOHN,Hansel,Banking ,4,M,20-01-1994,78.65,345000,Arkanssdest2,Destination)
Any help is much appreciated. Thanks in advance!
Answer 0 (score: 0)
Please understand that Pig is a DAG generator; it produces Map Reduce jobs based on that DAG.
Higher-level Pig constructs such as LOAD, FOREACH, and JOIN boil down to lower-level MR constructs:
> Load => Mapper in MR
> GENERATE => a function call in the Mapper or Reducer
> JOIN => SHUFFLE (Join in Map Reduce)
> Filter => Filter function in Map or Reduce
So the databag function is not called once; it is called many times, with each call executing inside the Mapper or the Reducer. The databag UDF runs once for every input row (whether that happens in the mapper or the reducer depends on which part of the plan the UDF lands in).
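The flip side: for the UDF to see all rows in a single call, the script must hand it a bag (for example via a GROUP) instead of invoking it per row. Here is a minimal sketch assuming a grouping step such as B = GROUP final ALL; A = FOREACH B GENERATE PigUDFpck.databag($1); (that grouping step is my assumption, not part of the poster's script):

import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.Tuple;

public class BagUDF extends EvalFunc<DataBag> {
    @Override
    public DataBag exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0) {
            return null;
        }
        // After a GROUP, field 0 of the input tuple is the whole bag,
        // so this exec() runs once per group rather than once per row.
        DataBag rows = (DataBag) input.get(0);
        for (Tuple t : rows) {
            // build index maps here; they are created once per bag
        }
        return rows;
    }
}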
Have a look at the EXPLAIN command in Pig, which shows the link between a Pig script and the underlying MR jobs. To understand it in more detail, follow:
Answer 1 (score: 0)
OK, this is a bit too big for a comment.
...
src = FILTER sourcenew BY ID == id1.$0;
finalsrc = FOREACH src GENERATE *, 'Source' as Source:chararray;
dest = FILTER destnew BY ID == id1.$0;
finaldest = FOREACH dest GENERATE *, 'Destination' as Source:chararray;
final = UNION finalsrc,finaldest ;
A = FOREACH (GROUP final BY ID) {
    src = FILTER final BY Source == 'Source';
    dest = FILTER final BY Source == 'Destination';
    GENERATE FLATTEN(PigUDFpck.databag(src, dest));
};
In this case the UDF receives a tuple containing two bags of tuples, which you can then compare against each other. Also, I am pretty sure this can be simplified (I mean you could do this union and group right after the loads: just generate a flag for each record telling you whether it is source or destination).
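With the grouped FOREACH above, the tuple handed to exec() carries the src bag as field 0 and the dest bag as field 1. A sketch of how the UDF could unpack them (variable names are illustrative and the comparison logic is elided):

import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.BagFactory;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.Tuple;

public class databag extends EvalFunc<DataBag> {
    private final BagFactory mBagFactory = BagFactory.getInstance();

    @Override
    public DataBag exec(Tuple input) throws IOException {
        if (input == null || input.size() < 2) {
            return null;
        }
        DataBag srcBag = (DataBag) input.get(0);  // rows flagged 'Source'
        DataBag destBag = (DataBag) input.get(1); // rows flagged 'Destination'
        DataBag result = mBagFactory.newDefaultBag();
        // exec() now runs once per ID group, so indexes built here
        // cover every row of the group instead of a single row.
        for (Tuple s : srcBag) {
            // compare s against the rows in destBag; add differences to result
        }
        return result;
    }
}

Since the source and destination rows share the same schema plus the flag column, the comparison inside the loop can match on any subset of the first eleven fields.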