I wrote the following Pig statements:
--Insert a new column based on filename
Data = LOAD '/user/cloudera/Source_Data' using PigStorage('\t','-tagFile');
Data_Schema = FOREACH Data GENERATE
(chararray)$1 AS Date,
(chararray)$2 AS ID,
(chararray)$3 AS Interval,
(chararray)$4 AS Code,
(chararray)$5 AS S_In,
(chararray)$6 AS S_Out,
(chararray)$7 AS C_In,
(chararray)$8 AS C_Out,
(chararray)$9 AS Traffic;
--Split into different directories
SPLIT Data_Schema INTO Src1 IF (Date == '2016-06-25.txt'),
Src2 IF (Date == '2014-07-31.txt'),
Src3 IF (Date == '2016-01-01.txt');
STORE Src1 INTO '/user/cloudera/Source_DatA/2016-06-25' using PigStorage('\t');
STORE Src2 INTO '/user/cloudera/Source_Data/2014-07-31.txt' using PigStorage('\t');
STORE Src2 INTO '/user/cloudera/Source_Data/2016-01-01' using PigStorage('\t');
Here is an example of my original source data:
10000 1388530800000 39 8.600870350350515 13.86183926855984 1.7218329193014124 3.424444103320796 25.972920214509095
But when I execute it, it runs successfully, yet the files in HDFS contain no data...
Note that I added a new column based on the file name. That is why there is one extra column in the FOREACH statement...
Answer 0 (score: 1):
If your input files are named 2016-06-25.txt, 2014-07-31.txt and 2016-01-01.txt, the newly added column will be referenced by $0, and it will contain the file name.
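As a quick check of where '-tagFile' puts the file name, a minimal probe like the sketch below (the relation names Sample and Probe are only for illustration; the input path is reused from the question) should show the file name as the leading field of every tuple:
Sample = LOAD '/user/cloudera/Source_Data' using PigStorage('\t','-tagFile');
Probe = FOREACH Sample GENERATE $0 AS SourceFile, $1 AS FirstDataField;
DUMP Probe;
--Expected shape of a tuple, e.g.: (2016-06-25.txt,10000)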
So you have to do this:
Data_Schema = FOREACH Data GENERATE
(chararray)$0 AS Date,
(chararray)$1 AS ID,
...
Or just specify the schema when loading the file, and keep the rest as it is:
Data = LOAD '/user/cloudera/Source_Data' using PigStorage('\t','-tagFile') as (Date:chararray, ID:chararray, ...);
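Putting it all together, a minimal corrected sketch of the whole script could look like the one below. The column names are taken from the question and simply shifted down by one index, the output paths follow the question (with the directory name normalized), and the third STORE is assumed to have been meant for Src3, since the question stores Src2 twice and never stores Src3:
--Load the files and tag each record with its source file name
Data = LOAD '/user/cloudera/Source_Data' using PigStorage('\t','-tagFile');
--$0 is the file name added by '-tagFile'; the data fields start at $1
Data_Schema = FOREACH Data GENERATE
    (chararray)$0 AS Date,
    (chararray)$1 AS ID,
    (chararray)$2 AS Interval,
    (chararray)$3 AS Code,
    (chararray)$4 AS S_In,
    (chararray)$5 AS S_Out,
    (chararray)$6 AS C_In,
    (chararray)$7 AS C_Out,
    (chararray)$8 AS Traffic;
--Split on the file-name column and write each part to its own directory
SPLIT Data_Schema INTO Src1 IF (Date == '2016-06-25.txt'),
                       Src2 IF (Date == '2014-07-31.txt'),
                       Src3 IF (Date == '2016-01-01.txt');
STORE Src1 INTO '/user/cloudera/Source_Data/2016-06-25' using PigStorage('\t');
STORE Src2 INTO '/user/cloudera/Source_Data/2014-07-31' using PigStorage('\t');
STORE Src3 INTO '/user/cloudera/Source_Data/2016-01-01' using PigStorage('\t');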