org.apache.pig.piggybank.storage.apachelog.CombinedLogLoader does not load a file in combined log format

Time: 2016-02-15 21:43:05

Tags: hadoop apache-pig

I have an Apache combined-format log file stored in HDFS. Here is a sample of the first few lines:

123.125.67.216 - - [02/Jan/2012:00:48:27 -0800] "GET /wiki/Dekart HTTP/1.1" 200 4512 "-" "Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2" 
209.85.238.130 - - [02/Jan/2012:00:48:27 -0800] "GET /w/index.php?title=Special:RecentChanges&feed=atom HTTP/1.1" 304 260 "-" "Feedfetcher-Google; (+http://www.google.com/feedfetcher.html; 4 subscribers; feed-id=11568779694056348047)" 
123.125.67.213 - - [02/Jan/2012:00:48:33 -0800] "GET / HTTP/1.1" 301 433 "-" "Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2" 
123.125.67.214 - - [02/Jan/2012:00:48:33 -0800] "GET /wiki/Main_Page HTTP/1.1" 200 8647 "-" "Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2" 

I am trying to load this file in Apache Pig using the CombinedLogLoader from piggybank. This should work. Here is my sample code:

grunt> raw = LOAD 'log' USING org.apache.pig.piggybank.storage.apachelog.CombinedLogLoader AS (remoteAddr, remoteLogname, user, time, method, uri, proto, status, bytes, referer, userAgent);
16/02/15 21:39:38 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
grunt> dump raw;
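For reference, the piggybank jar does appear to be available on this cluster (the log below shows piggybank.jar being shipped to the DistributedCache), but on a plain Pig install the loader's jar has to be registered before the LOAD. A minimal sketch of the explicit form, assuming the jar path that appears in that log:

-- register the jar that contains CombinedLogLoader, then load as before
REGISTER /usr/lib/pig/lib/piggybank.jar;
raw = LOAD 'log' USING org.apache.pig.piggybank.storage.apachelog.CombinedLogLoader
      AS (remoteAddr, remoteLogname, user, time, method, uri, proto, status, bytes, referer, userAgent);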

I get 0 records, even though the file has thousands of them.

Below is my full output. What am I doing wrong?

162493 [main] INFO  org.apache.pig.tools.pigstats.ScriptState  - Pig features used in the script: UNKNOWN
16/02/15 21:39:40 INFO pigstats.ScriptState: Pig features used in the script: UNKNOWN
16/02/15 21:39:40 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
162551 [main] WARN  org.apache.pig.data.SchemaTupleBackend  - SchemaTupleBackend has already been initialized
16/02/15 21:39:40 WARN data.SchemaTupleBackend: SchemaTupleBackend has already been initialized
162551 [main] INFO  org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer  - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
16/02/15 21:39:40 INFO optimizer.LogicalPlanOptimizer: {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
162559 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler  - File concatenation threshold: 100 optimistic? false
16/02/15 21:39:40 INFO mapReduceLayer.MRCompiler: File concatenation threshold: 100 optimistic? false
162562 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer  - MR plan size before optimization: 1
16/02/15 21:39:40 INFO mapReduceLayer.MultiQueryOptimizer: MR plan size before optimization: 1
162562 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer  - MR plan size after optimization: 1
16/02/15 21:39:40 INFO mapReduceLayer.MultiQueryOptimizer: MR plan size after optimization: 1
16/02/15 21:39:40 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
16/02/15 21:39:40 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
162586 [main] INFO  org.apache.pig.tools.pigstats.mapreduce.MRScriptState  - Pig script settings are added to the job
16/02/15 21:39:40 INFO mapreduce.MRScriptState: Pig script settings are added to the job
162586 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler  - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
162587 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler  - This job cannot be converted run in-process
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: This job cannot be converted run in-process
162611 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler  - Added jar file:/usr/lib/pig/lib/piggybank.jar to DistributedCache through /tmp/temp2003065886/tmp2039083441/piggybank.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/pig/lib/piggybank.jar to DistributedCache through /tmp/temp2003065886/tmp2039083441/piggybank.jar
162651 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler  - Added jar file:/usr/lib/pig/pig-0.14.0-amzn-0-core-h2.jar to DistributedCache through /tmp/temp2003065886/tmp551968774/pig-0.14.0-amzn-0-core-h2.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/pig/pig-0.14.0-amzn-0-core-h2.jar to DistributedCache through /tmp/temp2003065886/tmp551968774/pig-0.14.0-amzn-0-core-h2.jar
162670 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler  - Added jar file:/usr/lib/pig/lib/automaton-1.11-8.jar to DistributedCache through /tmp/temp2003065886/tmp710362688/automaton-1.11-8.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/pig/lib/automaton-1.11-8.jar to DistributedCache through /tmp/temp2003065886/tmp710362688/automaton-1.11-8.jar
162689 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler  - Added jar file:/usr/lib/pig/lib/antlr-runtime-3.4.jar to DistributedCache through /tmp/temp2003065886/tmp-1076004022/antlr-runtime-3.4.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/pig/lib/antlr-runtime-3.4.jar to DistributedCache through /tmp/temp2003065886/tmp-1076004022/antlr-runtime-3.4.jar
162714 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler  - Added jar file:/usr/lib/hadoop/lib/guava-11.0.2.jar to DistributedCache through /tmp/temp2003065886/tmp1810740836/guava-11.0.2.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/hadoop/lib/guava-11.0.2.jar to DistributedCache through /tmp/temp2003065886/tmp1810740836/guava-11.0.2.jar
162737 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler  - Added jar file:/usr/lib/hadoop-mapreduce/joda-time-2.8.1.jar to DistributedCache through /tmp/temp2003065886/tmp-1238145114/joda-time-2.8.1.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/hadoop-mapreduce/joda-time-2.8.1.jar to DistributedCache through /tmp/temp2003065886/tmp-1238145114/joda-time-2.8.1.jar
162752 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler  - Setting up single store job
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Setting up single store job
162753 [main] INFO  org.apache.pig.data.SchemaTupleFrontend  - Key [pig.schematuple] is false, will not generate code.
16/02/15 21:39:40 INFO data.SchemaTupleFrontend: Key [pig.schematuple] is false, will not generate code.
162753 [main] INFO  org.apache.pig.data.SchemaTupleFrontend  - Starting process to move generated code to distributed cacche
16/02/15 21:39:40 INFO data.SchemaTupleFrontend: Starting process to move generated code to distributed cacche
162753 [main] INFO  org.apache.pig.data.SchemaTupleFrontend  - Setting key [pig.schematuple.classes] with classes to deserialize []
16/02/15 21:39:40 INFO data.SchemaTupleFrontend: Setting key [pig.schematuple.classes] with classes to deserialize []
162776 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - 1 map-reduce job(s) waiting for submission.
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission.
16/02/15 21:39:40 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:40 WARN mapreduce.JobResourceUploader: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
16/02/15 21:39:40 INFO input.FileInputFormat: Total input paths to process : 1
162866 [JobControl] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil  - Total input paths (combined) to process : 1
16/02/15 21:39:40 INFO util.MapRedUtil: Total input paths (combined) to process : 1
16/02/15 21:39:40 INFO mapreduce.JobSubmitter: number of splits:1
16/02/15 21:39:40 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1455560055771_0007
16/02/15 21:39:40 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
16/02/15 21:39:40 INFO impl.YarnClientImpl: Submitted application application_1455560055771_0007
16/02/15 21:39:40 INFO mapreduce.Job: The url to track the job: http://ip-172-31-42-90.ec2.internal:20888/proxy/application_1455560055771_0007/
163278 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - HadoopJobId: job_1455560055771_0007
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: HadoopJobId: job_1455560055771_0007
163278 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - Processing aliases raw
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: Processing aliases raw
163278 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - detailed locations: M: raw[2,6],null[-1,-1] C:  R: 
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: detailed locations: M: raw[2,6],null[-1,-1] C:  R: 
163283 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - 0% complete
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: 0% complete
163283 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - Running jobs are [job_1455560055771_0007]
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: Running jobs are [job_1455560055771_0007]
177841 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - 50% complete
16/02/15 21:39:55 INFO mapReduceLayer.MapReduceLauncher: 50% complete
177841 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - Running jobs are [job_1455560055771_0007]
16/02/15 21:39:55 INFO mapReduceLayer.MapReduceLauncher: Running jobs are [job_1455560055771_0007]
16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
178506 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - 100% complete
16/02/15 21:39:56 INFO mapReduceLayer.MapReduceLauncher: 100% complete
178506 [main] INFO  org.apache.pig.tools.pigstats.mapreduce.SimplePigStats  - Script Statistics: 

HadoopVersion   PigVersion  UserId  StartedAt   FinishedAt  Features
2.7.1-amzn-0    0.14.0-amzn-0   hadoop  2016-02-15 21:39:40 2016-02-15 21:39:56 UNKNOWN

Success!

Job Stats (time in seconds):
JobId   Maps    Reduces MaxMapTime  MinMapTime  AvgMapTime  MedianMapTime   MaxReduceTime   MinReduceTime   AvgReduceTime   MedianReducetime    Alias   Feature Outputs
job_1455560055771_0007  1   0   5   5   5   5   0   0   0   0   raw MAP_ONLY    hdfs://ip-172-31-42-90.ec2.internal:8020/tmp/temp2003065886/tmp1853785276,

Input(s):
Successfully read 0 records (10040153 bytes) from: "hdfs://ip-172-31-42-90.ec2.internal:8020/user/hadoop/log"

Output(s):
Successfully stored 0 records in: "hdfs://ip-172-31-42-90.ec2.internal:8020/tmp/temp2003065886/tmp1853785276"

Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0

Job DAG:
job_1455560055771_0007


16/02/15 21:39:56 INFO mapreduce.SimplePigStats: Script Statistics: 

HadoopVersion   PigVersion  UserId  StartedAt   FinishedAt  Features
2.7.1-amzn-0    0.14.0-amzn-0   hadoop  2016-02-15 21:39:40 2016-02-15 21:39:56 UNKNOWN

Success!

Job Stats (time in seconds):
JobId   Maps    Reduces MaxMapTime  MinMapTime  AvgMapTime  MedianMapTime   MaxReduceTime   MinReduceTime   AvgReduceTime   MedianReducetime    Alias   Feature Outputs
job_1455560055771_0007  1   0   5   5   5   5   0   0   0   0   raw MAP_ONLY    hdfs://ip-172-31-42-90.ec2.internal:8020/tmp/temp2003065886/tmp1853785276,

Input(s):
Successfully read 0 records (10040153 bytes) from: "hdfs://ip-172-31-42-90.ec2.internal:8020/user/hadoop/log"

Output(s):
Successfully stored 0 records in: "hdfs://ip-172-31-42-90.ec2.internal:8020/tmp/temp2003065886/tmp1853785276"

Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0

Job DAG:
job_1455560055771_0007


16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
178606 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher  - Success!
16/02/15 21:39:56 INFO mapReduceLayer.MapReduceLauncher: Success!
16/02/15 21:39:56 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
178607 [main] INFO  org.apache.pig.data.SchemaTupleBackend  - Key [pig.schematuple] was not set... will not generate code.
16/02/15 21:39:56 INFO data.SchemaTupleBackend: Key [pig.schematuple] was not set... will not generate code.
16/02/15 21:39:56 INFO input.FileInputFormat: Total input paths to process : 1
178616 [main] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil  - Total input paths to process : 1
16/02/15 21:39:56 INFO util.MapRedUtil: Total input paths to process : 1
grunt> 

1 Answer:

Answer 0 (score: 0)

Make sure your log file is formatted correctly. I noticed that your log file contains a trailing space at the end of each line. Remove that space and run the same script. I ran the script myself, as shown below.
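One quick way to remove the trailing space is to rewrite the file before loading it. A sketch in Pig itself, assuming the input path shown in your log output and that a cleaned copy can be written next to it (RTRIM is a builtin in Pig 0.12 and later, so it is available on your 0.14 install):

-- strip trailing whitespace from every line and store a cleaned copy
lines   = LOAD '/user/hadoop/log' USING TextLoader() AS (line:chararray);
trimmed = FOREACH lines GENERATE RTRIM(line) AS line;
STORE trimmed INTO '/user/hadoop/log_clean' USING PigStorage();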

Script:

grunt> raw = LOAD '~/temp1.log' USING org.apache.pig.piggybank.storage.apachelog.CombinedLogLoader AS (remoteAddr, remoteLogname, user, time, method, uri, proto, status, bytes, referer, userAgent);
grunt> dump raw;
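If rewriting the file is not convenient, another route (not specific to CombinedLogLoader) is to load each line as plain text and pull the fields out with a regular expression that tolerates the trailing space. A sketch, assuming the standard combined log layout; the pattern and field names here are illustrative:

-- load raw lines and extract the combined-log fields with a regex that allows trailing whitespace;
-- lines that do not match the pattern come out as nulls and can be filtered afterwards
raw_lines = LOAD 'log' USING TextLoader() AS (line:chararray);
parsed = FOREACH raw_lines GENERATE FLATTEN(
             REGEX_EXTRACT_ALL(line,
                 '^(\\S+) (\\S+) (\\S+) \\[([^\\]]+)\\] "(\\S+) (\\S+) (\\S+)" (\\d+) (\\S+) "([^"]*)" "([^"]*)"\\s*$'))
         AS (remoteAddr, remoteLogname, user, time, method, uri, proto, status, bytes, referer, userAgent);
dump parsed;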