I have several files, each with 250k lines. I am trying to load them:
apache_log = LOAD 'apache_log/httpd-www02-access.log.2014-03-17-16*' USING TextLoader AS (line:chararray);
apache_row = FOREACH apache_log GENERATE FLATTEN (
REGEX_EXTRACT_ALL
(line,'^".*?([\\d{1,3}.\\d{1,3}.\\d{1,3}.\\d{1,3}]*)" \\[(\\d{2}\\/\\w+\\/\\d{4}:\\d{2}:\\d{2}:\\d{2} \\+\\d{4})] (\\S+) (\\S+) "(.+?)" (\\S+) (\\S+) "([^"]*)" "(.*)" "(.*)"'))
AS (ip: chararray, datetime: chararray, session_id: chararray, time_of_request:chararray, request: chararray, status: chararray, size: chararray, referer : chararray, cookie: chararray, user_agent: chararray);
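REGEX_EXTRACT_ALL only produces a tuple of captured groups when the whole pattern matches the line; for a non-matching line it yields null, which is why the ARITY filter below is needed. The same pattern can be sanity-checked outside Pig, for example in Python, whose regex syntax agrees with Java's for this pattern (the sample log lines below are made up for illustration):

```python
import re

# Same pattern string as in the Pig REGEX_EXTRACT_ALL call,
# split across pieces only for readability.
PATTERN = re.compile(
    r'^".*?([\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3}]*)" '
    r'\[(\d{2}\/\w+\/\d{4}:\d{2}:\d{2}:\d{2} \+\d{4})] '
    r'(\S+) (\S+) "(.+?)" (\S+) (\S+) "([^"]*)" "(.*)" "(.*)"'
)

# Hypothetical well-formed and malformed log lines.
good = ('"10.0.0.1" [17/Mar/2014:16:00:01 +0000] sess42 0.002 '
        '"GET /index.html HTTP/1.1" 200 5043 "-" "JSESSIONID=abc" "Mozilla/5.0"')
bad = '"10.0.0.1" [17/Mar/2014] truncated line'

m = PATTERN.match(good)
print(len(m.groups()))     # 10 captured groups, one per target column
print(PATTERN.match(bad))  # None -- this line would produce a null tuple in Pig
```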
To make sure I get the required number of columns:
apache_row_good = FILTER apache_row by ARITY(*) == 10;
Finally, I want to store it in HCat:
store apache_row_good into 'apache_log' using org.apache.hcatalog.pig.HCatStorer();
The final table has the following columns:
ip
datetime
session_id
time_of_request
request
status
size
referer
cookie
user_agent
All of the above columns are of type string.
I am getting an error:
Input(s):
Failed to read data from "hdfs://hadoop1:8020/apache_log/httpd-www02-access.log.2014-03-17-16*"
Output(s):
Failed to produce result in "stage.atg_apache_log"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_201403071023_0027
Apparently some lines in my file do not match my regex.
But which ones?
How can I track them down?
Note that I have already successfully loaded many similar files (for different dates).
I would appreciate any advice, because I am stuck and I don't feel like checking every line of the file by hand...
Regards,
Pawel
Answer 0 (score: 0)
Common Hadoop distributions provide a "human" web interface for the JobTracker and TaskTracker.
Here is what it looks like, specifically for Hadoop 1.xx: Amazon Elastic MapReduce docs - see the "Viewing Task Logs" section.
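Beyond the task logs, one low-tech way to answer the original question ("which lines don't match?") is to pull a copy of the file down locally (e.g. with `hadoop fs -get`) and replay the same regex over it, printing only the offending lines. A Python sketch, assuming a local copy of the log file (the path on the command line is a placeholder):

```python
import re
import sys

# Same pattern as the Pig REGEX_EXTRACT_ALL call.
PATTERN = re.compile(
    r'^".*?([\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3}]*)" '
    r'\[(\d{2}\/\w+\/\d{4}:\d{2}:\d{2}:\d{2} \+\d{4})] '
    r'(\S+) (\S+) "(.+?)" (\S+) (\S+) "([^"]*)" "(.*)" "(.*)"'
)

def find_bad_lines(lines):
    """Return (line_number, line) pairs that the regex does not match."""
    return [(n, line.rstrip('\n'))
            for n, line in enumerate(lines, start=1)
            if PATTERN.match(line) is None]

if __name__ == '__main__':
    # Usage: python find_bad_lines.py httpd-www02-access.log.2014-03-17-16
    with open(sys.argv[1], encoding='utf-8', errors='replace') as f:
        for n, line in find_bad_lines(f):
            print(f'{n}: {line}')
```

Running this over the one file that fails should immediately show the handful of lines that make REGEX_EXTRACT_ALL return null.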