Unable to read a file from HDFS in Pig Latin

Date: 2015-05-28 20:11:46

Tags: hadoop apache-pig hdfs

I am having trouble loading a CSV file in Pig. I keep getting the following error:

Input(s):
Failed to read data from "hdfs://localhost:9000/user/der/1987.csv"

Output(s):
Failed to produce result in "hdfs://localhost:9000/user/der/totalmiles3"

Looking at the Hadoop HDFS installed on my local machine, I can see the file. In fact, the file exists in several locations, such as /, /user/, and so on.

hdfs dfs -ls /user/der
Found 1 items
-rw-r--r--   1 der supergroup  127162942 2015-05-28 12:42 
/user/der/1987.csv
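
For completeness, the same listing can be run against the full URI that appears in the error message (a minimal check, assuming the default filesystem really is hdfs://localhost:9000 as the error suggests):

    hdfs dfs -ls hdfs://localhost:9000/user/der/1987.csv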

My pig script is as follows:

records = LOAD '1987.csv' USING PigStorage(',') AS
    (Year, Month, DayofMonth, DayOfWeek, DepTime, CRSDepTime, ArrTime,
     CRSArrTime, UniqueCarrier, FlightNum, TailNum, ActualElapsedTime,
     CRSElapsedTime, AirTime, ArrDelay, DepDelay, Origin, Dest,
     Distance:int, TaxIn, TaxiOut, Cancelled, CancellationCode,
     Diverted, CarrierDelay, WeatherDelay, NASDelay, SecurityDelay,
     lateAircraftDelay);
milage_recs = GROUP records ALL;
tot_miles = FOREACH milage_recs GENERATE SUM(records.Distance);
STORE tot_miles INTO 'totalmiles3';

I ran pig with the -x local option. With -x local I was able to read the file from my local hard drive and got the correct answer, and tail -f on the Hadoop namenode log did not scroll, which confirms the file was being read locally from disk:

pig  -x local totalmiles.pig

Now I get the error. The Hadoop name server does seem to be receiving the request, because with tail -f I can see the log scrolling.

pig totalmiles.pig

records = LOAD '/user/der/1987.csv' USING PigStorage(',') AS

I get the following error:

Failed Jobs:
JobId   Alias   Feature Message Outputs
job_local602774674_0001 milage_recs,records,tot_miles   GROUP_BY,COMBINER   Message: ENOENT: No such file or directory
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:230)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:724)
    at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:502)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:600)
    at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:94)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:98)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:193)

...blah...

Input(s): 
Failed to read data from "/user/der/1987.csv" 

Output(s): 
Failed to produce result in "hdfs://localhost:9000/user/der/totalmiles3" 

I used hdfs to check the mkdir permissions, and that seems to be fine:

hdfs dfs -mkdir /user/der/temp2 
hdfs dfs -ls /user/der 

Found 3 items 
-rw-r--r--   1 der supergroup  127162942 2015-05-28 12:42  
/user/der/1987.csv 
drwxr-xr-x   - der supergroup          0 2015-05-28 16:21     
/user/der/temp2 
drwxr-xr-x   - der supergroup          0 2015-05-28 15:57 
/user/der/test 

I tried the mapreduce option and still get the same kind of error:

 pig -x mapreduce totalmiles.pig

5-05-28 20:58:44,608 [JobControl] INFO  org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob - PigLatin:totalmiles.pig while submitting

ENOENT: No such file or directory
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:230)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:724)
    at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:502)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:600)
    at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:94)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:98)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:193)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)

The hadoop.tmp.dir in my core-site.xml is as follows:

<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop</value>
    <description>A base for other temporary directories.</description>
</property>
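
Since the stack traces above fail inside RawLocalFileSystem.setPermission while job resources are being staged, it seems worth checking that this local hadoop.tmp.dir actually exists and is writable by the user running pig (a minimal sketch, assuming a bash shell on the same machine; the .write_test file name is only an illustration):

    ls -ld /usr/local/hadoop
    # the directory should exist and be writable by the user running pig (der)
    touch /usr/local/hadoop/.write_test && rm /usr/local/hadoop/.write_test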

And the namenode and datanode directories in my hdfs-site.xml are as follows:

<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/dfs/namenode</value>
</property>

<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/dfs/datanode</value>
</property>

I have debugged this a bit further. It looks like my namenode is misconfigured, because I cannot reformat it:

[hadoop hdfs formatting gets error failed for Block pool]
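
For reference, the usual namenode re-format sequence is below (a sketch only, assuming Hadoop's stock start/stop scripts are on the PATH; note that formatting wipes all HDFS metadata):

    stop-dfs.sh
    # destructive: erases the existing HDFS namespace
    hdfs namenode -format
    start-dfs.sh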

1 answer:

Answer 0: (score: 1)

You have to specify the Hadoop file path as /user/der/1987.csv:

records = LOAD '/user/der/1987.csv' USING PigStorage(',') AS
    (Year, Month, DayofMonth, DayOfWeek, DepTime, CRSDepTime, ArrTime,
     CRSArrTime, UniqueCarrier, FlightNum, TailNum, ActualElapsedTime,
     CRSElapsedTime, AirTime, ArrDelay, DepDelay, Origin, Dest,
     Distance:int, TaxIn, TaxiOut, Cancelled, CancellationCode,
     Diverted, CarrierDelay, WeatherDelay, NASDelay, SecurityDelay,
     lateAircraftDelay);

If you want to test with a relative path, make sure the file 1987.csv can be found in the directory from which you run the pig script, i.e. keep 1987.csv and the .pig file in the same location.
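
As a minimal sketch of that local test (assuming 1987.csv and totalmiles.pig sit in the same working directory, so the relative LOAD '1987.csv' resolves against that directory):

    # run from the directory that contains both files
    pig -x local totalmiles.pig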