Hadoop Streaming job in Python fails (Job not Successful)

Date: 2011-01-18 11:19:55

Tags: python streaming hadoop mapreduce

I'm trying to run a Map-Reduce job on Hadoop Streaming with Python scripts, and I get the same errors as Hadoop Streaming Job failed error in python, but those solutions didn't work for me.

My scripts run fine when I do:

cat sample.txt | ./p1mapper.py | sort | ./p1reducer.py
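
(For reference, here is what the mapper emits for a one-sentence input; the lines below were derived by hand from p1mapper.py further down:)

echo "This is a test." | ./p1mapper.py
this 1
hisi 1
isis 1
sisa 1
isat 1
sate 1
ates 1
test 1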

But when I run the following:

./bin/hadoop jar contrib/streaming/hadoop-0.20.2-streaming.jar \
    -input "p1input/*" \
    -output p1output \
    -mapper "python p1mapper.py" \
    -reducer "python p1reducer.py" \
    -file /Users/Tish/Desktop/HW1/p1mapper.py \
    -file /Users/Tish/Desktop/HW1/p1reducer.py

(Note: even if I remove "python", or type the full pathname for -mapper and -reducer, the result is the same)

This is the output I get:

packageJobJar: [/Users/Tish/Desktop/HW1/p1mapper.py, /Users/Tish/Desktop/CS246/HW1/p1reducer.py, /Users/Tish/Documents/workspace/hadoop-0.20.2/tmp/hadoop-unjar4363616744311424878/] [] /var/folders/Mk/MkDxFxURFZmLg+gkCGdO9U+++TM/-Tmp-/streamjob3714058030803466665.jar tmpDir=null
11/01/18 03:02:52 INFO mapred.FileInputFormat: Total input paths to process : 1
11/01/18 03:02:52 INFO streaming.StreamJob: getLocalDirs(): [tmp/mapred/local]
11/01/18 03:02:52 INFO streaming.StreamJob: Running job: job_201101180237_0005
11/01/18 03:02:52 INFO streaming.StreamJob: To kill this job, run:
11/01/18 03:02:52 INFO streaming.StreamJob: /Users/Tish/Documents/workspace/hadoop-0.20.2/bin/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201101180237_0005
11/01/18 03:02:52 INFO streaming.StreamJob: Tracking URL: http://www.glassdoor.com:50030/jobdetails.jsp?jobid=job_201101180237_0005
11/01/18 03:02:53 INFO streaming.StreamJob:  map 0%  reduce 0%
11/01/18 03:03:05 INFO streaming.StreamJob:  map 100%  reduce 0%
11/01/18 03:03:44 INFO streaming.StreamJob:  map 50%  reduce 0%
11/01/18 03:03:47 INFO streaming.StreamJob:  map 100%  reduce 100%
11/01/18 03:03:47 INFO streaming.StreamJob: To kill this job, run:
11/01/18 03:03:47 INFO streaming.StreamJob: /Users/Tish/Documents/workspace/hadoop-0.20.2/bin/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201101180237_0005
11/01/18 03:03:47 INFO streaming.StreamJob: Tracking URL: http://www.glassdoor.com:50030/jobdetails.jsp?jobid=job_201101180237_0005
11/01/18 03:03:47 ERROR streaming.StreamJob: Job not Successful!
11/01/18 03:03:47 INFO streaming.StreamJob: killJob...
Streaming Job Failed!

And for each failed/killed task attempt:

Map output lost, rescheduling: getMapOutput(attempt_201101181225_0001_m_000000_0,0) failed :
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find taskTracker/jobcache/job_201101181225_0001/attempt_201101181225_0001_m_000000_0/output/file.out.index in any of the configured local directories
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:389)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:138)
    at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2887)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
    at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:324)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
    at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
    at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)

Here are my Python scripts:

p1mapper.py

#!/usr/bin/env python

import sys
import re

SEQ_LEN = 4

eos = re.compile(r'(?<=[a-zA-Z])\.')   # a period preceded by a letter
ignore = re.compile(r'[\W\d]')         # non-word characters and digits

for line in sys.stdin:
    sentences = eos.split(line)        # split the line into sentences
    for sent in sentences:
        sent = ignore.sub('', sent)    # strip non-word characters and digits
        sent = sent.lower()
        if len(sent) >= SEQ_LEN:
            # emit every SEQ_LEN-character window with a count of 1
            for i in range(len(sent) - SEQ_LEN + 1):
                print '%s 1' % sent[i:i + SEQ_LEN]

p1reducer.py

#!/usr/bin/env python

from operator import itemgetter
import sys

word2count = {}

for line in sys.stdin:
    word, count = line.split(' ', 1)
    try:
        count = int(count)
        word2count[word] = word2count.get(word, 0) + count
    except ValueError:    # count was not a number
        pass

# sort
sorted_word2count = sorted(word2count.items(), key=itemgetter(1), reverse=True)

# write the top 3 sequences
for word, count in sorted_word2count[0:3]:
    print '%s\t%s' % (word, count)
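
(An aside, not the cause of the failure above: by default Hadoop Streaming splits map output into key and value at the first tab character, so the space-separated "sequence 1" lines make the whole line the key with an empty value. The job still groups identical lines correctly, but emitting key<TAB>value is the conventional contract. A minimal sketch of such an emit helper, assuming the reducer's split(' ', 1) is also changed to split('\t', 1) to match; "emit" is a hypothetical name, not part of the scripts above:)

import sys

def emit(key, count=1):
    # Write "key<TAB>count" so Streaming treats the sequence as the key
    # and the count as the value.
    sys.stdout.write('%s\t%d\n' % (key, count))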

Any help would be much appreciated, thanks!

UPDATE:

hdfs-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>
</configuration>

2 Answers

Answer 0 (score: 5)

You are missing a lot of configuration; you need to define directories and so forth (an example config is sketched after the quoted list below). See here:

http://wiki.apache.org/hadoop/QuickStart

Distributed operation is just like the pseudo-distributed operation described above, except:

  1. Specify the hostname or IP address of the master server in the values of fs.default.name and mapred.job.tracker in conf/hadoop-site.xml. These are specified as host:port pairs.
  2. Specify directories for dfs.name.dir and dfs.data.dir in conf/hadoop-site.xml. These are used to hold distributed filesystem data on the master node and slave nodes respectively. Note that dfs.data.dir may contain a space- or comma-separated list of directory names, so that data may be stored on multiple devices.
  3. Specify mapred.local.dir in conf/hadoop-site.xml. This determines where temporary MapReduce data gets written. It may also be a list of directories.
  4. Specify mapred.map.tasks and mapred.reduce.tasks in conf/mapred-default.xml. As a rule of thumb, use 10x the number of slave processors for mapred.map.tasks, and 2x the number of slave processors for mapred.reduce.tasks.
  5. List all slave hostnames or IP addresses in your conf/slaves file, one per line, and make sure the jobtracker is in your /etc/hosts file pointing to the jobtracker node.
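
On Hadoop 0.20.2 the properties above go into conf/hdfs-site.xml and conf/mapred-site.xml rather than conf/hadoop-site.xml. A minimal sketch of the directory settings; the /Users/Tish/hadoop/... paths are placeholders, so point them at real, writable directories:

<!-- hdfs-site.xml -->
<property>
  <name>dfs.name.dir</name>
  <value>/Users/Tish/hadoop/dfs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/Users/Tish/hadoop/dfs/data</value>
</property>

<!-- mapred-site.xml: mapred.local.dir is where intermediate map output
     is written; the "Could not find ... in any of the configured local
     directories" error above points at this setting -->
<property>
  <name>mapred.local.dir</name>
  <value>/Users/Tish/hadoop/mapred/local</value>
</property>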

Answer 1 (score: 0)

Well, I was stuck on the same problem for the last 2 days. The solution Joe provided in his other post worked well for me.

As a solution to your problem, I suggest:

1) Blindly, and I mean blindly, follow the instructions on how to set up a single-node cluster here (I assume you have already done so)

2) If anywhere you face a java.io.IOException: Incompatible namespaceIDs error (you will find it if you examine the logs), have a look here; a reset sketch follows at the end of this answer

3) Remove all the double quotes from your command; in your example you would run:

./bin/hadoop jar contrib/streaming/hadoop-0.20.2-streaming.jar \
    -input "p1input/*" \
    -output p1output \
    -mapper p1mapper.py \
    -reducer p1reducer.py \
    -file /Users/Tish/Desktop/HW1/p1mapper.py \
    -file /Users/Tish/Desktop/HW1/p1reducer.py

Ridiculous, but that's what I was stuck on for 2 whole days.
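
(For step 2, a common, and destructive, way to clear the Incompatible namespaceIDs state on a throwaway single-node setup; the data path below assumes the default hadoop.tmp.dir of /tmp/hadoop-<user>, and reformatting erases everything stored in HDFS:)

bin/stop-all.sh                      # stop all Hadoop daemons
rm -rf /tmp/hadoop-$USER/dfs/data    # wipe datanode storage (destroys HDFS data!)
bin/hadoop namenode -format          # reformat so the namespace IDs match again
bin/start-all.sh                     # bring the daemons back up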