JobTracker - high memory and native thread usage

Date: 2015-02-10 17:20:58

Tags: google-hadoop

We are running Hadoop on GCE with HDFS as the default file system, and with job input/output data read from and written to GCS.

Hadoop version: 1.2.1. Connector version: com.google.cloud.bigdataoss:gcs-connector:1.3.0-hadoop1
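For reference, our jobs are wired up roughly along the lines of the following sketch (the bucket name and paths are placeholders, not our real ones): HDFS stays the default file system, while the data paths use the gs:// scheme provided by the connector.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class GcsIoJob {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(GcsIoJob.class);
        conf.setJobName("gcs-io-example");

        // Input and output live on GCS and go through the GCS connector;
        // fs.default.name itself still points at HDFS.
        FileInputFormat.setInputPaths(conf, new Path("gs://my-bucket/input"));
        FileOutputFormat.setOutputPath(conf, new Path("gs://my-bucket/output"));

        // The identity map/reduce defaults are fine for this sketch.
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);

        JobClient.runJob(conf);
      }
    }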

Observed behavior: the JT accumulates threads in a waiting state, eventually leading to an OOM:

2015-02-06 14:15:51,206 ERROR org.apache.hadoop.mapred.JobTracker: Job initialization failed:
java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:714)
        at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1371)
        at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel.initialize(AbstractGoogleAsyncWriteChannel.java:318)
        at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.create(GoogleCloudStorageImpl.java:275)
        at com.google.cloud.hadoop.gcsio.CacheSupplementedGoogleCloudStorage.create(CacheSupplementedGoogleCloudStorage.java:145)
        at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.createInternal(GoogleCloudStorageFileSystem.java:184)
        at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.create(GoogleCloudStorageFileSystem.java:168)
        at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.<init>(GoogleHadoopOutputStream.java:77)
        at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.create(GoogleHadoopFileSystemBase.java:655)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:564)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:545)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:452)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:444)
        at org.apache.hadoop.mapred.JobHistory$JobInfo.logSubmitted(JobHistory.java:1860)
        at org.apache.hadoop.mapred.JobInProgress$3.run(JobInProgress.java:709)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:706)
        at org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:3890)
        at org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:79)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

Looking through the JT logs, I found these warnings:

2015-02-06 14:30:17,442 WARN org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from primary datanode xx.xxx.xxx.xxx:50010
java.io.IOException: Call to /xx.xxx.xxx.xxx:50020 failed on local exception: java.io.IOException: Couldn't set up IO streams
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1150)
        at org.apache.hadoop.ipc.Client.call(Client.java:1118)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
        at com.sun.proxy.$Proxy10.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:414)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:392)
        at org.apache.hadoop.hdfs.DFSClient.createClientDatanodeProtocolProxy(DFSClient.java:201)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3317)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2200(DFSClient.java:2783)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2987)
Caused by: java.io.IOException: Couldn't set up IO streams
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642)
        at org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:205)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1249)
        at org.apache.hadoop.ipc.Client.call(Client.java:1093)
        ... 9 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:714)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:635)
        ... 12 more

This looks similar to the Hadoop bug reported here: https://issues.apache.org/jira/browse/MAPREDUCE-5606

I tried to work around it by disabling the saving of job logs to the output path, which solved the problem at the expense of the logs :)
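For concreteness, the workaround boils down to something like the sketch below (this assumes the standard Hadoop 1.x hadoop.job.history.user.location property, whose value "none" turns off the per-job history copy):

    import org.apache.hadoop.mapred.JobConf;

    public class DisableJobHistoryCopy {
      public static void main(String[] args) {
        JobConf conf = new JobConf();
        // "none" tells Hadoop 1.x not to copy the per-job history files into
        // the job output directory (which in our case is on GCS), at the cost
        // of losing those logs.
        conf.set("hadoop.job.history.user.location", "none");
        System.out.println(conf.get("hadoop.job.history.user.location"));
      }
    }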

I also ran jstack on the JT, and it showed hundreds of WAITING or TIMED_WAITING threads:

"pool-52-thread-1" prio=10 tid=0x00007feaec581000 nid=0x524f in Object.wait() [0x00007fead39b3000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x000000074d86ba60> (a java.io.PipedInputStream)
        at java.io.PipedInputStream.read(PipedInputStream.java:327)
        - locked <0x000000074d86ba60> (a java.io.PipedInputStream)
        at java.io.PipedInputStream.read(PipedInputStream.java:378)
        - locked <0x000000074d86ba60> (a java.io.PipedInputStream)
        at com.google.api.client.util.ByteStreams.read(ByteStreams.java:181)
        at com.google.api.client.googleapis.media.MediaHttpUploader.setContentAndHeadersOnCurrentRequest(MediaHttpUploader.java:629)
        at com.google.api.client.googleapis.media.MediaHttpUploader.resumableUpload(MediaHttpUploader.java:409)
        at com.google.api.client.googleapis.media.MediaHttpUploader.upload(MediaHttpUploader.java:336)
        at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
        at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:343)
        at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:460)
        at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel$UploadOperation.run(AbstractGoogleAsyncWriteChannel.java:354)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
   Locked ownable synchronizers:
        - <0x000000074d864918> (a java.util.concurrent.ThreadPoolExecutor$Worker)

It looks like the JT is struggling to keep up its communication with GCS through the GCS connector.

Please advise,

Thanks

1 Answer:

Answer 0 (score: 0):

Currently, each open FSDataOutputStream in the GCS connector for Hadoop consumes a thread until it is closed, because a separate thread needs to run the "resumable" HttpRequests while the user of the OutputStream writes bytes intermittently. In most cases (such as in individual Hadoop tasks), there is only a single long-lived output stream, plus possibly a few short-lived ones for writing small metadata/marker files, etc.
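As a rough illustration of that behavior (a sketch only, not the connector's internals; it assumes the GCS connector is on the classpath and configured for the gs:// scheme, and the bucket name is a placeholder), opening many output streams without closing them keeps one background upload thread alive per stream, so the JVM thread count climbs until close() is called:

    import java.lang.management.ManagementFactory;
    import java.net.URI;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OpenStreamThreadDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem gcs = FileSystem.get(URI.create("gs://my-bucket/"), conf);

        List<FSDataOutputStream> open = new ArrayList<FSDataOutputStream>();
        for (int i = 0; i < 50; i++) {
          // Each open stream keeps a background upload thread busy until the
          // stream is closed, per the explanation above.
          open.add(gcs.create(new Path("gs://my-bucket/tmp/leak-" + i)));
          System.out.println("open streams: " + open.size()
              + ", live JVM threads: "
              + ManagementFactory.getThreadMXBean().getThreadCount());
        }

        // Closing the streams releases the corresponding threads.
        for (FSDataOutputStream out : open) {
          out.close();
        }
      }
    }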

In general, there are two possible causes for the OOM you are running into:

  1. You have lots of queued-up jobs; each submitted job holds an unclosed OutputStream and therefore consumes one of these "waiting" threads. However, since you mention that you only queue up around 10 jobs, this should not be the root cause.
  2. Something is causing a "leak" of the PrintWriter objects that are initially created in logSubmitted and added to fileManager. Normally, terminal events such as logFinished will correctly close() all the PrintWriters before removing them from the map via markCompleted, but in theory there may be bugs here or there that could cause one of those OutputStreams to leak without being close()'d. For example, though I haven't had a chance to verify this assertion, it seems that an IOException while attempting something like logMetaInfo will "removeWriter" without closing it (a schematic sketch of this pattern is shown below).

I have verified that, at least under normal circumstances, the OutputStream seems to be closed correctly, and my sample JobTracker shows a clean jstack after having successfully run a lot of jobs.
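To make the suspected leak pattern in (2) concrete, here is a schematic sketch. It is not the actual JobHistory code, only the shape of the bug being described: writers are tracked in a map and are normally closed and removed on terminal events, but an error path that removes a writer without closing it would leave the underlying GCS-backed stream, and its upload thread, alive.

    import java.io.PrintWriter;
    import java.util.HashMap;
    import java.util.Map;

    // Schematic of the suspected leak pattern, not the real JobHistory class.
    public class WriterRegistrySketch {
      private final Map<String, PrintWriter> fileManager =
          new HashMap<String, PrintWriter>();

      // Analogous to logSubmitted: a writer (in the real code, backed by a
      // GCS output stream) is created and registered for the job.
      void logSubmitted(String jobId, PrintWriter writer) {
        fileManager.put(jobId, writer);
      }

      // Normal terminal path (analogous to logFinished + markCompleted):
      // close the writer, then drop it from the map.
      void logFinished(String jobId) {
        PrintWriter writer = fileManager.remove(jobId);
        if (writer != null) {
          writer.close();  // releases the underlying stream and its thread
        }
      }

      // Hypothetical error path: the writer is removed from the map but never
      // closed, so the underlying stream (and its upload thread) leaks.
      void removeWriterOnError(String jobId) {
        fileManager.remove(jobId);  // no close() -- the stream stays open
      }
    }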

TL;DR: there are some working theories as to why certain resources may leak and eventually prevent the necessary threads from being created. In the meantime, you should consider changing hadoop.job.history.user.location to some HDFS location in order to preserve the job logs without placing them on GCS.
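A minimal sketch of that change (the HDFS path below is a placeholder):

    import org.apache.hadoop.mapred.JobConf;

    public class KeepHistoryOnHdfs {
      public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Point the per-job history copy at an HDFS directory instead of the
        // (GCS-backed) job output directory; the path is a placeholder.
        conf.set("hadoop.job.history.user.location",
            "hdfs://namenode:8020/user/hadoop/job-history");
        System.out.println(conf.get("hadoop.job.history.user.location"));
      }
    }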