Unable to start the Cloudera Docker container on CentOS 7

Posted: 2017-12-02 10:36:16

Tags: docker cloudera

I can't start the Cloudera container on CentOS 7: the NameNode fails during startup. These are the steps I ran:

$ tar xzf cloudera-quickstart-vm-*-docker.tar.gz

$ docker import - cloudera/quickstart:latest < *.tar

$ docker run --hostname=quickstart.cloudera --privileged=true -t -i cloudera/quickstart:latest /usr/bin/docker-quickstart

I tried the same steps on my Ubuntu machine and everything ran fine there, so I wondered whether this was related to SELinux, but I disabled SELinux and still hit the same problem. What am I missing here? Below is the error from the NameNode log:

2017-12-02 10:06:28,762 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: XAttrs enabled? true
2017-12-02 10:06:28,762 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Maximum size of an xattr: 16384
2017-12-02 10:06:28,819 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/in_use.lock acquired by nodename 479@quickstart.cloudera
2017-12-02 10:06:28,884 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current
2017-12-02 10:06:29,103 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/edits_inprogress_0000000000000000001 -> /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/edits_0000000000000000001-0000000000000005340
2017-12-02 10:06:29,105 ERROR org.apache.hadoop.hdfs.server.common.Storage: Error reported on storage directory Storage Directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name
2017-12-02 10:06:29,105 WARN org.apache.hadoop.hdfs.server.common.Storage: About to remove corresponding storage: /var/lib/hadoop-hdfs/cache/hdfs/dfs/name
2017-12-02 10:06:29,106 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for (journal JournalAndStream(mgr=FileJournalManager(root=/var/lib/hadoop-hdfs/cache/hdfs/dfs/name), stream=null))
java.lang.IllegalStateException: Unable to finalize edits file /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/current/edits_inprogress_0000000000000000001
        at org.apache.hadoop.hdfs.server.namenode.FileJournalManager.finalizeLogSegment(FileJournalManager.java:152)
        at org.apache.hadoop.hdfs.server.namenode.FileJournalManager.recoverUnfinalizedSegments(FileJournalManager.java:423)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet$8.apply(JournalSet.java:624)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:621)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1478)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.initEditLog(FSImage.java:827)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:686)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:318)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1125)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:789)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
Caused by: EINVAL: Invalid argument
        at org.apache.hadoop.io.nativeio.NativeIO.renameTo0(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:880)
        at org.apache.hadoop.hdfs.server.namenode.FileJournalManager.finalizeLogSegment(FileJournalManager.java:149)
        ... 16 more
2017-12-02 10:06:29,107 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Disabling journal JournalAndStream(mgr=FileJournalManager(root=/var/lib/hadoop-hdfs/cache/hdfs/dfs/name), stream=null)
2017-12-02 10:06:29,107 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for too many journals
2017-12-02 10:06:29,107 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Skipping jas JournalAndStream(mgr=FileJournalManager(root=/var/lib/hadoop-hdfs/cache/hdfs/dfs/name), stream=null) since it's disabled
2017-12-02 10:06:29,107 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: Gap in transactions. Expected to be able to read up until at least txid 1 but unable to find any edit logs containing txid 1
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.checkForGaps(FSEditLog.java:1617)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1575)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:704)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:318)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1125)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:789)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
2017-12-02 10:06:29,109 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2017-12-02 10:06:29,209 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2017-12-02 10:06:29,209 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2017-12-02 10:06:29,210 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2017-12-02 10:06:29,210 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: Gap in transactions. Expected to be able to read up until at least txid 1 but unable to find any edit logs containing txid 1
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.checkForGaps(FSEditLog.java:1617)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1575)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:704)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:318)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1125)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:789)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
2017-12-02 10:06:29,211 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-12-02 10:06:29,212 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at quickstart.cloudera/172.17.0.2
************************************************************/
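For anyone checking the SELinux angle mentioned above, this is roughly how I verified and relaxed it (standard CentOS 7 commands; the `sed` edit assumes the stock `/etc/selinux/config` with `SELINUX=enforcing`):

```shell
# Show the current SELinux mode: Enforcing, Permissive, or Disabled
getenforce

# Switch to permissive mode for the current boot only (no reboot needed)
sudo setenforce 0

# To make the change survive reboots, set SELINUX=permissive in the config file
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
```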

1 Answer:

Answer 0 (score: 0)

Found a solution: the CentOS box I was working on was running a fairly old build. A `yum update` fixed the issue for me.
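For reference, the sequence was roughly the following (a sketch, assuming sudo access; the reboot is there so that a newly installed kernel takes effect, which matters for Docker's storage drivers on older CentOS 7 kernels):

```shell
# Pull in all available package and kernel updates
sudo yum update -y

# Reboot so an updated kernel and docker packages take effect
sudo reboot

# After the reboot, confirm the versions and retry the container
uname -r
docker --version
docker run --hostname=quickstart.cloudera --privileged=true -t -i \
    cloudera/quickstart:latest /usr/bin/docker-quickstart
```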

Hope this helps if you find yourself in the same situation.