MapReduce job on YARN exits with exitCode: -1000 because the resource changed on the src filesystem

Date: 2019-03-19 08:30:28

Tags: azure-storage yarn hadoop2 hdinsight

    Application application_1552978163044_0016 failed 5 times due to AM Container for appattempt_1552978163044_0016_000005 exited with exitCode: -1000

Diagnostics:

    java.io.IOException: Resource abfs://xxx@xxx.dfs.core.windows.net/hdp/apps/2.6.5.3006-29/mapreduce/mapreduce.tar.gz changed on src filesystem (expected 1552949440000, was 1552978240000). Failing this attempt. Failing the application.

1 answer:

Answer 0 (score: 0)

Judging only from the exception message, this appears to be caused by Azure Storage not preserving the original timestamp of a copied file. As a workaround, one suggestion I found is to change the yarn-common source code to disable the timestamp check performed when copying files, so that the exception is no longer thrown and the MR job can continue to run.

Here is the source code in the latest version of yarn-common that checks the timestamp of a copied file and throws the exception:

/**
   * Localize files.
   * @param destination destination directory
   * @throws IOException cannot read or write file
   * @throws YarnException subcommand returned an error
   */
  private void verifyAndCopy(Path destination)
      throws IOException, YarnException {
    final Path sCopy;
    try {
      sCopy = resource.getResource().toPath();
    } catch (URISyntaxException e) {
      throw new IOException("Invalid resource", e);
    }
    FileSystem sourceFs = sCopy.getFileSystem(conf);
    FileStatus sStat = sourceFs.getFileStatus(sCopy);
    if (sStat.getModificationTime() != resource.getTimestamp()) {
      throw new IOException("Resource " + sCopy +
          " changed on src filesystem (expected " + resource.getTimestamp() +
          ", was " + sStat.getModificationTime());
    }
    if (resource.getVisibility() == LocalResourceVisibility.PUBLIC) {
      if (!isPublic(sourceFs, sCopy, sStat, statCache)) {
        throw new IOException("Resource " + sCopy +
            " is not publicly accessible and as such cannot be part of the" +
            " public cache.");
      }
    }

    downloadAndUnpack(sCopy, destination);
  }
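To illustrate the suggested workaround, here is a minimal, self-contained sketch of what relaxing the check could look like. It uses plain `long` timestamps instead of Hadoop's `FileStatus`/`LocalResource` types, and the method and flag names (`acceptResource`, `ignoreMismatch`) are illustrative, not part of any Hadoop API:

```java
// Sketch: the timestamp comparison from verifyAndCopy(), with an optional
// escape hatch that logs a warning instead of rejecting the resource.
public class TimestampCheck {

    /**
     * Returns true if the resource should be accepted.
     * With ignoreMismatch=false this mirrors the original behavior,
     * where YARN throws an IOException on any mismatch.
     */
    static boolean acceptResource(long expectedTs, long actualTs,
                                  boolean ignoreMismatch) {
        if (expectedTs != actualTs) {
            if (!ignoreMismatch) {
                return false; // original behavior: reject the resource
            }
            // relaxed behavior: warn and carry on
            System.err.println("WARN: resource timestamp mismatch ignored"
                + " (expected " + expectedTs + ", was " + actualTs + ")");
        }
        return true;
    }

    public static void main(String[] args) {
        // Timestamps taken from the stack trace in the question.
        long expected = 1552949440000L;
        long actual = 1552978240000L;
        System.out.println(acceptResource(expected, actual, false)); // false
        System.out.println(acceptResource(expected, actual, true));  // true
    }
}
```

Note that patching yarn-common this way trades safety for convenience: the check exists to guarantee that every container localizes the exact file the client uploaded, so disabling it can mask a genuinely stale or replaced resource.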