Cannot copy from HDFS to S3A

Date: 2019-08-19 10:38:17

Tags: java hadoop amazon-s3 hdfs

I have a class that copies the contents of a directory from one location to another using Hadoop's FileUtil:

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
class Folder {
    private final FileSystem fs;
    private final Path pth;

    // ... constructors and other methods

    /**
     * Copy contents (files and files in subfolders) to another folder.
     * Merges overlapping folders
     * Overwrites already existing files
     * @param destination Folder where content will be moved to
     * @throws IOException If fails
     */
    public void copyFilesTo(final Folder destination) throws IOException {
        final RemoteIterator<LocatedFileStatus> iter = this.fs.listFiles(
            this.pth,
            true
        );
        final URI root = this.pth.toUri();
        while (iter.hasNext()) {
            final Path source = iter.next().getPath();
            FileUtil.copy(
                this.fs,
                source,
                destination.fs,
                new Path(
                    destination.pth,
                    root.relativize(source.toUri()).toString()
                ),
                false,
                true,
                this.fs.getConf()
            );
        }
    }
}

This class works fine with local (file:///) directories in unit tests, but when I try to use it on a Hadoop cluster to copy files from HDFS (hdfs:///tmp/result) to Amazon S3 (s3a://mybucket/out), it copies nothing and raises no error; it just silently skips the copy.
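For context, this is roughly how the copy is invoked in the failing case. The Folder(FileSystem, Path) constructor is hypothetical (the real constructors are elided above); the FileSystem.get() calls show the standard way to obtain the hdfs:// and s3a:// filesystems:

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class CopyJob {
    public static void main(final String[] args) throws IOException {
        final Configuration conf = new Configuration();
        // Source filesystem: the cluster's HDFS.
        final FileSystem hdfs = FileSystem.get(URI.create("hdfs:///tmp/result"), conf);
        // Destination filesystem: S3 through the S3A connector.
        final FileSystem s3 = FileSystem.get(URI.create("s3a://mybucket/out"), conf);
        // Hypothetical constructor: Folder(FileSystem fs, Path pth).
        final Folder source = new Folder(hdfs, new Path("/tmp/result"));
        final Folder target = new Folder(s3, new Path("s3a://mybucket/out"));
        source.copyFilesTo(target);
    }
}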

When I use the same class (with either the HDFS or the S3A filesystem) for other purposes it works fine, so the configuration and the fs references should be correct here.

What am I doing wrong? How do I copy files from HDFS to S3A correctly?

I am using Hadoop 2.7.3.


Update: I added more logging to the copyFilesTo method to print the root, source and target variables (and extracted a rebase() method without otherwise changing the code):

    /**
     * Copy contents (files and files in subfolders) to another folder.
     * Merges overlapping folders
     * Overwrites already existing files
     * @param dst Folder where content will be moved to
     * @throws IOException If fails
     */
    public void copyFilesTo(final Folder dst) throws IOException {
        Logger.info(
            this, "copyFilesTo(%s): from %s fs=%s",
            dst, this, this.hdfs
        );
        final RemoteIterator<LocatedFileStatus> iter = this.hdfs.listFiles(
            this.pth,
            true
        );
        final URI root = this.pth.toUri();
        Logger.info(this, "copyFilesTo(%s): root=%s", dst, root);
        while (iter.hasNext()) {
            final Path source = iter.next().getPath();
            final Path target = Folder.rebase(dst.path(), this.path(), source);
            Logger.info(
                this, "copyFilesTo(%s): src=%s target=%s",
                dst, source, target
            );
            FileUtil.copy(
                this.hdfs,
                source,
                dst.hdfs,
                target,
                false,
                true,
                this.hdfs.getConf()
            );
        }
    }

    /**
     * Change the base of target URI to new base, using root
     * as common path.
     * @param base New base
     * @param root Common root
     * @param target Target to rebase
     * @return Path with new base
     */
    static Path rebase(final Path base, final Path root, final Path target) {
        return new Path(
            base, root.toUri().relativize(target.toUri()).toString()
        );
    }

After running it on the cluster I got the following log:

io.Folder: copyFilesTo(hdfs:///tmp/_dst): from hdfs:///tmp/_src fs=DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_182008924_1, ugi=hadoop (auth:SIMPLE)]]
io.Folder: copyFilesTo(hdfs:///tmp/_dst): root=hdfs:///tmp/_src
INFO io.Folder: copyFilesTo(hdfs:///tmp/_dst): src=hdfs://ip-172-31-2-12.us-east-2.compute.internal:8020/tmp/_src/one.file target=hdfs://ip-172-31-2-12.us-east-2.compute.internal:8020/tmp/_src/one.file

I have narrowed the bug down to the rebase() method: it does not work correctly when running on the EMR cluster, because RemoteIterator returns URIs in the fully-qualified remote form hdfs://ip-172-31-2-12.us-east-2.compute.internal:8020/tmp/_src/one.file, while this method gets the root in the short form hdfs:///tmp/_src/one.file. Because the two URIs have different authorities, relativize() returns the target unchanged and the computed destination ends up equal to the source path (as the target=src line in the log shows); with the local FS both URIs have the same form, which is why it works locally.
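The behaviour can be reproduced with plain java.net.URI: relativize() only strips the common prefix when scheme and authority match, otherwise it returns the target untouched. A minimal demonstration using the paths from the log above:

import java.net.URI;

public final class RelativizeDemo {
    public static void main(final String[] args) {
        final URI root = URI.create("hdfs:///tmp/_src");
        final URI qualified = URI.create(
            "hdfs://ip-172-31-2-12.us-east-2.compute.internal:8020/tmp/_src/one.file"
        );
        // Authorities differ (none vs. namenode host:port), so the target
        // comes back unchanged and the "rebased" path equals the source.
        System.out.println(root.relativize(qualified));
        // Prints: hdfs://ip-172-31-2-12.us-east-2.compute.internal:8020/tmp/_src/one.file

        // With matching authorities the relative part is extracted as expected.
        final URI qualifiedRoot = URI.create(
            "hdfs://ip-172-31-2-12.us-east-2.compute.internal:8020/tmp/_src"
        );
        System.out.println(qualifiedRoot.relativize(qualified));
        // Prints: one.file
    }
}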

2 Answers:

Answer 0 (score: 1)

I don't see anything obviously wrong here.

  1. Does it work hdfs-to-hdfs, or s3a-to-s3a?
  2. Upgrade your Hadoop version; 2.7.x is out of date, especially the S3A code. It is unlikely to fix this particular problem, but it will avoid other ones. Once you have upgraded, switch to the fast upload, which uploads large files incrementally; right now your code saves each file somewhere under /tmp and then uploads it in the close() call.
  3. Turn on logging for the org.apache.hadoop.fs.s3a module and see what it says (a minimal sketch of points 2 and 3 follows this list).
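A minimal sketch of points 2 and 3, assuming a Hadoop release that supports the fs.s3a.fast.upload switch and the Log4j 1.x backend that ships with Hadoop 2.x; verify both against your version (in Hadoop 2.8+ the buffering mechanism is tuned further via fs.s3a.fast.upload.buffer):

import org.apache.hadoop.conf.Configuration;
import org.apache.log4j.Level;
import org.apache.log4j.LogManager;

final class S3aTuning {
    private S3aTuning() { }

    /** Point 2: build a Configuration with S3A fast upload enabled. */
    static Configuration withFastUpload() {
        final Configuration conf = new Configuration();
        // Upload each file in increments instead of buffering the whole
        // object under /tmp and sending it on close().
        conf.setBoolean("fs.s3a.fast.upload", true);
        return conf;
    }

    /** Point 3: make the S3A connector verbose so the copy (or the silent skip) shows up in the logs. */
    static void enableS3aDebugLogging() {
        LogManager.getLogger("org.apache.hadoop.fs.s3a").setLevel(Level.DEBUG);
    }
}

The same logging switch can also be set in log4j.properties with log4j.logger.org.apache.hadoop.fs.s3a=DEBUG.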

Answer 1 (score: 0)

I am not sure this is the best or fully correct solution, but it works for me. The idea is to fix up the host and port of the short-form paths before rebasing (URIBuilder here is presumably Apache HttpClient's org.apache.http.client.utils.URIBuilder); the working rebase method becomes:

    /**
     * Change the base of target URI to new base, using root
     * as common path.
     * @param base New base
     * @param root Common root
     * @param target Target to rebase
     * @return Path with new base
     * @throws IOException If fails
     */
    @SuppressWarnings("PMD.DefaultPackage")
    static Path rebase(final Path base, final Path root, final Path target)
        throws IOException {
        final URI uri = target.toUri();
        try {
            return new Path(
                new Path(
                    new URIBuilder(base.toUri())
                        .setHost(uri.getHost())
                        .setPort(uri.getPort())
                        .build()
                ),
                new Path(
                    new URIBuilder(root.toUri())
                        .setHost(uri.getHost())
                        .setPort(uri.getPort())
                        .build()
                        .relativize(uri)
                )
            );
        } catch (final URISyntaxException err) {
            throw new IOException("Failed to rebase", err);
        }
    }
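A quick sanity check of the fixed method against the hdfs-to-hdfs paths seen in the log above; this sketch assumes it runs in the same package as Folder, since rebase() is package-private:

import org.apache.hadoop.fs.Path;

final class RebaseCheck {
    public static void main(final String[] args) throws Exception {
        final Path target = Folder.rebase(
            new Path("hdfs:///tmp/_dst"),
            new Path("hdfs:///tmp/_src"),
            new Path(
                "hdfs://ip-172-31-2-12.us-east-2.compute.internal:8020/tmp/_src/one.file"
            )
        );
        // With the authority copied onto base and root, relativize() now works,
        // so the file lands under the destination instead of echoing the source:
        // hdfs://ip-172-31-2-12.us-east-2.compute.internal:8020/tmp/_dst/one.file
        System.out.println(target);
    }
}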