Transferring files from HDFS to S3 in Java

Asked: 2015-08-07 12:08:57

Tags: java amazon-s3 hdfs

I want to transfer files from HDFS to S3 in Java. Some of the files can be very large, so I don't want to download them locally before uploading them to S3. Is there a way to do this in Java?

Here is what I have now (a piece of code that uploads a local file to S3). I can't really use it, because working with a File object means the file has to be on my hard drive.

File f = new File("/home/myuser/test");

TransferManager transferManager = new TransferManager(credentials);
MultipleFileUpload upload = transferManager.uploadDirectory("mybucket", "test_folder", f, true);

Thanks

1 Answer:

Answer 0 (score: 2):

I figured out the upload part.

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

AWSCredentials credentials = new BasicAWSCredentials("whatever", "whatever");

TransferManager transferManager = new TransferManager(credentials);

//+upload from HDFS to S3
Configuration conf = new Configuration();
// set the hadoop config files
conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));

Path path = new Path("hdfs://my_ip_address/user/ubuntu/test/test.txt");
FileSystem fs = path.getFileSystem(conf);
FSDataInputStream inputStream = fs.open(path);

// Set the content length up front; without it the SDK buffers the whole
// stream in memory to determine its length, which defeats the purpose
// of streaming large files.
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentLength(fs.getFileStatus(path).getLen());

Upload upload = transferManager.upload("xpatterns-deployment-ubuntu", "test_cu_jmen3", inputStream, objectMetadata);
//-upload from HDFS to S3

try {
    upload.waitForCompletion();
} catch (InterruptedException e) {
    e.printStackTrace();
}

Any ideas on how to do something similar for the download? I haven't found any download() method in TransferManager that can work with a stream like the code above does.
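A possible approach for the download direction (not from the original answer, just a sketch): TransferManager.download() only writes to a File, but the lower-level AmazonS3 client exposes the object's content as an InputStream, which can be piped straight into HDFS. The bucket name, key, and HDFS target path below are placeholders.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.S3Object;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import java.io.InputStream;

// Fetch the object as a stream instead of downloading it to a File
// ("credentials" is the same BasicAWSCredentials object as above)
AmazonS3 s3 = new AmazonS3Client(credentials);
S3Object object = s3.getObject("mybucket", "test_folder/test.txt"); // placeholder bucket/key

Configuration conf = new Configuration();
conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));

// placeholder HDFS target path
Path target = new Path("hdfs://my_ip_address/user/ubuntu/test/test_from_s3.txt");
FileSystem fs = target.getFileSystem(conf);

try (InputStream in = object.getObjectContent();
     FSDataOutputStream out = fs.create(target)) {
    // copy S3 -> HDFS without touching the local disk;
    // this copyBytes overload leaves closing to try-with-resources
    IOUtils.copyBytes(in, out, conf);
}

Note that getObject() keeps the underlying HTTP connection open until the content stream is fully read or closed, so the copy should run to completion promptly.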