PySpark: read all JSON files from a subdirectory of an S3 bucket

Date: 2020-10-17 22:19:04

Tags: json amazon-web-services hadoop amazon-s3 pyspark

I am trying to read JSON files from a subdirectory named world inside an S3 bucket named hello. When I list all the objects under that prefix with boto3, I can see several part files (probably created by a Spark job), as shown below.

world/
world/_SUCCESS
world/part-r-00000-....json
world/part-r-00001-....json
world/part-r-00002-....json
world/part-r-00003-....json
world/part-r-00004-....json
world/part-r-00005-....json
world/part-r-00006-....json
world/part-r-00007-....json

I wrote the following code to read all of these files.

from pyspark import SparkConf
from pyspark.sql import SparkSession

# spark_config and app_name are defined elsewhere in the application
spark_session = (
    SparkSession.builder
    .config(conf=SparkConf().setAll(spark_config).setAppName(app_name))
    .getOrCreate()
)
hadoop_conf = spark_session._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3a.server-side-encryption-algorithm", "AES256")
hadoop_conf.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider")
hadoop_conf.set("fs.s3a.access.key", "my-aws-access-key")
hadoop_conf.set("fs.s3a.secret.key", "my-aws-secret-key")
hadoop_conf.set("com.amazonaws.services.s3a.enableV4", "true")

df = spark_session.read.json("s3a://hello/world/")

and got the following error:

py4j.protocol.Py4JJavaError: An error occurred while calling o98.json.
: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: , AWS Error Code: null, AWS Error Message: Forbidden, S3 Extended Request ID: 
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
    at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:976)
    at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:956)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:892)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:557)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:545)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    at scala.collection.immutable.List.flatMap(List.scala:355)
    at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:545)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:359)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
    at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:392)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.base/java.lang.Thread.run(Thread.java:834)

I have also tried "s3a://hello/world/*" and "s3a://hello/world/*.json", but I still get the same error.

For reference, I am using the following tool versions:

pyspark 2.4.5
com.amazonaws:aws-java-sdk:1.7.4
org.apache.hadoop:hadoop-aws:2.7.1
org.apache.hadoop:hadoop-common:2.7.1

Can anyone help me with this?

1 Answer:

Answer 0: (score: 1)

It looks like the credentials you are using to access the bucket/folder do not have the required permissions.

Please check the following:

  1. The credentials or role specified in your application code (a quick way to confirm which identity is actually being used is sketched right after this list)
  2. The policy attached to the Amazon Elastic Compute Cloud (Amazon EC2) instance profile role
  3. The Amazon S3 VPC endpoint policy
  4. The Amazon S3 source and destination bucket policies
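
For item 1, a minimal sketch to confirm which AWS identity the configured keys actually resolve to (assuming boto3 is available; the key values are the same placeholders used in the question):

import boto3

# Ask STS which principal these credentials belong to.
sts = boto3.client(
    "sts",
    aws_access_key_id="my-aws-access-key",
    aws_secret_access_key="my-aws-secret-key",
)
print(sts.get_caller_identity()["Arn"])

The returned ARN is the identity whose IAM and bucket policies must allow s3:ListBucket on the bucket and s3:GetObject on the objects under the prefix.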

A couple of things you can use for quick debugging: on the cluster's master node, try accessing the same S3 path with the same credentials outside of Spark.

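A minimal boto3 sketch for that check (assuming boto3 is installed on the master node, and that the bucket and prefix are hello and world/ as in the question; the key values are placeholders):

import boto3

# Reuse the same credentials the Spark job is configured with.
s3 = boto3.client(
    "s3",
    aws_access_key_id="my-aws-access-key",
    aws_secret_access_key="my-aws-secret-key",
)

# ListBucket: can these credentials see the part files at all?
resp = s3.list_objects_v2(Bucket="hello", Prefix="world/")
keys = [obj["Key"] for obj in resp.get("Contents", [])]
print(keys)

# GetObject: can they actually read one of the part files?
part_keys = [k for k in keys if "part-" in k]
if part_keys:
    body = s3.get_object(Bucket="hello", Key=part_keys[0])["Body"].read()
    print(body[:200])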

If that throws an error, work through the access-control troubleshooting guidance at the following link: https://aws.amazon.com/premiumsupport/knowledge-center/emr-s3-403-access-denied/