I am developing AWS Glue jobs locally with PySpark in Docker. The file song_data.py contains the Glue job. I configured the Spark session with my AWS credentials, but the errors below suggest otherwise. In the file, I set up four different try statements that create a dynamic frame using GlueContext methods. Here is the Glue job file (song_data.py):
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark import SQLContext
from pyspark.sql import SparkSession
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from configparser import ConfigParser
from pyspark import SparkConf
config = ConfigParser()
config.read_file(open('/app/config/aws.cfg'))
conf = (
    SparkConf()
    .set('spark.hadoop.fs.s3a.access.key', config.get('AWS', 'KEY'))
    .set('spark.hadoop.fs.s3a.secret.key', config.get('AWS', 'SECRET'))
    .set("fs.s3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
)
sc = SparkContext(conf=conf)
spark = SparkSession(sc)
glueContext = GlueContext(spark)
conf_dict = spark.sparkContext.getConf().getAll()
print(conf_dict)
try:
    print('Attempt 1: spark.read.json')
    url = 's3a://sparkify-dend-analytics/song_data/A/A/A/TRAAAAW128F429D538.json'
    spark.read.json(url).show(1)
except Exception as e:
    print(e)
try:
    print('Attempt 2: create_dynamic_frame.from_options')
    song_df = glueContext.create_dynamic_frame.from_options(
        connection_type='s3',
        connection_options={"paths": ["s3a://sparkify-dend-analytics/song_data/"]},
        format='json')
    print('Count: ', song_df.count())
    print('Schema: ')
    song_df.printSchema()
except Exception as e:
    print(e)
try:
    print('Attempt 3: create_dynamic_frame.from_catalog')
    song_df = glueContext.create_dynamic_frame.from_catalog(
        database='sparkify',
        table_name='song_data')
    print('Count: ', song_df.count())
    print('Schema: ')
    song_df.printSchema()
except Exception as e:
    print(e)
try:
    print('Attempt 4: create_dynamic_frame_from_catalog')
    song_df = glueContext.create_dynamic_frame_from_catalog(
        database='sparkify',
        table_name='song_data')
    print('Count: ', song_df.count())
    print('Schema: ')
    song_df.printSchema()
except Exception as e:
    print(e)
The command I use to run the Glue job is: gluesparksubmit glue_etl_scripts/song_data.py --JOB-NAME test
Here is a shortened version of the error output for each try statement:
Attempt 1: spark.read.json()
WARN FileStreamSink: Error while looking for metadata directory.
An error occurred while calling o87.json.
: org.apache.hadoop.fs.s3a.AWSClientIOException: doesBucketExist on sparkify-dend-analytics:
com.amazonaws.AmazonClientException: No AWS Credentials provided by
DefaultAWSCredentialsProviderChain : com.amazonaws.SdkClientException: Unable to load AWS
credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to
load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and
AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load
AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey),
WebIdentityTokenCredentialsProvider: You must specify a value for roleArn and roleSessionName,
com.amazonaws.auth.profile.ProfileCredentialsProvider@401a5902: profile file cannot be null,
com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper@2b6e2cf9: Failed to connect to service
endpoint: ]: No AWS Credentials provided by DefaultAWSCredentialsProviderChain :
com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain:
[EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment
variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)),
SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties
(aws.accessKeyId and aws.secretKey), WebIdentityTokenCredentialsProvider: You must specify a
value for roleArn and roleSessionName,
com.amazonaws.auth.profile.ProfileCredentialsProvider@401a5902: profile file cannot be null,
com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper@2b6e2cf9: Failed to connect to service
endpoint: ]
Attempt 2: create_dynamic_frame.from_options()
WARN InstanceMetadataServiceResourceFetcher: Fail to retrieve token
com.amazonaws.SdkClientException: Failed to connect to service endpoint:
....
Caused by: java.net.ConnectException: Connection refused (Connection refused)
....
An error occurred while calling o125.getDynamicFrame.
: org.apache.hadoop.fs.s3a.AWSClientIOException: (same AWSClientIOException as above)
.....
Caused by: com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain:
Attempt 3: create_dynamic_frame.from_catalog()
WARN InstanceMetadataServiceResourceFetcher: Fail to retrieve token
com.amazonaws.SdkClientException: Failed to connect to service endpoint:
.....
Caused by: java.net.ConnectException: Connection refused (Connection refused)
Attempt 4: create_dynamic_frame_from_catalog()
Same error as Attempt 3.
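Since the provider chain in the Attempt 1 error starts with EnvironmentVariableCredentialsProvider, one variation I could try (purely hypothetical; it is not what song_data.py above does) is to export the keys into the process environment before the SparkContext, and therefore the JVM it launches, is created:

import os
from configparser import ConfigParser
from pyspark.context import SparkContext

config = ConfigParser()
config.read_file(open('/app/config/aws.cfg'))

# Hypothetical variation: expose the keys as the environment variables that
# EnvironmentVariableCredentialsProvider looks for, before the JVM starts.
os.environ['AWS_ACCESS_KEY_ID'] = config.get('AWS', 'KEY')
os.environ['AWS_SECRET_ACCESS_KEY'] = config.get('AWS', 'SECRET')

sc = SparkContext()  # my understanding is the launched JVM inherits this environment

I have not tried this yet, so I don't know whether it would also reach the Glue catalog calls.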
When I print out the Spark session's configuration, the AWS access key and secret key are present and correct. Here is the Spark configuration dictionary from running spark.sparkContext.getConf().getAll() (a follow-up check I could run on the Hadoop side is sketched right after it):
[('spark.app.name', 'song_data.py'),
 ('spark.driver.host', '73d3647fdf5b'),
 ('spark.hadoop.fs.s3a.secret.key', 'xxxxxxx'),
 ('spark.submit.pyFiles', '/glue/aws-glue-libs/PyGlue.zip'),
 ('spark.executor.id', 'driver'),
 ('spark.driver.extraClassPath', '/glue/aws-glue-libs/jarsv1/*'),
 ('spark.app.id', 'local-1593063861647'),
 ('spark.driver.port', '40655'),
 ('spark.executor.extraClassPath', '/glue/aws-glue-libs/jarsv1/*'),
 ('spark.rdd.compress', 'True'),
 ('spark.hadoop.fs.s3a.access.key', 'xxxxxxx'),
 ('spark.files', 'file:///glue/aws-glue-libs/PyGlue.zip'),
 ('spark.serializer.objectStreamReset', '100'),
 ('spark.master', 'local[*]'),
 ('spark.submit.deployMode', 'client'),
 ('fs.s3.impl', 'org.apache.hadoop.fs.s3a.S3AFileSystem')]
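The spark.hadoop.fs.s3a.* entries do show the right values there. As mentioned above, a follow-up check I could run (only a sketch; it is not in the script yet) is to read the keys back from the Hadoop configuration that the s3a connector actually consumes, since spark.hadoop.* settings are supposed to be copied into it:

# Hypothetical check: the s3a connector reads the Hadoop configuration,
# so confirm the keys were propagated from the spark.hadoop.* settings.
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
print(hadoop_conf.get('fs.s3a.access.key'))
print(hadoop_conf.get('fs.s3a.secret.key'))
print(hadoop_conf.get('fs.s3.impl'))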
Let me know if you need the Dockerfile or any other code.