I can't read a csv file stored in my bucket on AWS S3 from EMR.
I have read a lot of articles about this and did the following to try to make it work:
I assumed that querying S3 from EMR on an ordinary account would be straightforward (it works locally once I define the file system and provide my AWS credentials), but when I run:
df = spark.read.option("delimiter", ",").csv("s3://{0}/{1}/*.csv".format(bucket_name, power_prod_key), header = True)
nothing happens: there is no exception, the cluster keeps running, but nothing after this line ever executes (I also tried specifying a single file instead of "*.csv", with the same result).
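For reference, the path that format string produces looks like this (the bucket and prefix names below are placeholders, not my real ones; locally I read the same data with the s3a:// scheme after configuring credentials for hadoop-aws):

```python
# Placeholder values standing in for my real bucket name and key prefix.
bucket_name = "my-bucket"
power_prod_key = "power-production"

# The glob path handed to spark.read.csv; on the EMR cluster I use the
# s3:// scheme (EMRFS), while locally I use s3a:// via hadoop-aws.
path = "s3://{0}/{1}/*.csv".format(bucket_name, power_prod_key)
print(path)
```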
I created the cluster with the AWS console, but here is the exported CLI command:
aws emr create-cluster \
--applications Name=Hadoop Name=Hive Name=Pig Name=Hue Name=Spark \
--ec2-attributes '{"InstanceProfile":"EMR_EC2_DefaultRole","SubnetId":"subnet-3482b47e","EmrManagedSlaveSecurityGroup":"sg-05c284d83c1307807","EmrManagedMasterSecurityGroup":"sg-01cd4e90f09dff3ad"}' \
--release-label emr-5.21.0 \
--log-uri 's3n://aws-logs-597071303168-us-east-1/elasticmapreduce/' \
--steps '[{"Args":["spark-submit","--deploy-mode","cluster","--py-files","s3://powercaster-bct/code/func.zip","s3://powercaster-bct/code/PowerProdPrediction.py","s3://powercaster-bct/power-production/*.csv","s3://powercaster-bct/results/rnd-frst-predictions.csv","s3://powercaster-bct/results/rnd-frst-target.csv"],"Type":"CUSTOM_JAR","ActionOnFailure":"TERMINATE_CLUSTER","Jar":"command-runner.jar","Properties":"","Name":"Spark application"}]' \
--instance-groups '[{"InstanceCount":1,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":32,"VolumeType":"gp2"},"VolumesPerInstance":1}]},"InstanceGroupType":"MASTER","InstanceType":"m4.large","Name":"Master - 1"}]' \
--configurations '[{"Classification":"spark-env","Properties":{},"Configurations":[{"Classification":"export","Properties":{"PYSPARK_PYTHON":"/usr/bin/python3"}}]}]' \
--auto-terminate \
--auto-scaling-role EMR_AutoScaling_DefaultRole \
--ebs-root-volume-size 10 \
--service-role EMR_DefaultRole \
--enable-debugging \
--name 'My cluster' \
--scale-down-behavior TERMINATE_AT_TASK_COMPLETION \
--region us-east-1
Should I provide some specific Hadoop configuration to define the file system, or supply my credentials in some way?
Any idea why I can't get EMR to read from S3?
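In case it helps, the kind of extra Hadoop configuration I have in mind would be another entry in --configurations, along these lines (the classification and s3a property names are my guess, not something I have verified on EMR, and the credential values are placeholders):

```json
[
  {
    "Classification": "core-site",
    "Properties": {
      "fs.s3a.access.key": "<access-key>",
      "fs.s3a.secret.key": "<secret-key>"
    }
  }
]
```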