Unable to read data from Redshift with Spark-Scala

Date: 2017-05-05 08:54:49

Tags: scala authentication apache-spark amazon-redshift

I am trying to read data from Amazon Redshift, but I get the following error:

Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: You must specify a method for authenticating Redshift's connection to S3 (aws_iam_role, forward_spark_s3_credentials, or temporary_aws_*. For a discussion of the differences between these options, please see the README.
at scala.Predef$.require(Predef.scala:224)
at com.databricks.spark.redshift.Parameters$MergedParameters.<init>(Parameters.scala:91)
at com.databricks.spark.redshift.Parameters$.mergeParameters(Parameters.scala:83)
at com.databricks.spark.redshift.DefaultSource.createRelation(DefaultSource.scala:50)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)

The code I use to read the data:

  val session = SparkSession.builder()
    .master("local")
    .appName("POC")
    .getOrCreate()

  session.conf.set("fs.s3n.awsAccessKeyId", "<access_key>")
  session.conf.set("fs.s3n.awsSecretAccessKey", "<secret-key>")

  val eventsDF = session.read
    .format("com.databricks.spark.redshift")
    .option("url", "<jdbc_url>")
    .option("dbtable", "test.account")
    .option("tempdir", "s3n://testBucket/data")
    .load()

  eventsDF.show()
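
From the error text, it looks like one of the options it lists (aws_iam_role, forward_spark_s3_credentials, or temporary_aws_*) has to be passed to the reader explicitly. Below is a minimal sketch of what I understand that to mean, using forward_spark_s3_credentials; the option name is taken from the error message, the keys are placeholders, and the S3 keys are set on the Hadoop configuration on the assumption that session.conf alone does not reach the s3n filesystem:

  import org.apache.spark.sql.SparkSession

  // Sketch only: option names come from the error message; <access_key>,
  // <secret-key> and <jdbc_url> are placeholders.
  val session = SparkSession.builder()
    .master("local")
    .appName("POC")
    .getOrCreate()

  // Assumption: the S3 keys need to be on the Hadoop configuration so that
  // the s3n filesystem used for tempdir can pick them up.
  session.sparkContext.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "<access_key>")
  session.sparkContext.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "<secret-key>")

  val eventsDF = session.read
    .format("com.databricks.spark.redshift")
    .option("url", "<jdbc_url>")
    .option("dbtable", "test.account")
    .option("tempdir", "s3n://testBucket/data")
    // One of the authentication methods named in the error: forward the
    // Spark-level S3 credentials to Redshift for the transfer through S3.
    .option("forward_spark_s3_credentials", "true")
    .load()

  eventsDF.show()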

build.sbt:

name:= "Redshift_read"

scalaVersion:= "2.11.8"

version := "1.0"

val sparkVersion = "2.1.0"

    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core" % sparkVersion,
      "org.apache.spark" %% "spark-sql" % sparkVersion,
      "com.databricks" %% "spark-redshift" % "3.0.0-preview1",
      "com.amazonaws"     %   "aws-java-sdk"    % "1.11.0"
    )

Can anyone help me figure out what I am missing? I have already provided the access key and secret key in Spark, yet it still throws this error.

0 Answers:

No answers yet.