How can I get Spark to write to DynamoDB on emr-5.2.1?

Time: 2017-01-19 06:20:46

Tags: scala apache-spark amazon-dynamodb emr

According to this article here, when I create an AWS EMR cluster that uses Spark to push data to DynamoDB, I need to include this line up front:

spark-shell --jars /usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar

This line appears in numerous references, including from the Amazon devs themselves. However, when I run create-cluster with the --jars flag added (the step is sketched after the trace below), I get this error:

Exception in thread "main" java.io.FileNotFoundException: File file:/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:616)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:829)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:606)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:431)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
...
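For reference, the step I submitted looked roughly like this (a hedged sketch, not my exact command; the bucket, app jar, and instance options are placeholders):

aws emr create-cluster --release-label emr-5.2.1 --applications Name=Spark \
  --steps 'Type=Spark,Name=CopyS3ToDynamo,ActionOnFailure=CONTINUE,Args=[--jars,/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar,--class,CopyS3ToDynamoApp,s3://my-bucket/my-app.jar]' \
  --instance-type m3.xlarge --instance-count 3 --use-default-roles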

this SO question has an answer saying that the library should already be included in emr-5.2.1, so I tried running my code without the extra --jars flag:

ERROR ApplicationMaster: User class threw exception: java.lang.NoClassDefFoundError: org/apache/hadoop/dynamodb/DynamoDBItemWritable
java.lang.NoClassDefFoundError: org/apache/hadoop/dynamodb/DynamoDBItemWritable
at CopyS3ToDynamoApp$.main(CopyS3ToDynamo.scala:113)
at CopyS3ToDynamoApp.main(CopyS3ToDynamo.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.dynamodb.DynamoDBItemWritable
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

Just for grins, I tried the alternative proposed by other answers to that question of including --driver-class-path,/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar, in my step, and got:

Exception in thread "main" java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2195)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2702)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2715)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)

Not being able to find s3a.S3AFileSystem seems like a big problem, especially since I have other jobs that read from S3 just fine; but apparently reading from S3 and writing to Dynamo in the same job is tricky. Any ideas on how to fix this?

Update: I think S3 couldn't be found because I had overridden the classpath and dropped all the other libraries, so I updated the classpath:

class_path = "/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:" \
             "/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:" \
             "/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:" \
             "/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:" \
             "/usr/share/aws/emr/ddb/lib/*"

Now I get this error:

 diagnostics: User class threw exception: java.lang.NoClassDefFoundError: org/apache/hadoop/dynamodb/DynamoDBItemWritable
 ApplicationMaster host: 10.178.146.133
 ApplicationMaster RPC port: 0
 queue: default
 start time: 1484852731196
 final status: FAILED
 tracking URL: http://ip-10-178-146-68.syseng.tmcs:20888/proxy/application_1484852606881_0001/

So it looks like the library isn't where the AWS docs say it is. Has anyone gotten this to work?

2 Answers:

Answer 0 (score: 2)

OK, this took me a few days to figure out, so I'll spare the next person who comes asking this question.

The reason these approaches fail is that the path the AWS folks specify doesn't exist on emr-5.2.1 clusters (and probably not on any emr 5.0 cluster at all).

Instead, I downloaded version 4.2 of the emr-dynamodb-hadoop jar from Maven.

Since the jar isn't on the EMR cluster, you need to include it in your jar. If you're using sbt, you can do that with sbt assembly (a sketch follows below). If you don't want a monolithic jar like that going on (and to have to figure out conflict resolution between netbeans versions 1.7 and 1.8), you can also just merge jars as part of your build process. Either way you end up with a single jar to add to your EMR step, which you can put on S3 to make it easy to create-cluster for on-demand Spark jobs.
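For the sbt assembly route, a minimal build.sbt sketch (the emr-dynamodb-hadoop coordinate is the Maven one I used; everything else is illustrative and assumes the sbt-assembly plugin is enabled in project/plugins.sbt):

// build.sbt -- minimal sketch, not a complete build definition
name := "copy-s3-to-dynamo"
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  // Spark itself is on the cluster, so mark it provided
  "org.apache.spark" %% "spark-core" % "2.0.2" % Provided,
  // the jar that is NOT on emr-5.2.1; it gets bundled into the assembly
  "com.amazon.emr" % "emr-dynamodb-hadoop" % "4.2.0"
)

// keep duplicate META-INF entries from breaking the merged jar
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case _                             => MergeStrategy.first
}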

Answer 1 (score: 0)

I've used https://github.com/audienceproject/spark-dynamodb to connect Spark to DynamoDB on EMR. There are a lot of problems if you try to use a Scala 2.12.x version; the following configuration works:

Spark 2.3.3, Scala 2.11.12, spark-dynamodb_2.11 0.4.4, guava 14.0.1.

This runs on EMR emr-5.22.0 without any issues.
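In sbt, those pins look roughly like this (a sketch; the spark-dynamodb and guava coordinates are the published ones, and the Provided scoping assumes Spark comes from the cluster):

libraryDependencies ++= Seq(
  "org.apache.spark"    %% "spark-sql"      % "2.3.3" % Provided,
  "com.audienceproject" %% "spark-dynamodb" % "0.4.4",
  "com.google.guava"    %  "guava"          % "14.0.1"  // pinned to avoid version conflicts
)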

Sample code:

import org.apache.log4j.Logger
import org.apache.spark.sql.{DataFrame, Row, SparkSession}
import org.apache.spark.sql.functions.{col, to_timestamp}
import org.apache.spark.sql.types._

// the implicits that add .dynamodb / .dynamodbAs to readers and writers
import com.audienceproject.spark.dynamodb.implicits._

// case class for the typed read below; dynamodbAs needs a Product type, not a StructType
case class Candidate(candidateId: Int, repoCreateDate: Long, accessType: String,
                     action: String, address1: String, firstName: String,
                     lastName: String, updateDate: Long)

def main(args: Array[String]): Unit = {
  val logger = Logger.getLogger(getClass)

  val spark = SparkSession.builder
    .appName("DynamoController1")
    .master("local[*]") // for local testing; on EMR let spark-submit set the master
    .getOrCreate

  // updateDate values are Long literals to match the LongType field in the schema
  val someData = Seq(
    Row(313080991, 1596115553835L, "U", "Insert", "455 E 520th Ave qqqqq", "AsutoshC", "paridaC", 1592408065L),
    Row(313080881, 1596115553835L, "I", "Insert", "455 E 520th Ave qqqqq", "AsutoshC", "paridaC", 1592408060L),
    Row(313080771, 1596115664774L, "U", "Update", "455 E 520th Ave odisha", "NishantC", "KanungoC", 1592408053L)
  )

  val candidate_schema = StructType(Array(
    StructField("candidateId", IntegerType, false),
    StructField("repoCreateDate", LongType, true),
    StructField("accessType", StringType, true),
    StructField("action", StringType, true),
    StructField("address1", StringType, true),
    StructField("firstName", StringType, true),
    StructField("lastName", StringType, true),
    StructField("updateDate", LongType, true)
  ))

  var someDF = spark.createDataFrame(spark.sparkContext.parallelize(someData), candidate_schema)

  // updateDate is epoch seconds; to_timestamp casts the long to a timestamp
  someDF = someDF.withColumn("datetype_timestamp", to_timestamp(col("updateDate")))
  someDF.createOrReplaceTempView("rawData")

  val sourceCount = someDF.select(someDF.schema.head.name).count
  logger.info(s"step [1.0.1] Fetched $sourceCount")
  someDF.show()

  val compressedDF: DataFrame = spark.sql(
    "select candidateId, repoCreateDate, accessType, action, address1, firstName, lastName, updateDate from rawData")
  compressedDF.show(20)

  // write the frame out to the DynamoDB table
  compressedDF.write.dynamodb("xcloud.Candidate")

  // read it back untyped ...
  val dynamoDf = spark.read.dynamodb("xcloud.Candidate")
  dynamoDf.show()

  // ... or typed, via the case class above
  val typedDs = spark.read.dynamodbAs[Candidate]("xcloud.Candidate")
  typedDs.show()
}
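To run this on the cluster instead of locally, a submit command along these lines should work (a sketch; the class and jar names are placeholders, and --packages pulls the connector from Maven Central if you didn't bundle it yourself):

spark-submit --class DynamoController1 \
  --packages com.audienceproject:spark-dynamodb_2.11:0.4.4 \
  s3://my-bucket/my-app.jar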

Hope this helps someone!