AbortedException in the driver

Date: 2017-10-31 20:32:26

Tags: java multithreading amazon-web-services apache-spark elastic-map-reduce

I have a Spark job that on every run processes several folders on S3 and stores its state in DynamoDB. In other words, we run the job once a day; it looks for new folders added by another job, transforms them one by one and writes the state to DynamoDB. Here is rough pseudocode:

object App {
  // Runs on the driver; no SparkContext exists until Transformer.run below
  val allFolders = S3Folders.list()
  val foldersToProcess = DynamoDBState.getFoldersToProcess(allFolders)
  Transformer.run(foldersToProcess)
}

object Transformer {
  def run(folders: List[String]): Unit = {
    val sc = new SparkContext()
    folders.foreach(process(sc, _))
  }

  def process(sc: SparkContext, folder: String): Unit = ???  // transform and write to S3
}

This approach works well when S3Folders.list() returns a relatively small number of folders (up to a few thousand). When it returns more (4-8K), we very often see the following error (which at first glance has nothing to do with Spark):

17/10/31 08:38:20 ERROR ApplicationMaster: User class threw exception: shadeaws.SdkClientException: Failed to sanitize XML document destined for handler class shadeaws.services.s3.model.transform.XmlResponsesSaxParser$ListObjectsV2Handler
shadeaws.SdkClientException: Failed to sanitize XML document destined for handler class shadeaws.services.s3.model.transform.XmlResponsesSaxParser$ListObjectsV2Handler
        at shadeaws.services.s3.model.transform.XmlResponsesSaxParser.sanitizeXmlDocument(XmlResponsesSaxParser.java:214)
        at shadeaws.services.s3.model.transform.XmlResponsesSaxParser.parseListObjectsV2Response(XmlResponsesSaxParser.java:315)
        at shadeaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Unmarshallers.java:88)
        at shadeaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Unmarshallers.java:77)
        at shadeaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:62)
        at shadeaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:31)
        at shadeaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:70)
        at shadeaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1553)
        at shadeaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1271)
        at shadeaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1055)
        at shadeaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
        at shadeaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
        at shadeaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
        at shadeaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
        at shadeaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
        at shadeaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
        at shadeaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4247)
        at shadeaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194)
        at shadeaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4188)
        at shadeaws.services.s3.AmazonS3Client.listObjectsV2(AmazonS3Client.java:865)
        at me.chuwy.transform.S3Folders$.com$chuwy$transform$S3Folders$$isGlacierified(S3Folders.scala:136)
        at scala.collection.TraversableLike$$anonfun$filterImpl$1.apply(TraversableLike.scala:248)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
        at scala.collection.TraversableLike$class.filterNot(TraversableLike.scala:267)
        at scala.collection.AbstractTraversable.filterNot(Traversable.scala:104)
        at me.chuwy.transform.S3Folders$.list(S3Folders.scala:112)
        at me.chuwy.transform.Main$.main(Main.scala:22)
        at me.chuwy.transform.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:637)
Caused by: shadeaws.AbortedException:
        at shadeaws.internal.SdkFilterInputStream.abortIfNeeded(SdkFilterInputStream.java:53)
        at shadeaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:81)
        at shadeaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
        at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
        at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
        at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
        at java.io.InputStreamReader.read(InputStreamReader.java:184)
        at java.io.BufferedReader.read1(BufferedReader.java:210)
        at java.io.BufferedReader.read(BufferedReader.java:286)
        at java.io.Reader.read(Reader.java:140)
        at shadeaws.services.s3.model.transform.XmlResponsesSaxParser.sanitizeXmlDocument(XmlResponsesSaxParser.java:186)
        ... 36 more

For a larger number of folders (~20K) this happens all the time and the job cannot start.
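Per the trace, the failure happens inside S3Folders.list, whose isGlacierified frame issues a separate ListObjectsV2 request per folder on the driver. A rough sketch of what such a listing step could look like — the bucket handling, the Glacier check and every name other than list/isGlacierified/listObjectsV2 are assumptions, and the plain com.amazonaws packages are used instead of the question's shadeaws prefix:

import com.amazonaws.services.s3.AmazonS3ClientBuilder
import com.amazonaws.services.s3.model.ListObjectsV2Request
import scala.collection.JavaConverters._

object S3FoldersSketch {
  private val s3 = AmazonS3ClientBuilder.defaultClient()

  // Hypothetical check: does the folder hold only GLACIER-class objects?
  private def isGlacierified(bucket: String, folder: String): Boolean = {
    val request = new ListObjectsV2Request()
      .withBucketName(bucket)
      .withPrefix(folder)
    val objects = s3.listObjectsV2(request).getObjectSummaries.asScala
    objects.nonEmpty && objects.forall(_.getStorageClass == "GLACIER")
  }

  // One blocking ListObjectsV2 call per folder: with 4-20K folders this loop
  // keeps the driver busy for a long time before any SparkContext exists.
  def list(bucket: String, candidateFolders: List[String]): List[String] =
    candidateFolders.filterNot(isGlacierified(bucket, _))
}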

Previously we had very similar, but much more frequent, errors when getFoldersToProcess did a GetItem for every single folder from allFolders and therefore took much longer:

17/09/30 14:46:07 ERROR ApplicationMaster: User class threw exception: shadeaws.AbortedException:
shadeaws.AbortedException:
        at shadeaws.internal.SdkFilterInputStream.abortIfNeeded(SdkFilterInputStream.java:51)
        at shadeaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:71)
        at shadeaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
        at com.fasterxml.jackson.core.json.ByteSourceJsonBootstrapper.ensureLoaded(ByteSourceJsonBootstrapper.java:489)
        at com.fasterxml.jackson.core.json.ByteSourceJsonBootstrapper.detectEncoding(ByteSourceJsonBootstrapper.java:126)
        at com.fasterxml.jackson.core.json.ByteSourceJsonBootstrapper.constructParser(ByteSourceJsonBootstrapper.java:215)
        at com.fasterxml.jackson.core.JsonFactory._createParser(JsonFactory.java:1240)
        at com.fasterxml.jackson.core.JsonFactory.createParser(JsonFactory.java:802)
        at shadeaws.http.JsonResponseHandler.handle(JsonResponseHandler.java:109)
        at shadeaws.http.JsonResponseHandler.handle(JsonResponseHandler.java:43)
        at shadeaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:70)
        at shadeaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1503)
        at shadeaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1226)
        at shadeaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030)
        at shadeaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742)
        at shadeaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
        at shadeaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
        at shadeaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
        at shadeaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
        at shadeaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
        at shadeaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:2089)
        at shadeaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:2065)
        at shadeaws.services.dynamodbv2.AmazonDynamoDBClient.executeGetItem(AmazonDynamoDBClient.java:1173)
        at shadeaws.services.dynamodbv2.AmazonDynamoDBClient.getItem(AmazonDynamoDBClient.java:1149)
        at me.chuwy.tranform.sdk.Manifest$.contains(Manifest.scala:179)
        at me.chuwy.tranform.DynamoDBState$$anonfun$getUnprocessed$1.apply(ProcessManifest.scala:44)
        at scala.collection.TraversableLike$$anonfun$filterImpl$1.apply(TraversableLike.scala:248)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
        at scala.collection.TraversableLike$class.filterNot(TraversableLike.scala:267)
        at scala.collection.AbstractTraversable.filterNot(Traversable.scala:104)
        at me.chuwy.transform.DynamoDBState$.getFoldersToProcess(DynamoDBState.scala:44)
        at me.chuwy.transform.Main$.main(Main.scala:19)
        at me.chuwy.transform.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:637)
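For reference, the per-folder GetItem lookup that this trace points at (Manifest.contains called from a filterNot inside getFoldersToProcess) might have looked roughly like the sketch below; the table name, key attribute and every name other than getItem/getFoldersToProcess are assumptions, again using the plain com.amazonaws packages:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder
import com.amazonaws.services.dynamodbv2.model.{AttributeValue, GetItemRequest}
import scala.collection.JavaConverters._

object DynamoDBStateSketch {
  private val dynamo = AmazonDynamoDBClientBuilder.defaultClient()

  // Hypothetical manifest lookup: one GetItem round-trip per folder.
  private def alreadyProcessed(table: String, folder: String): Boolean = {
    val key = Map("folder" -> new AttributeValue().withS(folder)).asJava
    val request = new GetItemRequest().withTableName(table).withKey(key)
    Option(dynamo.getItem(request).getItem).isDefined
  }

  // Thousands of sequential blocking calls on the driver, all of them
  // before the SparkContext is created in Transformer.run.
  def getFoldersToProcess(table: String, allFolders: List[String]): List[String] =
    allFolders.filterNot(alreadyProcessed(table, _))
}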

I believe the current error has nothing to do with XML parsing or an invalid response, but originates from some race condition inside Spark, because:

  1. There is a clear connection between the time the "acquire state" step takes and the probability of failure
  2. The traceback has an underlying InterruptedException which, AFAIK, is caused by a swallowed Thread.sleep, and can mean that something inside the JVM (spark-submit or even YARN) interrupts the main thread (a small reproduction sketch is included further below)
  3. At the moment I am using EMR AMI 5.5.0, Spark 2.1.0 and the shaded AWS SDK 1.11.208, but I had similar errors with AWS SDK 1.10.75

I am deploying this job on EMR via command-runner.jar spark-submit --deploy-mode cluster --class ...
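Point 2 above can be checked outside Spark: the AWS SDK inspects the thread's interrupt flag while reading a response (SdkFilterInputStream.abortIfNeeded in the "Caused by" frames above) and surfaces it as an AbortedException. A minimal, hypothetical reproduction of an S3-listing thread being interrupted; the bucket name is a placeholder and the unshaded SDK packages are used:

import com.amazonaws.AbortedException
import com.amazonaws.services.s3.AmazonS3ClientBuilder

object InterruptionSketch {
  def main(args: Array[String]): Unit = {
    val s3 = AmazonS3ClientBuilder.defaultClient()

    val worker = new Thread(new Runnable {
      override def run(): Unit =
        try {
          // Keep issuing ListObjectsV2 requests, like S3Folders.list does.
          while (true) s3.listObjectsV2("some-bucket")
        } catch {
          case e: AbortedException =>
            // Matches the "Caused by: shadeaws.AbortedException" frame above.
            println(s"aborted: $e")
          case e: Exception =>
            // Depending on where the interrupt lands, the SDK may wrap it differently.
            println(s"other failure: $e")
        }
    })

    worker.start()
    Thread.sleep(2000)  // let a few requests go through
    worker.interrupt()  // whatever does this in spark-submit/YARN is the suspect
    worker.join()
  }
}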

Does anyone have an idea where this exception originates from and how to fix it?

2 Answers:

Answer 0 (score 0):

foreach does not guarantee ordered computation; it applies the operation to every element of an RDD, which means it will instantiate for every element and that, in turn, can overwhelm the executor.
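For contrast, a small sketch of the difference between the driver-side List.foreach used in the question's pseudocode and an RDD foreach, where the closure is shipped to executors; the folder names and app name are placeholders:

import org.apache.spark.{SparkConf, SparkContext}

object ForeachSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("foreach-sketch").setIfMissing("spark.master", "local[*]")
    val sc = new SparkContext(conf)
    val folders = List("run=2017-10-29", "run=2017-10-30", "run=2017-10-31")

    // Plain Scala List.foreach: runs sequentially in the driver JVM,
    // no executors involved (this is what the question's Transformer does).
    folders.foreach(folder => println(s"driver processing $folder"))

    // RDD foreach: the closure is shipped to executors and applied to each
    // element there, one task per partition, with no ordering guarantee.
    sc.parallelize(folders).foreach(folder => println(s"executor processing $folder"))

    sc.stop()
  }
}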

Answer 1 (score 0):

The problem was that getFoldersToProcess is a blocking (and very long-running) operation, and it prevented the SparkContext from being instantiated. The SparkContext itself is supposed to signal its own instantiation to YARN; if that does not happen within a certain amount of time, YARN decides the driver node has fallen off and kills the whole cluster.
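The answer states the cause rather than the change, so here is a minimal sketch of one way to apply it, under the assumption that the fix is simply to create the SparkContext before the long blocking listing and state lookup, so the YARN application master can register in time (the relevant timeout in cluster mode appears to be spark.yarn.am.waitTime, 100 s by default). It reuses the objects from the question's pseudocode, and Transformer.run is assumed to accept the already-created context instead of building its own:

import org.apache.spark.{SparkConf, SparkContext}

object App {
  def main(args: Array[String]): Unit = {
    // Create the SparkContext first: in cluster mode this is what lets the
    // YARN application master register a healthy driver instead of waiting
    // behind thousands of S3/DynamoDB calls and eventually interrupting us.
    val sc = new SparkContext(new SparkConf().setAppName("transformer"))

    // The slow, blocking state lookup now happens after YARN registration.
    val allFolders = S3Folders.list()
    val foldersToProcess = DynamoDBState.getFoldersToProcess(allFolders)

    Transformer.run(sc, foldersToProcess)
  }
}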