Spark stream to S3 produces a socket timeout

Asked: 2018-03-09 22:23:58

Tags: scala apache-spark amazon-s3 proxy spark-streaming

I am trying to run a Spark Streaming application from my local machine that connects to an S3 bucket, and I am hitting a SocketTimeoutException. This is the code that reads from the bucket:

val sc: SparkContext = createSparkContext(scName)
val hadoopConf=sc.hadoopConfiguration
hadoopConf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
val ssc = new StreamingContext(sc, Seconds(time))
val lines = ssc.textFileStream("s3a://foldername/subfolder/")
lines.print()

This is the error I get:

com.amazonaws.http.AmazonHttpClient executeHelper - Unable to execute HTTP request: connect timed out
java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)

I thought this might be caused by a proxy, so I ran my spark-submit with the proxy options as follows:

    spark-submit \
      --conf "spark.driver.extraJavaOptions=-Dhttps.proxyHost=proxyserver.com -Dhttps.proxyPort=9000" \
      --class application.jar s3module 5 5 SampleApp

That still gives me the same error. Maybe I am not setting the proxy correctly? Is there a way to set it in code, on the SparkContext?

1 Answer:

Answer 0 (score: 0)

The specific options for proxy setup are covered in the docs:

<property>
  <name>fs.s3a.proxy.host</name>
  <description>Hostname of the (optional) proxy server for S3 connections.</description>
</property>

<property>
  <name>fs.s3a.proxy.port</name>
  <description>Proxy server port. If this property is not set
    but fs.s3a.proxy.host is, port 80 or 443 is assumed (consistent with
    the value of fs.s3a.connection.ssl.enabled).</description>
</property>
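
In a plain Hadoop client these properties would go in core-site.xml, with a <value> element added to each; a sketch using the proxy host and port from the question (proxyserver.com and 9000 are placeholders):

<property>
  <name>fs.s3a.proxy.host</name>
  <value>proxyserver.com</value>
</property>

<property>
  <name>fs.s3a.proxy.port</name>
  <value>9000</value>
</property>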

In Spark they can instead be set in spark-defaults.conf using the spark.hadoop prefix:

spark.hadoop.fs.s3a.proxy.host=myproxy
spark.hadoop.fs.s3a.proxy.port=8080
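
To set the proxy in code, as the question asks, the same keys can be applied to the SparkContext's Hadoop configuration before the StreamingContext is created. A minimal sketch along the lines of the question's own snippet (proxyserver.com and 9000 are again placeholder values, and createSparkContext is the question's own helper):

val sc: SparkContext = createSparkContext(scName)
val hadoopConf = sc.hadoopConfiguration
// The S3A connector takes its proxy from fs.s3a.proxy.*; it does not read the
// JVM-wide -Dhttps.proxyHost/-Dhttps.proxyPort flags, which would explain why
// the spark-submit attempt above changed nothing.
hadoopConf.set("fs.s3a.proxy.host", "proxyserver.com")
hadoopConf.set("fs.s3a.proxy.port", "9000")
val ssc = new StreamingContext(sc, Seconds(time))
val lines = ssc.textFileStream("s3a://foldername/subfolder/")

The two spark.hadoop.* lines shown above can also be passed directly to spark-submit with --conf, which avoids touching code or config files.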