Spark Streaming fileStream with a Hadoop Configuration object

Asked: 2015-03-18 14:51:50

Tags: scala apache-spark

StreamingContext.fileStream is overloaded to take a Hadoop Configuration object, but that overload does not seem to work for me.

Snippet from the Spark source code:

https://github.com/apache/spark/blob/master/streaming/src/main/scala/org/apache/spark/streaming/StreamingContext.scala

def fileStream[K: ClassTag, V: ClassTag, F <: NewInputFormat[K, V]: ClassTag](
    directory: String): InputDStream[(K, V)] = {
  new FileInputDStream[K, V, F](this, directory)
}

def fileStream[K: ClassTag, V: ClassTag, F <: NewInputFormat[K, V]: ClassTag](
    directory: String,
    filter: Path => Boolean,
    newFilesOnly: Boolean): InputDStream[(K, V)] = {
  new FileInputDStream[K, V, F](this, directory, filter, newFilesOnly)
}

def fileStream[K: ClassTag, V: ClassTag, F <: NewInputFormat[K, V]: ClassTag](
    directory: String,
    filter: Path => Boolean,
    newFilesOnly: Boolean,
    conf: Configuration): InputDStream[(K, V)] = {
  new FileInputDStream[K, V, F](this, directory, filter, newFilesOnly, Option(conf))
}

This snippet compiles fine:

val windowDStream = ssc.fileStream[LongWritable, Text, TextInputFormat](args(0), (x: Path) => true, true);

This one fails to compile:

val conf = sc.hadoopConfiguration;
val windowDStream = ssc.fileStream[LongWritable, Text, TextInputFormat](args(0), (x: Path) => true, true, conf);

Error:

overloaded method value fileStream with alternatives: (directory: String,filter: org.apache.hadoop.fs.Path ⇒ Boolean,newFilesOnly: Boolean)(implicit evidence$9: scala.reflect.ClassTag[org.apache.hadoop.io.LongWritable], implicit evidence$10: scala.reflect.ClassTag[org.apache.hadoop.io.Text], implicit evidence$11: scala.reflect.ClassTag[org.apache.hadoop.mapreduce.lib.input.TextInputFormat])org.apache.spark.streaming.dstream.InputDStream[(org.apache.hadoop.io.LongWritable, org.apache.hadoop.io.Text)] <and> (directory: String)(implicit evidence$6: scala.reflect.ClassTag[org.apache.hadoop.io.LongWritable], implicit evidence$7: scala.reflect.ClassTag[org.apache.hadoop.io.Text], implicit evidence$8: scala.reflect.ClassTag[org.apache.hadoop.mapreduce.lib.input.TextInputFormat])org.apache.spark.streaming.dstream.InputDStream[(org.apache.hadoop.io.LongWritable, org.apache.hadoop.io.Text)] cannot be applied to (String, org.apache.hadoop.fs.Path ⇒ Boolean, Boolean, org.apache.hadoop.conf.Configuration)

1 answer:

Answer 0 (score: 0)

I assume you are compiling against Spark 1.2 or earlier. If you switch from the master branch to the 1.2 branch, you will see that this overload does not exist there. In fact, FileInputDStream itself did not accept a Configuration as a constructor argument until 1.3. The compiler error confirms this: it lists only the one-argument and three-argument alternatives, so the version on your classpath simply has no overload taking a Configuration. Upgrading your Spark dependency to 1.3 or later makes the call compile.
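For reference, here is a minimal sketch of how the four-argument overload can be called once the project depends on Spark 1.3+. The application name, batch interval, and the `textinputformat.record.delimiter` setting are illustrative placeholders, not from the question:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object FileStreamWithConf {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("FileStreamWithConf").setMaster("local[2]")
    val ssc = new StreamingContext(sparkConf, Seconds(30))

    // Start from the SparkContext's Hadoop configuration and tweak as needed.
    val hadoopConf: Configuration = ssc.sparkContext.hadoopConfiguration
    hadoopConf.set("textinputformat.record.delimiter", "\n") // hypothetical example setting

    // Compiles only against Spark 1.3+, where the conf-taking overload exists.
    val windowDStream = ssc.fileStream[LongWritable, Text, TextInputFormat](
      args(0), (path: Path) => true, newFilesOnly = true, hadoopConf)

    windowDStream.map(_._2.toString).print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```

On 1.2, the closest workaround is to set the desired properties on `sc.hadoopConfiguration` before creating the stream, since the three-argument overload picks up that configuration implicitly.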