Spark Streaming & Kafka: value reduceByKey is not a member of org.apache.spark.streaming.dstream.DStream[Any]

Asked: 2018-01-14 06:11:27

Tags: scala apache-spark apache-kafka spark-streaming

I am trying to do ETL on a DStream with a Kafka consumer and Spark Streaming, but I get the following error. Can you help me resolve it? Thanks.

KafkaCardCount.scala:56:28: value reduceByKey is not a member of org.apache.spark.streaming.dstream.DStream[Any]
[error]       val wordCounts = etl.reduceByKey(_ + _)
[error]                            ^
[error] one error found
[error] (compile:compileIncremental) Compilation failed
[error] Total time: 7 s, completed Jan 14, 2018 2:52:23 PM

Here is my sample code. I found many articles suggesting adding import org.apache.spark.streaming.StreamingContext._, but it does not seem to work for me.

package example

import org.apache.spark.streaming.StreamingContext._
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.{Durations, StreamingContext}

val ssc = new StreamingContext(sparkConf, Durations.seconds(5))

val stream = KafkaUtils.createDirectStream[String, String](
    ssc,
    PreferConsistent,
    Subscribe[String, String](topics, kafkaParams)
)

val etl = stream.map(r => {
    val split = r.value.split("\t")
    val id = split(1)
    val numStr = split(4)
    if (numStr.matches("\\d+")) {
        val num = numStr.toInt
        val tpl = (id, num)
        tpl
    } else {
        ()
    }
})

// Create the counts per game
val wordCounts = etl.reduceByKey(_ + _)

wordCounts.print()

Here is my build.sbt.

lazy val root = (project in file(".")).
  settings(
    inThisBuild(List(
      organization := "example",
      scalaVersion := "2.11.8",
      version      := "0.1.0-SNAPSHOT"
    )),
    name := "KafkaCardCount",
    libraryDependencies ++= Seq (
      "org.apache.spark" %% "spark-core" % "2.1.0",
      "org.apache.spark" % "spark-streaming_2.11" % "2.1.0",
      "org.apache.spark" %% "spark-streaming-kafka-0-10-assembly" % "2.1.0"
    )
  )

assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}

1 Answer:

Answer 0 (score: 2):

Your problem is here:

else {
    ()
}

The common supertype of (String, Int) and Unit is Any.
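For illustration, here is a minimal standalone sketch (hypothetical values, not from the question) of how Scala widens the two branch types to their least upper bound:

// The if branch produces (String, Int); the else branch produces Unit.
// Scala infers their least upper bound, Any, as the type of the expression.
val parsed = if (scala.util.Random.nextBoolean()) ("id-1", 42) else ()
// parsed: Any

The same inference applies inside stream.map { ... }, which is why the result is DStream[Any] rather than DStream[(String, Int)].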

What you need to do is signal a processing failure with a value of the same type as the success (if) branch. For example:

else ("-1", -1)
 .filter { case (id, res) => id != "-1" && res != -1 }
 .reduceByKey(_ + _)
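As a side note, here is a sketch of an alternative (my own variation, reusing the stream value from the question, not part of the original answer) that avoids sentinel values entirely: parse each record into an Option and flatten with flatMap, so the stream is typed DStream[(String, Int)] from the start:

import org.apache.spark.streaming.dstream.DStream

// Parse each record into Option[(String, Int)]; flatMap drops the Nones,
// so the pair type, and with it reduceByKey, is preserved.
val etlOpt: DStream[(String, Int)] = stream.flatMap { r =>
  val split = r.value.split("\t")
  if (split.length > 4 && split(4).matches("\\d+"))
    Some((split(1), split(4).toInt))
  else
    None
}

val wordCounts = etlOpt.reduceByKey(_ + _) // compiles: DStream[(String, Int)]

This keeps the failure handling in the type system instead of in magic values that have to be filtered out later.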