How to set up a Kafka + Flink example with Scala and sbt?

Asked: 2016-07-03 19:33:14

Tags: scala sbt apache-kafka apache-flink

New to the Kafka / Flink / Scala / sbt combination, and trying to set up the following:

  • A multi-topic Kafka queue
  • A Flink streaming job packaged as a Scala jar
  • A Scala jar that reads data from one topic, processes it, and pushes it to another topic

Progress so far:

  • Able to set up Kafka and Flink correctly.
  • Able to read from the Kafka queue using the Kafka.jar example that ships with the Flink binaries.

Able to create a wordcount jar (thanks ipoteka).
Now trying to create a streaming word-count jar, but running into issues: the plan is to get a sample wordcount.jar working before attempting the actual Kafka/Spark streaming example, and the build is failing with sbt errors. Not sure what I am overlooking.
Please also point out any unnecessary declarations I may have.
Would also appreciate it if someone could share a simple program that reads from and writes to a Kafka queue.
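
For reference, a minimal sketch of such a read/process/write pipeline, assuming Flink 1.0.x with the Kafka 0.8 connector (flink-connector-kafka-0.8) on the classpath; the broker address, topic names, and the toUpperCase step are placeholders:

import java.util.Properties

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaConsumer08, FlinkKafkaProducer08}
import org.apache.flink.streaming.util.serialization.SimpleStringSchema

object ReadProcessWrite {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "localhost:9092")
    properties.setProperty("zookeeper.connect", "localhost:2181")
    properties.setProperty("group.id", "test")

    // Read from one topic, apply a transformation, write to another topic
    env
      .addSource(new FlinkKafkaConsumer08[String]("input-topic", new SimpleStringSchema(), properties))
      .map(_.toUpperCase)
      .addSink(new FlinkKafkaProducer08[String]("localhost:9092", "output-topic", new SimpleStringSchema()))

    env.execute("Kafka read/process/write")
  }
}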

Project setup -

|- project/plugins.sbt
|- build.sbt
|- src/main/scala/WordCount.scala

build.sbt

name := "Kakfa-Flink Project"

version := "1.0"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.0"

// Updated: correction pointed out by ipoteka
libraryDependencies += "org.apache.kafka" % "kafka_2.10" % "0.10.0.0"

libraryDependencies += "org.apache.flink" %% "flink-scala" % "1.0.0"

libraryDependencies += "org.apache.flink" %% "flink-clients" % "1.0.0"

libraryDependencies += "org.apache.flink" %% "flink-streaming-scala" % "1.0.0"

// for jar building
mainClass in compile := Some("StreamWordCount")

plugins.sbt

// *** creating fat jar
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.1")
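
With the plugin declared, the fat jar is built with the plugin's own assembly task rather than plain package (a usage note, assuming sbt-assembly 0.14.x defaults):

/opt/sbt/bin/sbt assembly

The resulting jar lands under target/ and bundles the declared dependencies, which sbt package alone does not.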

WordCount.scala

package prog

import org.apache.flink.api.scala._
import org.apache.flink.streaming.api.scala.DataStream
import org.apache.flink.streaming.api.windowing.time.Time

object WordCount {

  type WordCount = (String, Int)

  def main(lines: DataStream[String], stopWords: Set[String], window: Time): DataStream[WordCount] = {
    lines
      .flatMap(line => line.split(" "))
      .filter(word => !word.isEmpty)
      .map(word => word.toLowerCase)
      .filter(word => !stopWords.contains(word))
      .map(word => (word, 1))
      .keyBy(0)
      .timeWindow(window)
      .sum(1)
  }

}
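
For reference, one way this helper could be wired into a runnable job (a sketch; the socket source, port, stop-word list, and window size are arbitrary choices):

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

object WordCountJob {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // Any text stream works here; a socket source keeps the sketch self-contained
    val lines = env.socketTextStream("localhost", 9999)
    val counts = prog.WordCount.main(lines, Set("a", "an", "the"), Time.seconds(5))
    counts.print()
    env.execute("Windowed word count")
  }
}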

StreamWordCount.scala

package prog

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082
import org.apache.flink.streaming.util.serialization.SimpleStringSchema

import org.apache.flink.api.scala._
import org.apache.flink.streaming.api.scala.DataStream
import org.apache.flink.streaming.api.windowing.time.Time



object Main {
  def main(args: Array[String]) {

    type WordCount = (String, Int)

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "localhost:9092")
    properties.setProperty("zookeeper.connect", "localhost:2181")
    properties.setProperty("group.id", "test")
    val stream = env
      .addSource(new FlinkKafkaConsumer082[String]("topic", new SimpleStringSchema(), properties))
      .flatMap(line => line.split(" "))
      .filter(word => !word.isEmpty)
      .map(word => word.toLowerCase)
      .filter(word => !stopWords.contains(word))
      .map(word => (word, 1))
      .keyBy(0)
      .timeWindow(window)
      .sum(1)
      .print

    env.execute("Flink Kafka Example")
  }
}

Error when creating the jar (updated)

[vagrant@streaming ex]$ /opt/sbt/bin/sbt  package
[error] /home/vagrant/ex/src/main/scala/StreamWordCount.scala:4: object connectors is not a member of package org.apache.flink.streaming
[error] import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082
[error]                                   ^
[error] /home/vagrant/ex/src/main/scala/StreamWordCount.scala:18: not found: type Properties
[error]     val properties = new Properties()
[error]                          ^
[error] /home/vagrant/ex/src/main/scala/StreamWordCount.scala:23: not found: type FlinkKafkaConsumer082
[error]       .addSource(new FlinkKafkaConsumer082[String]("topic", new SimpleStringSchema(), properties))
[error]                      ^
[error] three errors found
[error] (compile:compileIncremental) Compilation failed
[error] Total time: 31 s, completed Jul 3, 2016 9:02:18 PM

1 Answer:

Answer 0 (score: 2):

Where did you get these versions from? I don't see a Kafka 1.0.0 release. Check Maven (the sbt tab):

libraryDependencies += "org.apache.kafka" % "kafka_2.10" % "0.10.0.0"

I would also suggest checking all the other versions. For example, the current Spark version is 1.6.2.
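
Beyond the version numbers, the compile errors themselves point at missing pieces. A sketch of the likely fixes, assuming Flink 1.0.0 against a Kafka 0.8.x broker (verify the connector artifact against your actual broker version):

// build.sbt: the Kafka connector ships separately from flink-streaming-scala;
// without it, org.apache.flink.streaming.connectors does not exist
libraryDependencies += "org.apache.flink" %% "flink-connector-kafka-0.8" % "1.0.0"

// build.sbt: mainClass should be fully qualified; the object is prog.Main
mainClass in compile := Some("prog.Main")

// StreamWordCount.scala: Properties lives in java.util, and in Flink 1.0.x
// the consumer was renamed from FlinkKafkaConsumer082 to FlinkKafkaConsumer08
import java.util.Properties
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08

Once those resolve, stopWords and window in StreamWordCount.scala still need definitions (for example a Set[String] and a Time window, as in WordCount.scala) before the job will compile.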