value tail is not a member of (String, String)

Asked: 2017-10-03 11:24:15

Tags: scala hadoop apache-spark apache-kafka

I am using spark-shell. I have stored tweets in a Kafka topic and want to perform sentiment analysis on them with spark-shell.

I added these dependencies: org.apache.spark:spark-streaming-kafka_2.10:1.6.2 and edu.stanford.nlp:stanford-corenlp:3.5.1

This is the code I am running:

import org.apache.spark._
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.Seconds 
import org.apache.spark.streaming.kafka._
val conf = new SparkConf().setMaster("local[4]").setAppName("KafkaReceiver")
val ssc = new StreamingContext(conf, Seconds(5))
val kafkaStream = KafkaUtils.createStream(ssc, "sandbox.hortonworks.com:2181","test-consumer-group", Map("test12" -> 5))
val topCounts60 = kafkaStream.map((_, 1)).reduceByKeyAndWindow(_ + _, Seconds(60)).map { case (topic, count) => (count, topic) }.transform(_.sortByKey(false))
topCounts60.foreachRDD(rdd => {
  val topList = rdd.take(10)
  println("\nPopular topics in last 60 seconds (%s total):".format(rdd.count()))
  topList.foreach { case (count, tag) => println("%s (%s tweets)".format(tag, count)) }
})
kafkaStream.count().map(cnt => "Received " + cnt + " kafka messages.").print()
val wordSentimentFilePath = "hdfs://sandbox.hortonworks.com:8020/TwitterData/AFINN.txt"
val wordSentiments = ssc.sparkContext.textFile(wordSentimentFilePath).map { line =>
  val Array(word, happiness) = line.split("\t")
  (word, happiness)
}.cache()
val happiest60 = kafkaStream.map(hashTag => (hashTag.tail, 1)).reduceByKeyAndWindow(_ + _, Seconds(60)).transform{topicCount => wordSentiments.join(topicCount)}
  .map{case (topic, tuple) => (topic, tuple._1 * tuple._2)}.map{case (topic, happinessValue) => (happinessValue, topic)}.transform(_.sortByKey(false))
ssc.start()
ssc.stop()

But when I execute this line,

val happiest60 = kafkaStream.map(hashTag => (hashTag.tail, 1)).reduceByKeyAndWindow(_ + _, Seconds(60)).transform{topicCount => wordSentiments.join(topicCount)}.map{case (topic, tuple) => (topic, tuple._1 * tuple._2)}.map{case (topic, happinessValue) => (happinessValue, topic)}.transform(_.sortByKey(false))

it throws this error:

error: value tail is not a member of (String, String)

1 answer:

Answer 0 (score: 0)

hashTag probably has type (String, String), so the tail operation is not defined on it. tail is a function defined on collections, not on tuples.
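To illustrate (a minimal, self-contained sketch, not from the original post): tail compiles on a String or a Seq, but not on a tuple, so the message value has to be extracted from the (key, value) pair first:

```scala
object TailExample extends App {
  // tail is defined on collections (String counts, via StringOps), not on tuples.
  val record: (String, String) = ("some-key", "#spark")

  // record.tail                      // does not compile: value tail is not a member of (String, String)
  val hashtag: String = record._2     // take the message value out of the (key, value) pair
  val topic: String = hashtag.tail    // tail drops the leading '#' character

  println(topic)
}
```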

The map operation works on the individual items received from the stream. If the Kafka stream delivers items of type (String, String), i.e. key/value pairs, then this behavior is expected.
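A sketch of the fix, assuming each record's value is a single hashtag string and the key can be discarded. It is simulated here on a plain Seq rather than a DStream, since the map/reduce pattern is the same:

```scala
object HashtagCountExample extends App {
  // Simulated Kafka records, shaped like the (key, value) pairs a Kafka stream yields
  val records: Seq[(String, String)] =
    Seq(("k1", "#scala"), ("k2", "#spark"), ("k3", "#spark"))

  // Equivalent of kafkaStream.map { case (_, value) => (value.tail, 1) }.reduceByKeyAndWindow(_ + _, ...)
  val counts: Map[String, Int] = records
    .map { case (_, value) => (value.tail, 1) }               // use the value, then drop the '#'
    .groupBy { case (tag, _) => tag }
    .map { case (tag, pairs) => (tag, pairs.map(_._2).sum) }  // sum the per-tag counts

  println(counts)
}
```

The essential change versus the question's code is destructuring each stream item as (key, value) and calling tail on the value, instead of on the tuple itself.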