Zeppelin Twitter streaming example not working

Date: 2017-02-06 14:15:11

Tags: apache-spark apache-spark-sql apache-zeppelin

While following the Zeppelin tutorial on streaming tweets and querying them with SparkSQL, I run into an error: the temporary table 'tweets' cannot be found. The exact code and the link I followed are below.

Reference: https://zeppelin.apache.org/docs/0.6.2/quickstart/tutorial.html

import scala.collection.mutable.HashMap
import org.apache.spark.streaming._
import org.apache.spark.streaming.twitter._
import org.apache.spark.storage.StorageLevel
import scala.io.Source
import java.io.File
import org.apache.log4j.{Level, Logger}
import sys.process.stringSeqToProcess

/** Configures the OAuth credentials for accessing Twitter */
def configureTwitterCredentials(apiKey: String, apiSecret: String, accessToken: String, accessTokenSecret: String) {
  val configs = new HashMap[String, String] ++= Seq(
    "apiKey" -> apiKey, "apiSecret" -> apiSecret, "accessToken" -> accessToken, "accessTokenSecret" -> accessTokenSecret)
  println("Configuring Twitter OAuth")
  configs.foreach{ case(key, value) =>
    if (value.trim.isEmpty) {
      throw new Exception("Error setting authentication - value for " + key + " not set")
    }
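    // twitter4j reads OAuth credentials from system properties named
    // twitter4j.oauth.consumerKey, consumerSecret, accessToken and
    // accessTokenSecret, so rewrite the "api*" keys to "consumer*" first.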
    val fullKey = "twitter4j.oauth." + key.replace("api", "consumer")
    System.setProperty(fullKey, value.trim)
    println("\tProperty " + fullKey + " set as [" + value.trim + "]")
  }
  println()
}

// Configure Twitter credentials
val apiKey = "xxx"
val apiSecret = "xxx"
val accessToken = "xx-xxx"
val accessTokenSecret = "xxx"

configureTwitterCredentials(apiKey, apiSecret, accessToken, accessTokenSecret)

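// Create a StreamingContext with a 2-second batch interval; twt is a
// 60-second sliding window over the tweet stream, recomputed every 2 seconds.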
@transient val ssc = new StreamingContext(sc, Seconds(2))
@transient val tweets = TwitterUtils.createStream(ssc, None)
@transient val twt = tweets.window(Seconds(60), Seconds(2))

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._

case class Tweet(createdAt: Long, text: String)

twt.map(status =>
    Tweet(status.getCreatedAt().getTime() / 1000, status.getText())
).foreachRDD(rdd =>
    // The line below works only in Spark 1.3.0 and later.
    // For Spark 1.1.x and 1.2.x, use rdd.registerTempTable("tweets") instead.
    rdd.toDF().registerTempTable("tweets")
)

ssc.start()
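
For reference, listing the temp tables visible to this sqlContext is a quick sanity check that the registration inside foreachRDD has actually run; a minimal sketch, assuming Spark 1.3+, where SQLContext.tableNames() is available:

// Run in a later paragraph, once the stream has processed a batch or two;
// "tweets" should show up in the list (sketch, assumes Spark 1.3+).
println(sqlContext.tableNames().mkString(", "))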

In the next paragraph I have the SQL select statement:

%sql select createdAt, count(1) from tweets group by createdAt order by createdAt

which throws the following exception:

org.apache.spark.sql.AnalysisException: Table not found: tweets;
    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:305)

1 Answer:

Answer 0 (score: 0):

I was able to run the above example by making the edit below. I am not sure whether the change is needed because of the Spark version upgrade (v1.6.3) or some other underlying architectural nuance I may be missing, but in any case:

Reference: SparkSQL error Table Not Found

In the second paragraph, instead of invoking the %sql syntax directly, try going through sqlContext, as below. A temp table registered on a hand-created SQLContext may not be the one Zeppelin's %sql interpreter queries, so running the query on that same sqlContext instance sidesteps the mismatch.

val my_df = sqlContext.sql("SELECT * FROM tweets LIMIT 5")
my_df.collect().foreach(println)
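
The same route works for the original aggregate; a sketch run from a Scala paragraph rather than a %sql one, assuming the tweets temp table is registered on this same sqlContext:

// Sketch (assumes "tweets" is registered on this sqlContext): run the
// original windowed aggregate and print the result as a table.
val counts = sqlContext.sql(
  "SELECT createdAt, count(1) FROM tweets GROUP BY createdAt ORDER BY createdAt")
counts.show()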