Cloudera Spark: RDD is empty

Date: 2017-04-28 08:33:12

Tags: hive pyspark cloudera

I am trying to create a DataFrame with PySpark and Hive on the Cloudera VM, but I get this error every time:

Traceback (most recent call last):
  File "/home/cloudera/Desktop/TwitterSentimentAnalysis/SentimentAnalysis.py", line 98, in <module>
    .reduceByKey(lambda a,b: a + b) \
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/context.py", line 62, in toDF
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/context.py", line 404, in createDataFrame
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/context.py", line 285, in _createFromRDD
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/context.py", line 229, in _inferSchema
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1320, in first
ValueError: RDD is empty

INFO spark.SparkContext: Invoking stop() from shutdown hook

What can I do to fix this error?
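The last frames of the traceback point at schema inference: toDF() needs at least one row, and first() raises this ValueError when the RDD has none. A minimal sketch that reproduces the same error (the app name here is only illustrative):

from pyspark import SparkContext
from pyspark.sql import HiveContext

sc = SparkContext(appName="EmptyRDDRepro")  # illustrative app name
sqlCtx = HiveContext(sc)                    # instantiating a SQL context enables rdd.toDF()

sc.parallelize([]).toDF()                   # _inferSchema calls first() -> ValueError: RDD is empty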

Edit 2 -

from pyspark import SparkContext
from pyspark.sql import HiveContext
import re
import math

sc = SparkContext(appName="PythonSentimentAnalysis")
sqlCtx = HiveContext(sc)

filenameAFINN = "/home/cloudera/Desktop/TwitterSentimentAnalysis/AFINN/AFINN-111.txt"

# AFINN-111: map each word to its integer sentiment score
afinn = dict(map(lambda (w, s): (w, int(s)),
                 [ws.strip().split('\t') for ws in open(filenameAFINN)]))

filenameCandidate = "file:///home/cloudera/Desktop/TwitterSentimentAnalysis/Candidates/Candidate Mapping.txt"

# one (name, candidate) pair per alias listed in the mapping file
candidates = sc.textFile(filenameCandidate) \
               .map(lambda x: (x.strip().split(",")[0], x.strip().split(","))) \
               .flatMapValues(lambda x: x) \
               .map(lambda y: (y[1], y[0])) \
               .distinct()


pattern_split = re.compile(r"\W+")

tweets = sqlCtx.sql("select id, text, entities.user_mentions.name from incremental_tweets")

def sentiment(text):
    words = pattern_split.split(text.lower())
    sentiments = map(lambda word: afinn.get(word, 0), words)
    if sentiments:
        sentiment = float(sum(sentiments)) / math.sqrt(len(sentiments))
    else:
        sentiment = 0
    return sentiment
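# Quick sanity check (scores from AFINN-111, where e.g. 'good' = 3, 'bad' = -3):
#   sentiment("good")      -> 3.0   (3 / sqrt(1))
#   sentiment("good bad")  -> 0.0   ((3 - 3) / sqrt(2))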

# (mentioned name, summed sentiment score) pairs, sorted by name
sentimentTuple = tweets.rdd.map(lambda r: [r.id, r.text, r.name]) \
                 .map(lambda r: [sentiment(r[1]), r[2]]) \
                 .flatMapValues(lambda x: x) \
                 .map(lambda y: (y[1], y[0])) \
                 .reduceByKey(lambda x, y: x + y) \
                 .sortByKey(ascending=True)

scoreDF = sentimentTuple.join(candidates) \
                        .map(lambda (x, y): (y[1], y[0])) \
                        .reduceByKey(lambda a, b: a + b) \
                        .toDF()

scoreRenameDF = scoreDF.withColumnRenamed("_1", "Candidate").withColumnRenamed("_2", "Score")

sqlCtx.registerDataFrameAsTable(scoreRenameDF, "SCORE_TEMP")

sqlCtx.sql("INSERT OVERWRITE TABLE candidate_score SELECT Candidate, Score FROM SCORE_TEMP")

1 Answer:

Answer 0 (score: 0):

Try checking whether your intermediate RDDs are created correctly with the following code:

for i in rdd.take(10):
    print(i)

This will show the first 10 entries of your RDD.
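If sentimentTuple (or its join with candidates) ends up empty, toDF() fails exactly as in your traceback, because schema inference needs at least one row. A sketch of a guard, reusing the variable names from your question (isEmpty() is a standard RDD method):

# Does the source table contain any rows at all?
print(sqlCtx.sql("select count(*) from incremental_tweets").collect())

# Guard the conversion so an empty pipeline fails with a clear message
joined = sentimentTuple.join(candidates) \
                       .map(lambda (x, y): (y[1], y[0])) \
                       .reduceByKey(lambda a, b: a + b)

if joined.isEmpty():
    print("No rows survived the pipeline - check the Hive query and the join keys")
else:
    scoreDF = joined.toDF()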