I am working in PySpark and have the following code, which processes tweets into an RDD of (user_id, text) pairs. Here is the code:
"""
# Construct an RDD of (user_id, text) here.
"""
import json

def safe_parse(raw_json):
    try:
        json_object = json.loads(raw_json)
        if 'created_at' in json_object:
            return json_object
        else:
            return None
    except ValueError as error:
        return None

def get_usr_txt(line):
    tmp = safe_parse(line)
    return (tmp.get('user').get('id_str'), tmp.get('text'))

usr_txt = text_file.map(lambda line: get_usr_txt(line))
print(usr_txt.take(5))
and the output looks fine (as shown below):
[('470520068', "I'm voting 4 #BernieSanders bc he doesn't ride a CAPITALIST PIG adorned w/ #GoldmanSachs $. SYSTEM RIGGED CLASS WAR "), ('2176120173', "RT @TrumpNewMedia: .@realDonaldTrump #America get out & #VoteTrump if you don't #VoteTrump NOTHING will change it's that simple!\n#Trump htt…"), ('145087572', 'RT @Libertea2012: RT TODAY: #Colorado’s leading progressive voices to endorse @BernieSanders! #Denver 11AM - 1PM in MST CO State Capitol…'), ('23047147', '[VID] Liberal Tears Pour After Bernie Supporter Had To Deal With Trump Fans '), ('526506000', 'RT @justinamash: .@tedcruz is the only remaining candidate I trust to take on what he correctly calls the Washington Cartel. ')]
However, as soon as I do
print (usr_txt.count())
I get the following error:
Py4JJavaError Traceback (most recent call last)
<ipython-input-60-9dacaf2d41b5> in <module>()
8 usr_txt = text_file.map(lambda line: get_usr_txt(line))
9 #print (usr_txt.take(5))
---> 10 print (usr_txt.count())
11
/usr/local/spark/python/pyspark/rdd.py in count(self)
1054 3
1055 """
-> 1056 return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
1057
1058 def stats(self):
What am I missing? Is the RDD not being created properly, or is it something else? How can I fix this?
Answer 0 (score: 0)
You are returning None from the safe_parse method whenever the parsed JSON line has no created_at element or the parsing itself fails. That None then causes an error when you try to pull elements out of the parsed JSON in (tmp.get('user').get('id_str'), tmp.get('text')), since you cannot call .get on None. The fix is to check for None in the get_usr_txt method:
def get_usr_txt(line):
    tmp = safe_parse(line)
    if tmp is not None:
        return (tmp.get('user').get('id_str'), tmp.get('text'))
Now the question is why print(usr_txt.take(5)) shows results while print(usr_txt.count()) throws an error. That's because take(5) only evaluates the first five records and ignores the rest, so it never has to deal with the None values; count(), by contrast, forces evaluation of every record in the RDD, including the unparseable ones.
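With the fix above, records that fail to parse still become None entries in the RDD, so it is common to follow the map with a .filter(lambda pair: pair is not None) before calling count(). The parsing helpers themselves can be checked without Spark; here is a minimal sketch (the sample tweet lines are made up for illustration):

```python
import json

def safe_parse(raw_json):
    """Return the parsed tweet dict, or None if the JSON is invalid
    or lacks a 'created_at' field."""
    try:
        json_object = json.loads(raw_json)
    except ValueError:
        return None
    return json_object if 'created_at' in json_object else None

def get_usr_txt(line):
    """Return (user_id, text) for a valid tweet line, else None."""
    tmp = safe_parse(line)
    if tmp is None:
        return None
    return (tmp.get('user', {}).get('id_str'), tmp.get('text'))

# Made-up sample lines: one valid tweet, one missing 'created_at',
# and one that is not JSON at all.
lines = [
    '{"created_at": "now", "user": {"id_str": "42"}, "text": "hi"}',
    '{"user": {"id_str": "43"}, "text": "no timestamp"}',
    'not json',
]

# Drop the None results, mirroring what .filter() would do on the RDD.
pairs = [p for p in (get_usr_txt(l) for l in lines) if p is not None]
print(pairs)  # [('42', 'hi')]
```

On the RDD the equivalent is text_file.map(get_usr_txt).filter(lambda p: p is not None), after which count() only ever sees valid (user_id, text) pairs.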