I have a JSON RDD which, printed with pprint, looks like this:
[u'{']
[u'"hash" ', u' "0000000000000000059134ebb840559241e8e2799f3ebdff56723efecfd6567a",']
[u'"confirmations" ', u' 969,']
[u'"size" ', u' 52543,']
[u'"height" ', u' 395545,']
[u'"version" ', u' 4,']
[u'"merkleroot" ', u' "8cf3eea32f692e5ebc9c25bb912ab3aff43c02761609d52cdd48afc5a05918fb",']
[u'"tx" ', u' [']
[u'"b3df3d5fedadd07a46753af556c336c41e038a9aec7ddd9921ad249828fd6d66",']
[u'"4ada431255d104c1c76ef56bdef4186ea89793223133e535383ff39d5a322910",']
I want to extract the second-to-last value, [u'"b3df3d5fedadd07a46753af556c336c41e038a9aec7ddd9921ad249828fd6d66",'].
How can I get this value when indexing does not work? The code is below:
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
import json
# Create a local StreamingContext with two worker threads and a batch interval of 1 second
sc = SparkContext("local[2]", "txcount")
ssc = StreamingContext(sc, 1)
lines = ssc.socketTextStream("localhost", 9999)
dump_rdd = lines.map(lambda x: json.dumps(x))
load_rdd = dump_rdd.map(lambda x: json.loads(x))
tx = load_rdd.map(lambda x: x.split(":"))
tx.pprint()
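To see why pprint shows fragments: json.dumps followed by json.loads of a string is a round trip back to the original string, not parsing, so the pipeline above only splits each raw socket line on ':'. A minimal illustration, using a sample line assumed to match the output above:

import json

line = u'"confirmations" : 969,'
# dumps then loads on a str returns the original str unchanged
assert json.loads(json.dumps(line)) == line
print(line.split(":"))  # [u'"confirmations" ', u' 969,']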
Answer (score: 1)
socketTextStream is not designed to handle multi-line records. While reassembling the complete records is not impossible, I doubt it is worth the effort. If you want to keep things simple with socketTextStream, just encode the data (for example with Base64) or clean it up upstream before passing it to Spark.
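A minimal sketch of the Base64 route, assuming you control the process that writes to the socket; the encode_record helper and the block["tx"][0] extraction are illustrative, not from the original post. The sender emits one complete JSON document per Base64 line, so each streamed line parses on its own:

import base64
import json
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

# Upstream (hypothetical sender): serialize each block to a single
# Base64-encoded line before writing it to the socket.
def encode_record(record):
    payload = json.dumps(record).encode("utf-8")
    return base64.b64encode(payload) + b"\n"

# Spark side: each line is now a whole record, so json.loads works
# per element instead of seeing fragments.
sc = SparkContext("local[2]", "txcount")
ssc = StreamingContext(sc, 1)
lines = ssc.socketTextStream("localhost", 9999)
blocks = lines.map(lambda l: json.loads(base64.b64decode(l).decode("utf-8")))
# The hash the question points at is the first entry of the "tx" array.
first_tx = blocks.map(lambda block: block["tx"][0])
first_tx.pprint()
ssc.start()
ssc.awaitTermination()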