How to create a PySpark RDD from a Hazelcast map

Asked: 2018-01-04 11:42:32

Tags: python apache-spark pyspark hazelcast hazelcast-imap

I am writing a Spark MLlib program and need to read data from a Hazelcast map, then create an RDD/Dataset/DataFrame from it.

The data sits in the Hazelcast map as key-value pairs. I want to build a PySpark pair RDD from it, but I don't know how to do that.

Here is the data held in the Hazelcast map as key-value pairs:

key: '8e5d78d2-8feb-41cd-9e1a-166fbe11c569' 
value: '2,-0.425965884412454,0.960523044882985'
key: 'dfea4b0a-c6f8-4e14-8543-edc53a9d9e07'
value: '2,-1.15823309349523,0.877736754848451'

Here is my PySpark MLlib program that reads the Hazelcast map:

import logging

import hazelcast
from pyspark import SparkContext

# Configure the Hazelcast client to connect to a local member.
config = hazelcast.ClientConfig()
config.network_config.addresses.append('localhost:5701')

logging.basicConfig()
logging.getLogger().setLevel(logging.INFO)
client = hazelcast.HazelcastClient(config)

############################################

sc = SparkContext()

# my_map is the Hazelcast map. How do I create an RDD/DataFrame/Dataset from it?
my_map = client.get_map("fraudinputs").blocking()

# entry_set() on the blocking map proxy returns the entries as (key, value) tuples.
for key, value in my_map.entry_set():
    print("key:", key, "value:", value)

client.shutdown()
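
To answer the question directly: if the map is small enough to fit in the driver's memory, you can pull the entries into the driver and hand them to sc.parallelize, which yields a pair RDD with no intermediate file. A minimal sketch, assuming the stored values are the bare CSV strings (the quotes in the sample above are just how the strings print) and that this runs before client.shutdown():

# Minimal sketch: entry_set() returns the map as a list of (key, value)
# tuples, and parallelize distributes that list as a pair RDD.
pairs = sc.parallelize(my_map.entry_set())

# Parse each CSV value string into floats so MLlib can work with it.
features = pairs.mapValues(lambda v: [float(x) for x in v.split(",")])
print(features.take(2))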

1 Answer:

Answer 0 (score: 0)

The first thing to do is read the data file with sparkContext.textFile:

rdd = sc.textFile("path to the data file")
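
This assumes the map contents were first dumped to a plain-text file in the same layout as the sample above, one key: line followed by its value: line per entry, for example:

key: '8e5d78d2-8feb-41cd-9e1a-166fbe11c569'
value: '2,-0.425965884412454,0.960523044882985'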

Then split that RDD into a keys RDD and a values RDD:

keys = rdd.filter(lambda x: x.find("key") != -1).map(lambda x: x.split(":", 1)[1])
values = rdd.filter(lambda x: x.find("value") != -1).map(lambda x: x.split(":", 1)[1])

The last step is to pair them up with zip (note that RDD.zip requires both RDDs to have the same number of partitions and the same number of elements per partition):

paired = keys.zip(values)

You should get the following output:

(u" '8e5d78d2-8feb-41cd-9e1a-166fbe11c569' ", u" '2,-0.425965884412454,0.960523044882985'")
(u" 'dfea4b0a-c6f8-4e14-8543-edc53a9d9e07'", u" '2,-1.15823309349523,0.877736754848451'")