I define the following class to load a pre-trained embedding model:
    import gensim
    from gensim.models.fasttext import FastText as FT_gensim
    import numpy as np

    class Loader(object):
        cache = {}
        emb_dic = {}
        count = 0

        def __init__(self, filename):
            print("|-------------------------------------|")
            print("Welcome to Loader class in python")
            print("|-------------------------------------|")
            self.fn = filename

        @property
        def fasttext(self):
            if Loader.count == 1:
                print("already loaded")
            if self.fn not in Loader.cache:
                Loader.cache[self.fn] = FT_gensim.load_fasttext_format(self.fn)
                Loader.count = Loader.count + 1
            return Loader.cache[self.fn]

        def map(self, word):
            if word not in self.fasttext:
                Loader.emb_dic[word] = np.random.uniform(low=0.0, high=1.0, size=300)
                return Loader.emb_dic[word]
            return self.fasttext[word]
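To make the caching behavior concrete without gensim or Spark, here is a minimal sketch of the same pattern: the hypothetical `fake_load` stands in for `FT_gensim.load_fasttext_format`, and the class-level `cache` dict means the expensive load runs at most once per Python process, no matter how many `Loader` instances are created in that process. Since each Spark executor runs its own Python worker processes, each such process pays the load cost once.

```python
load_calls = 0

def fake_load(path):
    # Hypothetical stand-in for the expensive FT_gensim.load_fasttext_format.
    global load_calls
    load_calls += 1
    return {"hello": [0.1] * 300}

class Loader(object):
    cache = {}  # shared by all instances in this process

    def __init__(self, filename):
        self.fn = filename

    @property
    def model(self):
        # Only the first access for a given filename triggers a load.
        if self.fn not in Loader.cache:
            Loader.cache[self.fn] = fake_load(self.fn)
        return Loader.cache[self.fn]

a = Loader("model.bin").model
b = Loader("model.bin").model  # cache hit: no second load
print(load_calls)  # → 1
```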
I call this class as follows:
inputRaw = sc.textFile(inputFile, 3).map(lambda line: (line.split("\t")[0], line.split("\t")[1])).map(Loader(modelpath).map)
Suppose I have an RDD of (id, sentence) pairs:

    rdd = [(id1, u'patina californian'), (id2, u'virgil american'), (id3, u'frensh'), (id4, u'american')]
I want to sum the embedding vectors of the words in each sentence:
    def test(document):
        print("document is = {}".format(document))
        documentWords = document.split(" ")
        features = np.zeros(300)
        for word in documentWords:
            features = np.add(features, Loader(modelpath).fasttext[word])
        return features

    def calltest(inputRawSource):
        my_rdd = inputRawSource.map(lambda line: (line[0], test(line[1]))).cache()
        return my_rdd
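To show the intent of `test` without a Spark cluster or a fastText model, here is a self-contained sketch: the tiny `emb` dict and the `sentence_vector` function are hypothetical stand-ins for the real model and for `test`, summing one 300-dimensional vector per word.

```python
import numpy as np

# Hypothetical stand-in for the fastText model: word -> 300-dim vector.
emb = {
    "patina": np.full(300, 0.5),
    "californian": np.full(300, 0.25),
}

def sentence_vector(document):
    # Same shape as test(): start from zeros and add each word's vector.
    features = np.zeros(300)
    for word in document.split(" "):
        # Unlike model[word], a plain dict has no OOV handling,
        # so fall back to a zero vector here.
        features = np.add(features, emb.get(word, np.zeros(300)))
    return features

vec = sentence_vector("patina californian")
print(vec[0])  # → 0.75
```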
In this case, how many times will the model file be loaded? Note that I set spark.executor.instances to 3.
Answer 0 (score: 0)
By default, the number of partitions is set to the total number of cores across all executor nodes in the Spark cluster. Suppose you are processing 10 GB on a Spark cluster (or supercomputing executors) with a total of 200 CPU cores; this means Spark may, by default, use 200 partitions to process the data.
In addition, to keep every core of every executor busy, this can be handled in Python itself (using 100% of the cores via the multiprocessing module).