How many copies of the environment does Spark make?

Time: 2017-05-13 16:16:36

Tags: python apache-spark pyspark distributed-computing bigdata

I have a PySpark application that has to process about 5 GB of compressed data (strings). I'm using a small server with 12 cores (24 threads) and 72 GB of RAM. My PySpark program consists of only 2 map operations, helped by 3 very large regexes (3 GB each once compiled), loaded with pickle. Spark runs in standalone mode, with the worker and the master on the same machine.

My question is: does Spark replicate every variable for each executor core? Because it uses all the available memory and then a lot of swap space. Or does it perhaps load all the partitions into RAM? The RDD contains about 10 million strings that must be searched with the 3 regexes, spread over roughly 1000 partitions. I'm struggling to finish this job because after a few minutes the memory is full and Spark starts using swap space, becoming very slow. I've noticed that the situation is the same without the regexes.
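For reference, the effective parallelism (and therefore how many task closures are deserialized at the same time) can be checked directly. This is a minimal sketch I would run separately from the job below, against the same input path; the numbers in the comments are just the ones from this question:

from pyspark import SparkContext

sc = SparkContext()
tx = sc.textFile('/home/lucadiliello/data/tweets')

# Number of task slots the standalone worker advertises (24 threads here)
print(sc.defaultParallelism)
# Number of partitions of the input RDD (~1000 in this case)
print(tx.getNumPartitions())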

This is my code, which removes all the useless fields from the Twitter tweets and scans the tweet text and the user description for specific words:

import json
import re
import twitter_util as twu
import pickle

from pyspark import SparkContext
sc = SparkContext()

prefix = '/home/lucadiliello'

source = prefix + '/data/tweets'
dest = prefix + '/data/complete_tweets'

# Paths to the pickled regex and lookup dictionaries
companies_names_regex = prefix + '/data/comp_names_regex'
companies_names_dict = prefix + '/data/comp_names_dict'
companies_names_dict_to_legal = prefix + '/data/comp_names_dict_to_legal'

# Load the pre-compiled regex and the two lookup dictionaries on the driver
comp_regex = pickle.load(open(companies_names_regex, 'rb'))
comp_dict = pickle.load(open(companies_names_dict, 'rb'))
comp_dict_legal = pickle.load(open(companies_names_dict_to_legal, 'rb'))

# Load the RDD from the text file and parse each line as JSON
tx = sc.textFile(source).map(lambda a: json.loads(a))


def get_device(input_text):
    output_text = re.sub('<[^>]*>', '', input_text)
    return output_text

def filter_data(a):
    res = {}
    try:
        res['mentions'] = a['entities']['user_mentions']
        res['hashtags'] = a['entities']['hashtags']
        res['created_at'] = a['created_at'] 
        res['id'] = a['id'] 

        res['lang'] = a['lang']
        if 'place' in a and a['place'] is not None:      
            res['place'] = {} 
            res['place']['country_code'] = a['place']['country_code'] 
            res['place']['place_type'] = a['place']['place_type'] 
            res['place']['name'] = a['place']['name'] 
            res['place']['full_name'] = a['place']['full_name']

        res['source'] = get_device(a['source'])
        res['text'] = a['text'] 
        res['timestamp_ms'] = a['timestamp_ms'] 

        res['user'] = {} 
        res['user']['created_at'] = a['user']['created_at'] 
        res['user']['description'] = a['user']['description'] 
        res['user']['followers_count'] = a['user']['followers_count'] 
        res['user']['friends_count'] = a['user']['friends_count']
        res['user']['screen_name'] = a['user']['screen_name']
        res['user']['lang'] = a['user']['lang']
        res['user']['name'] = a['user']['name']
        res['user']['location'] = a['user']['location']
        res['user']['statuses_count'] = a['user']['statuses_count']
        res['user']['verified'] = a['user']['verified']
        res['user']['url'] = a['user']['url']
    except KeyError:
        return []

    return [res]


results = tx.flatMap(filter_data)


def setting_tweet(tweet):

    text = tweet['text'] if tweet['text'] is not None else ''
    descr = tweet['user']['description'] if tweet['user']['description'] is not None else ''
    del tweet['text']
    del tweet['user']['description']

    tweet['text'] = {}
    tweet['user']['description'] = {}
    del tweet['mentions']

    #tweet
    tweet['text']['original_text'] = text
    tweet['text']['mentions'] = twu.find_retweet(text)
    tweet['text']['links'] = []
    for j in twu.find_links(text):
        tmp = {}
        try:
            tmp['host'] = twu.get_host(j)
            tmp['link'] = j
            tweet['text']['links'].append(tmp)
        except ValueError:
            pass

    tweet['text']['companies'] = []
    for x in comp_regex.findall(text.lower()):
        tmp = {}
        tmp['id'] = comp_dict[x.lower()]
        tmp['name'] = x
        tmp['legalName'] = comp_dict_legal[x.lower()]
        tweet['text']['companies'].append(tmp)

    # descr
    tweet['user']['description']['original_text'] = descr
    tweet['user']['description']['mentions'] = twu.find_retweet(descr)
    tweet['user']['description']['links'] = []
    for j in twu.find_links(descr):
        tmp = {}
        try:
            tmp['host'] = twu.get_host(j)
            tmp['link'] = j
            tweet['user']['description']['links'].append(tmp)
        except ValueError:
            pass

    tweet['user']['description']['companies'] = []
    for x in comp_regex.findall(descr.lower()):
        tmp = {}
        tmp['id'] = comp_dict[x.lower()]
        tmp['name'] = x
        tmp['legalName'] = comp_dict_legal[x.lower()]
        tweet['user']['description']['companies'].append(tmp)

    return tweet


res = results.map(setting_tweet)

res.map(lambda a: json.dumps(a)).saveAsTextFile(dest, compressionCodecClass="org.apache.hadoop.io.compress.BZip2Codec")

Update: After about 1 hour, the memory (72 GB) is completely full and so is the swap (72 GB). In my case, using broadcast is not a solution.

Update 2: Without loading the 3 variables with pickle, the job finishes without any problem, using up to 10 GB of RAM instead of 144 GB! (72 GB RAM + 72 GB swap)

1 Answer:

Answer 0 (score: 1):


"My question is: does Spark replicate every variable for each executor core?"

Yes!

The number of copies of each (local) variable is equal to the number of threads you allocate to the Python workers.
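One way to see this in practice (not part of the original answer, just an illustrative sketch): cap the number of cores the application may use, which caps the number of simultaneous Python workers and therefore the number of in-memory copies of the closure. The configuration keys are standard Spark properties; the master URL and the values 4 and '48g' are arbitrary examples for this standalone setup.

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setMaster('spark://localhost:7077')   # standalone master on the same machine (illustrative URL)
        .set('spark.cores.max', '4')            # at most 4 concurrent tasks -> at most 4 worker-side copies
        .set('spark.executor.memory', '48g'))   # leave headroom for the Python worker processes
sc = SparkContext(conf=conf)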

As for your problem, try loading comp_regex, comp_dict and comp_dict_legal without using pickle.
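A possible way to follow that advice (my own sketch, not part of the answer): keep the large pickled objects out of the driver-side closure and build them lazily inside each Python worker, e.g. by compiling the regex from a plain pattern file and reading the dictionaries from JSON. The load_matchers helper and the *.txt / *.json paths and formats are hypothetical; each worker still holds its own copy, but nothing large has to be unpickled from the shipped closure.

import json
import re

_MATCHERS = None  # one copy per Python worker process, built on first use

def load_matchers():
    # Compile the regex and load the dictionaries inside the worker
    # (hypothetical plain-text / JSON source files instead of pickles).
    global _MATCHERS
    if _MATCHERS is None:
        with open('/home/lucadiliello/data/comp_names_pattern.txt') as f:
            regex = re.compile(f.read().strip())
        with open('/home/lucadiliello/data/comp_names_dict.json') as f:
            comp_dict = json.load(f)
        with open('/home/lucadiliello/data/comp_names_dict_to_legal.json') as f:
            comp_dict_legal = json.load(f)
        _MATCHERS = (regex, comp_dict, comp_dict_legal)
    return _MATCHERS

def tag_partition(tweets):
    regex, comp_dict, comp_dict_legal = load_matchers()
    for tweet in tweets:
        # ... same per-tweet logic as setting_tweet in the question ...
        yield tweet

# Assuming the 'results' RDD from the question's code
res = results.mapPartitions(tag_partition)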