PySpark speed on Ubuntu vs. Windows

Date: 2017-01-16 17:02:22

Tags: linux windows apache-spark pyspark

I have a sample PySpark job that is a version of the PageRank algorithm. The code is as follows:

from __future__ import print_function
from operator import add
import timeit
from pyspark.sql import SparkSession

# Normalize a list of (url, rank) pairs so that the ranks sum to 1
def normalize(ranks):
    norm = sum([rank for u, rank in ranks])
    ranks = [(u, rank / norm) for (u, rank) in ranks]
    return sorted(ranks, key=lambda x: x[1], reverse=True)

def pagerank_2(edgeList, n, niter):
    # Build each node's neighbour list from the edge list.
    m = edgeList.groupByKey().cache()
    s = 0.85

    # Initialize every node's rank to 1.0 (q); r keeps a 0.0 entry per node
    # so nodes that receive no contribution are not dropped later.
    q = spark.sparkContext.range(n).map(lambda x: (x, 1.0)).cache()
    r = spark.sparkContext.range(n).map(lambda x: (x, 0.0)).cache()

    # Iteratively calculate and update node ranks with the PageRank formula.
    for iteration in range(niter):
        # Calculates URL contributions to the rank of other URLs.
        # Add URL ranks based on neighbor contributions.
        # Do not forget to add missing values in q and set to 0.0
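        # The "(x[1][1] and [...]) or []" idiom below emits one contribution
        # per neighbour when a node has outgoing links, and nothing when it
        # only appears as a destination (x[1][1] is None after the join).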
        q = q.fullOuterJoin(m)\
             .flatMap(lambda x: (x[1][1] and [(u, x[1][0]/len(x[1][1])) for u in x[1][1]]) or [])\
             .reduceByKey(add)\
             .rightOuterJoin(r)\
             .mapValues(lambda x: (x[0] or 0)*s + (1-s))
        print("iteration = ", iteration)

    # Collect all ranks, normalize them, and print the top ten.
    ranks = normalize(q.collect())
    print(ranks[0:10])


if __name__ == "__main__":

    spark = SparkSession\
            .builder\
            .master('local[*]')\
            .appName("SparkPageRank")\
            .config('spark.driver.allowMultipleContexts', 'true')\
            .config('spark.sql.warehouse.dir', 'file:///C:/Home/Org/BigData/python/BE4/') \
            .config('spark.sql.shuffle.partitions', '10')\
            .getOrCreate()

    spark.sparkContext.setLogLevel('WARN')

    g = [(0, 1), (0, 5), (1, 2), (1, 3), (2, 3),
         (2, 4), (2, 5), (3, 0), (5, 0), (5, 2)]
    n = 6
    edgeList = spark.sparkContext.parallelize(g)
    print(timeit.timeit('pagerank_2(edgeList, n, 10)', number=1, globals=globals()))

The nodes are numbered from 0 to n-1. The edgeList parameter is an RDD containing the list of node pairs (i.e. the edges).

I run it in local mode on Windows 10 (Anaconda, Spark 2.1.0, winutils). The job is split into 2896 tasks, all of which are very light.
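For reference, a quick way to see where those tasks come from is to look at the partition counts, since each partition of each shuffle stage becomes one task (a minimal check, reusing the spark and edgeList objects from the script above):

# Partition counts that drive the per-stage task count.
# Assumes spark and edgeList from the script above are in scope.
print(edgeList.getNumPartitions())
print(spark.sparkContext.defaultParallelism)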

My problem is the running time. For the example above:

  • Windows 10: > 40 min!
  • Windows Subsystem for Linux (Ubuntu 14.04): 30 s

The machine is a laptop with a Core i7-4702HQ, 16 GB of RAM and a 512 GB SSD. Windows is slower than Linux at starting things up, but 50 times slower? Surely there is a way to reduce that gap?

I have already disabled Windows Defender scanning for all the files involved: the java directory, the python directory, and so on. Any other ideas about what to look at?

Thanks for any clue.

1 answer:

Answer 0: (score 0)

Maybe the key is local[*], which means

    Run Spark locally with as many worker threads as logical cores on your machine.

Try using local[10] instead.
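For what it's worth, a minimal sketch of that change against the builder from the question (the value 10 is just the suggestion above, not a tuned number):

from pyspark.sql import SparkSession

# Pin the number of worker threads instead of letting local[*]
# use one thread per logical core.
spark = SparkSession\
        .builder\
        .master('local[10]')\
        .appName("SparkPageRank")\
        .getOrCreate()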