Why is creating this UDF so much faster in PySpark than in Scala Spark?

Date: 2019-04-05 12:31:47

Tags: python scala apache-spark

I have this Python script:

import time

from pyspark.sql.types import StringType
from pyspark.sql.functions import udf

from urllib.parse import urlsplit, unquote


def extractPath(host, url):
    if host in url:
        return urlsplit(url).path
    else:
        return '-'

startCreateUdfs = time.time()
getPathUdf = udf(extractPath, StringType())
endCreateUdfs = time.time()

print("Python udf creation time: {}".format(endCreateUdfs - startCreateUdfs))

and this Scala script:

import java.net.URLDecoder
import java.nio.charset.StandardCharsets
import java.net.URL

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

object UdfTimes extends App {

  val spark = SparkSession.builder().master("local").getOrCreate()

  spark.sparkContext.setLogLevel("ERROR")

  val extractPath: (String, String) => String = (host, url) => {
    if (url.contains(host))
      new URL(url).getPath
    else
      "-"
  }
  val unquote: String => String = str => URLDecoder.decode(str, StandardCharsets.UTF_8.name())

  val startTimeUdf = System.nanoTime()
  val getPathUdf = udf(extractPath)
  val endTimeUdf = System.nanoTime()

  println("Scala udf registering time: " + (endTimeUdf - startTimeUdf) / math.pow(10, 9))
}

which I wrote to do the same thing. Creating the udf is instant in Python (run from the command line):

Python udf creation time: 2.0503997802734375e-05

but in Scala it takes almost a second (in the sbt console):

Scala udf registering time: 0.768687091

What is the reason for this huge difference?
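For reference, the instant Python timing would make sense if `udf()` merely wraps the Python function in a lightweight object and defers all JVM interaction until the UDF is first used in a query. Below is a minimal pure-Python sketch of that lazy-wrapper idea (the class and function names are hypothetical, not pyspark's real internals):

```python
import time

class LazyUdfWrapper:
    """Hypothetical stand-in for a lazy UDF wrapper: it only stores the
    Python function and return type; no JVM/Spark work happens here."""
    def __init__(self, func, return_type):
        self.func = func
        self.return_type = return_type

def make_udf(func, return_type):
    # Just object construction -- consistent with microsecond creation times.
    return LazyUdfWrapper(func, return_type)

start = time.time()
wrapped = make_udf(lambda host, url: "-", "string")
elapsed = time.time() - start

print("wrapper creation time: {:.6f}s".format(elapsed))
```

By contrast, Scala's `udf(extractPath)` does its work eagerly on the JVM (deriving the schema from the function's type via reflection), and first-time classloading and JIT warm-up in a fresh sbt session could plausibly account for much of the 0.77 s measured above.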

0 Answers:

There are no answers yet.