Passing an object to a UDF in PySpark

Date: 2019-11-01 23:51:03

Tags: python pyspark databricks

I need to apply a method to every cell of a column in a Spark DataFrame, using a database to look up each cell's value. My UDF takes the database as an input, as shown below, but it does not work and returns an error.

from pyspark.sql.functions import udf, col
import pyasn

# Database handle created once, on the driver.
asndb = pyasn.pyasn('/dbfs/mnt/geoip/ipasn.db')

def asn_mapper(ip, asndb):
    try:
        ret = asndb.lookup(ip)[0]
        if ret is None:
            return '0'
        return str(ret)
    except Exception:
        return '0'

def make_asn(asndb):
    # The returned UDF closes over the asndb handle.
    return udf(lambda c: asn_mapper(c, asndb))

b = sqlContext.createDataFrame(
    [("A", '22.33.44.55'), ("B", '11.22.11.44'), ("D", '44.32.11.44')],
    ["Letter", "ip"])

b.withColumn("asn", make_asn(asndb)(col("ip"))).show()




/databricks/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling o1094.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 15.0 failed 4 times, most recent failure: Lost task 0.3 in stage 15.0 (TID 276, 10.65.251.77, executor 1): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/databricks/spark/python/pyspark/worker.py", line 394, in main
    func, profiler, deserializer, serializer = read_udfs(pickleSer, infile, eval_type)
  File "/databricks/spark/python/pyspark/worker.py", line 246, in read_udfs
    arg_offsets, udf = read_single_udf(pickleSer, infile, eval_type, runner_conf)
  File "/databricks/spark/python/pyspark/worker.py", line 160, in read_single_udf
    f, return_type = read_command(pickleSer, infile)
  File "/databricks/spark/python/pyspark/worker.py", line 71, in read_command
    command = serializer.loads(command.value)
  File "/databricks/spark/python/pyspark/serializers.py", line 672, in loads
    return pickle.loads(obj)
UnpicklingError: state is not a dictionary

    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:496)
    at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:81)

However, it works if I create the database inside the UDF; the code below runs. But I don't want to call pyasn.pyasn('/dbfs/mnt/geoip/ipasn.db') inside the UDF, because reloading the database on every call makes it very slow.

def asn_mapper(ip):
    # The database is reloaded on every call, which is what makes this slow.
    asndb = pyasn.pyasn('/dbfs/mnt/geoip/ipasn.db')
    try:
        ret = asndb.lookup(ip)[0]
        if ret is None:
            return '0'
        return str(ret)
    except Exception:
        return '0'

def make_asn():
    return udf(asn_mapper)

b = sqlContext.createDataFrame(
    [("A", '22.33.44.55'), ("B", '11.22.11.44'), ("D", '44.32.11.44')],
    ["Letter", "ip"])

b.withColumn("asn", make_asn()(col("ip"))).show()

Is there any way to make the first version work?

1 Answer:

Answer 0 (score: 0):

It looks to me like you are trying to look up IPs in a geo database. Use a join between the two tables, on the ip column. That should do the trick.
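
[Editor's note: the join is not spelled out in the answer, and since pyasn resolves an address by longest-prefix match, a plain equi-join on the ip string would not line up. One way to realize the suggestion, as a sketch assuming the prefixes in ipasn.db have first been exported to a hypothetical asn_ranges table with columns start_ip_long, end_ip_long, and asn, is a range join on the integer form of the address:]

from pyspark.sql import functions as F

# Hypothetical export of the ASN database: one row per announced
# prefix, with the range as inclusive integer bounds.
asn_ranges = spark.read.parquet("/mnt/geoip/asn_ranges.parquet")

# Convert the dotted-quad string to a single comparable integer.
octets = F.split(F.col("ip"), r"\.")
ip_long = (octets[0].cast("bigint") * 16777216
           + octets[1].cast("bigint") * 65536
           + octets[2].cast("bigint") * 256
           + octets[3].cast("bigint"))

result = (b.withColumn("ip_long", ip_long)
           .join(F.broadcast(asn_ranges),
                 (F.col("ip_long") >= F.col("start_ip_long"))
                 & (F.col("ip_long") <= F.col("end_ip_long")),
                 "left")
           .drop("ip_long", "start_ip_long", "end_ip_long"))
result.show()

[Broadcasting the range table keeps this non-equi join from shuffling the full DataFrame, at the cost of scanning the broadcast table per row.]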