pandas_udf error RuntimeError: Result vector from pandas_udf was not the required length: expected 12, got 35

Date: 2019-11-28 05:04:19

Tags: python apache-spark pyspark

With the code below, pandas_udf raises an error. The code creates a column holding the data type of another column. The same logic works fine as a normal (slow) udf (the commented-out line).

Basically, anything more complex than "string" + data returns an error.

# from pyspark.sql.functions import udf
import pyspark.sql.types
from pyspark.sql.functions import pandas_udf, PandasUDFType

@pandas_udf(returnType=pyspark.sql.types.StringType(), functionType=PandasUDFType.SCALAR)
def my_transform (data) -> bytes:
    return_val = str(type(data))
    return return_val

rawdata_df = process_fails.toDF()

# decode_df = rawdata_df.withColumn('new_col', udf_decode(udf_unzip(udf_b64decode(udf_bytes(rawdata_df.rawData)))))
decode_df = rawdata_df.withColumn('new_col', my_transform(rawdata_df.rawData))

decode_df.show()

I get the following error:

An error occurred while calling o887.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 23.0 failed 4 times, most recent failure: Lost task 0.3 in stage 23.0 (TID 70, ip-10-213-56-185.ap-southeast-2.compute.internal, executor 10): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/mnt/yarn/usercache/livy/appcache/application_1574912148721_0001/container_1574912148721_0001_01_000020/pyspark.zip/pyspark/worker.py", line 377, in main
    process()
  File "/mnt/yarn/usercache/livy/appcache/application_1574912148721_0001/container_1574912148721_0001_01_000020/pyspark.zip/pyspark/worker.py", line 372, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/mnt/yarn/usercache/livy/appcache/application_1574912148721_0001/container_1574912148721_0001_01_000020/pyspark.zip/pyspark/serializers.py", line 286, in dump_stream
    for series in iterator:
  File "<string>", line 1, in <lambda>
  File "/mnt/yarn/usercache/livy/appcache/application_1574912148721_0001/container_1574912148721_0001_01_000020/pyspark.zip/pyspark/worker.py", line 101, in <lambda>
    return lambda *a: (verify_result_length(*a), arrow_return_type)
  File "/mnt/yarn/usercache/livy/appcache/application_1574912148721_0001/container_1574912148721_0001_01_000020/pyspark.zip/pyspark/worker.py", line 98, in verify_result_length
    "expected %d, got %d" % (len(a[0]), len(result)))
RuntimeError: Result vector from pandas_udf was not the required length: expected 12, got 35
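
A note on the numbers in that message (an inference from the traceback, not stated in the original post): verify_result_length compares len() of whatever the UDF returns against the number of rows in the Arrow batch. my_transform returns the string str(type(data)), and since each batch arrives as a pandas Series, that string is "<class 'pandas.core.series.Series'>", which is exactly 35 characters long, while the batch presumably held 12 rows. A quick standalone check:

import pandas as pd

# str(type(...)) of the whole batch names the Series class itself,
# not the type of each element.
s = str(type(pd.Series(dtype=object)))
print(s)       # <class 'pandas.core.series.Series'>
print(len(s))  # 35 -- this is the "got 35" in the error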


This also produces an error:

import pandas as pd
import numpy as np
from pyspark.sql.functions import pandas_udf, PandasUDFType, udf
df = pd.DataFrame({'x': ["1","2","3"], 'y':[1.0,2.0,3.0]})
sp_df = spark.createDataFrame(df)

@pandas_udf('long', PandasUDFType.SCALAR)
def pandas_plus_one(v):
    return len(v)

sp_df.withColumn('v2', pandas_plus_one(sp_df.x)).show()

The error message is:

TypeError: Return type of the user-defined function should be Pandas.Series, but is <class 'int'>

1 Answer:

Answer 0 (score: 1)

pandas_udfs of type PandasUDFType.SCALAR take a pd.Series and return a pd.Series. That is why the TypeError is raised: the function pandas_plus_one returns an int instead of a pd.Series. In the second example, the column x of the given DataFrame is actually:

v = pd.Series(["1", "2", "3"])
print(v)

# 0    1
# 1    2
# 2    3
# dtype: object

If you want the length of each item in the series, it is easiest to map over it. The function definition (with type hints for clarity) should look closer to:

@pandas_udf('long', PandasUDFType.SCALAR)
def pandas_plus_one(v: pd.Series) -> pd.Series:
    return v.map(lambda x: len(x))

You can apply the same concept (using map so that your pandas_udf returns a Series of the same length) to your original question, and that should solve your problem.
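
Applied back to the question's my_transform, a minimal sketch of that fix (the decorator and column wiring are copied from the question; the element-wise body is illustrative) might look like:

import pandas as pd
import pyspark.sql.types
from pyspark.sql.functions import pandas_udf, PandasUDFType

@pandas_udf(returnType=pyspark.sql.types.StringType(), functionType=PandasUDFType.SCALAR)
def my_transform(data: pd.Series) -> pd.Series:
    # Map over the batch so the result is a Series of the same length,
    # holding the type of each element rather than of the whole batch.
    return data.map(lambda value: str(type(value)))

decode_df = rawdata_df.withColumn('new_col', my_transform(rawdata_df.rawData))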