PySpark error when converting a DF column to a list

Asked: 2018-10-30 09:20:36

Tags: pyspark pyspark-sql

I have a problem with my Spark script.

I have a second dataframe, df2, which has a single column. What I want to achieve is to return results from df1 only for users that appear in that column.

I have tried the following, but I get an error message (also shown below).

Can anyone advise?

    listx= df2.select('user2').collect()

    df_agg = df1\
        .coalesce(1000)\
        .filter((df1.dt == 20181029) &(df1.user.isin(listx)))\
        .select('list of fields')

Traceback (most recent call last):
  File "/home/keenek1/indev/rax.py", line 31, in <module>
    .filter((df1.dt == 20181029) &(df1.imsi.isin(listx)))\
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/column.py", line 444, in isin
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/column.py", line 36, in _create_column_from_literal
  File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1160, in __call__
  File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
  File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py", line 320, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.sql.functions.lit.
: java.lang.RuntimeException: Unsupported literal type class java.util.ArrayList [234101953127315]
        at org.apache.spark.sql.catalyst.expressions.Literal$.apply(literals.scala:77)
        at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create$2.apply(literals.scala:163)
        at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create$2.apply(literals.scala:163)
        at scala.util.Try.getOrElse(Try.scala:79)
        at org.apache.spark.sql.catalyst.expressions.Literal$.create(literals.scala:162)
        at org.apache.spark.sql.functions$.typedLit(functions.scala:113)
        at org.apache.spark.sql.functions$.lit(functions.scala:96)
        at org.apache.spark.sql.functions.lit(functions.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

1 Answer:

Answer 0 (score: 2)

Not sure this is the best answer, but:

# two single-column dfs to try to replicate your example:
df1 = spark.createDataFrame([{'a': 10}])
df2 = spark.createDataFrame([{'a': 10}, {'a': 18}])
l1 = df1.select('a').collect()
# l1 = [Row(a=10)] - a list of Rows, which isin does not accept:
df2.select('*').where(df2.a.isin(l1)).show()    # this will throw an error
df2.select('*').where(df2.a.isin([10])).show()  # this will NOT throw an error

Try something like this instead:

from pyspark.sql import functions as F

l2 = [item.a for item in l1]
# l2 = [10] - plain values, which isin accepts
df2.where(F.col('a').isin(l2)).show()
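
For the toy data above, that last line should print only the matching row, along the lines of:

    +---+
    |  a|
    +---+
    | 10|
    +---+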

(To be honest this is a bit odd, but... there is a ticket open to support isin with single column dataframes.)

Hope this helps, good luck!

Edit: that's just a list comprehension over the collected list :) For your example it would be:

listx = [item.user2 for item in df2.select('user2').collect()]
df_agg = df1\
    .coalesce(1000)\
    .filter((df1.dt == 20181029) & (df1.user.isin(listx)))\
    .select('list of fields')
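
As a side note, isin with a collected list ships every value through the driver, which can get expensive when df2 is large. A left semi join keeps the filter distributed instead; here is a minimal sketch, assuming the same column names as your example (user on df1, user2 on df2):

    # filter df1 to rows whose user appears in df2, without collect()
    df_agg = df1\
        .coalesce(1000)\
        .filter(df1.dt == 20181029)\
        .join(df2, df1.user == df2.user2, 'leftsemi')\
        .select('list of fields')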