I'm using Python with Spark. I want to filter for the rows where a given array field equals an entire list.
df.show()
+--------------------+---------------+
| _id| a1|
+--------------------+---------------+
|[596d799cbc6ec95d...|[1.0, 2.0, 3.0]|
|[596d79a2bc6ec95d...| [1.0, 2.0]|
+--------------------+---------------+
I want the result to be:
+--------------------+---------------+
| _id| a1|
+--------------------+---------------+
|[596d79a2bc6ec95d...| [1.0, 2.0]|
+--------------------+---------------+
I tried this:
df.filter(df.a1 == [1., 2.])
but it fails.
Traceback:
File "/Users/gzc/Documents/spark-2.1.1-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o129.equalTo.
: java.lang.RuntimeException: Unsupported literal type class java.util.ArrayList [1, 2]
at org.apache.spark.sql.catalyst.expressions.Literal$.apply(literals.scala:75)
at org.apache.spark.sql.functions$.lit(functions.scala:101)
at org.apache.spark.sql.Column.$eq$eq$eq(Column.scala:267)
at org.apache.spark.sql.Column.equalTo(Column.scala:290)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
Answer 0 (score: 2)
You should use an array literal. The comparison fails because == implicitly calls lit on the Python list, and lit does not accept lists (hence the Unsupported literal type error); wrapping each element in lit and combining them with array builds a proper array column to compare against:
from pyspark.sql.functions import array, lit
df = sc.parallelize([(1, [1., 2.]), (2, [1., 2., 3.])]).toDF(["id", "a1"])
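# Build an array-literal column from the Python list so it can be compared to a1.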
df.where(df.a1 == array(*(lit(x) for x in [1., 2.]))).show()
+---+----------+
| id| a1|
+---+----------+
| 1|[1.0, 2.0]|
+---+----------+
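Applied to the original DataFrame from the question, the same idea can be wrapped in a small helper. This is a minimal sketch; array_lit is a hypothetical helper name (not part of pyspark.sql.functions), and it assumes df still has the _id and a1 columns shown above:

from pyspark.sql.functions import array, lit, col

def array_lit(values):
    # Wrap each Python value in lit() and combine them into one array column.
    return array(*[lit(v) for v in values])

# Keep only rows whose a1 equals the whole list [1.0, 2.0].
df.where(col("a1") == array_lit([1.0, 2.0])).show()

The equality compares both length and elements, so only the row whose a1 is exactly [1.0, 2.0] passes the filter.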