PySpark RDD .filter() with wildcard

Asked: 2016-08-31 18:23:31

Tags: python apache-spark rdd

I have a PySpark RDD with a text column that I want to use as a filter, so I have the following code:

table2 = table1.filter(lambda x: x[12] == "*TEXT*")

The problem is... As you can see, I'm using the * to try to tell it to interpret that as a wildcard, but with no success. Can anyone help with that?

1 Answer:

Answer 0 (Score: 10)

The lambda function is pure Python, so something like the following would work:

table2 = table1.filter(lambda x: "TEXT" in x[12])
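
If you need true wildcard semantics (patterns such as *TEXT* or TE?T) rather than a plain substring check, Python's standard fnmatch module can be used inside the lambda. The sketch below is a minimal, self-contained example; the local master, the toy rows, and keeping the text in column index 12 (as in the question) are assumptions for illustration only.

import fnmatch
from pyspark import SparkContext

# Assumes a local Spark installation; master and app name are hypothetical.
sc = SparkContext("local", "wildcard-filter-demo")

# Toy rows padded to 13 columns so that x[12] is the text field, as in the question.
rows = [
    tuple([""] * 12 + ["some TEXT here"]),
    tuple([""] * 12 + ["no match"]),
]
table1 = sc.parallelize(rows)

# fnmatch.fnmatch does shell-style wildcard matching, so "*TEXT*" behaves as intended.
# Use fnmatch.fnmatchcase if the match must be strictly case-sensitive on every platform.
table2 = table1.filter(lambda x: fnmatch.fnmatch(x[12], "*TEXT*"))
print(table2.collect())  # keeps only the row whose column 12 contains TEXT

For a plain substring test, the "in" check shown above is simpler and faster; fnmatch only earns its keep when the pattern genuinely contains wildcards.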