How do I delete rows from a PySpark dataframe using pattern matching?

Asked: 2019-05-12 20:59:05

Tags: pyspark

I have a PySpark dataframe read from a CSV file whose value column contains hexadecimal values.

| date     | part  | feature | value        |
|----------|-------|---------|--------------|
| 20190503 | par1  | feat2   | 0x0          |
| 20190503 | par1  | feat3   | 0x01         |
| 20190501 | par2  | feat4   | 0x0f32       |
| 20190501 | par5  | feat9   | 0x00         |
| 20190506 | par8  | feat2   | 0x00f45      |
| 20190507 | par1  | feat6   | 0x0e62300000 |
| 20190501 | par11 | feat3   | 0x000000000  |
| 20190501 | par21 | feat5   | 0x03efff     |
| 20190501 | par3  | feat9   | 0x000        |
| 20190501 | par6  | feat5   | 0x000000     |
| 20190506 | par5  | feat8   | 0x034edc45   |
| 20190506 | par8  | feat1   | 0x00000      |
| 20190508 | par3  | feat6   | 0x00000000   |
| 20190503 | par4  | feat3   | 0x0c0deffe21 |
| 20190503 | par6  | feat4   | 0x0000000000 |
| 20190501 | par3  | feat6   | 0x0123fe     |
| 20190501 | par7  | feat4   | 0x00000d0    |

The requirement is to drop the rows whose value column holds a hex representation of decimal 0 (zero), i.e. values like 0x0, 0x00, 0x000, and so on. The number of zeros after '0x' varies across the dataframe. I tried dropping them with pattern matching, but without success.

from pyspark.sql.functions import col, regexp_extract
from pyspark.sql.types import StructField, StructType, StringType

myFile = sc.textFile("file.txt")
header = myFile.first()

fields = [StructField(field_name, StringType(), True) for field_name in header.split(',')]
schema = StructType(fields)

myFile_header = myFile.filter(lambda l: "date" in l)
myFile_NoHeader = myFile.subtract(myFile_header)

myFile_df = myFile_NoHeader.map(lambda line: line.split(",")).toDF(schema)

## this is the pattern match I tried
result = myFile_df.withColumn('Test', regexp_extract(col('value'), '(0x)(0\1*\1*)', 2))
result.show()

Another approach I tried was a udf:

def convert_value(x):
    return int(x,16)

Using this udf in PySpark gives me:

ValueError: invalid literal for int() with base 16
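That ValueError typically means some string reaching `int(x, 16)` is not a clean hex literal, e.g. an empty field, the header row, or a value carrying stray quotes or whitespace from the CSV. A minimal pure-Python sketch of a guarded converter (the cleanup steps are illustrative assumptions, not from the original post):

```python
def convert_value(x):
    """Convert a hex string like '0x0f32' to int; return None for bad input.

    A guarded variant of the udf above: strips whitespace and quotes,
    and catches values that are not valid base-16 literals instead of
    raising ValueError.
    """
    if x is None:
        return None
    cleaned = x.strip().strip('"')
    try:
        return int(cleaned, 16)
    except ValueError:
        return None

# Clean hex values convert as expected.
print(convert_value("0x0f32"))   # 3890
# Inputs that would crash the original udf now yield None.
print(convert_value(""))         # None
# Quoted CSV values are cleaned before conversion.
print(convert_value('"0x00"'))   # 0
```

Wrapped with `pyspark.sql.functions.udf`, this would let you convert and then filter on the numeric value instead of the string pattern.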

1 Answer:

Answer 0 (score: 2)

I don't fully understand your regular expression, but since you want to match every string consisting of 0x0 plus any number of additional zeros, you can use ^0x0+$. Filtering with a regular expression is possible with rlike, and the tilde negates the match.

l = [('20190503', 'par1', 'feat2', '0x0'),
('20190503', 'par1', 'feat3', '0x01'),
('20190501', 'par2', 'feat4', '0x0f32'),
('20190501', 'par5', 'feat9', '0x00'),
('20190506', 'par8', 'feat2', '0x00f45'),
('20190507', 'par1', 'feat6', '0x0e62300000'),
('20190501', 'par11', 'feat3', '0x000000000'),
('20190501', 'par21', 'feat5', '0x03efff'),
('20190501', 'par3', 'feat9', '0x000'),
('20190501', 'par6', 'feat5', '0x000000'),
('20190506', 'par5', 'feat8', '0x034edc45'),
('20190506', 'par8', 'feat1', '0x00000'),
('20190508', 'par3', 'feat6', '0x00000000'),
('20190503', 'par4', 'feat3', '0x0c0deffe21'),
('20190503', 'par6', 'feat4', '0x0000000000'),
('20190501', 'par3', 'feat6', '0x0123fe'),
('20190501', 'par7', 'feat4', '0x00000d0')]

columns = ['date', 'part', 'feature', 'value']

df=spark.createDataFrame(l, columns)

expr = "^0x0+$"
df.filter(~ df["value"].rlike(expr)).show()

Output:

+--------+-----+-------+------------+ 
|    date| part|feature|       value| 
+--------+-----+-------+------------+ 
|20190503| par1|  feat3|        0x01| 
|20190501| par2|  feat4|      0x0f32| 
|20190506| par8|  feat2|     0x00f45| 
|20190507| par1|  feat6|0x0e62300000| 
|20190501|par21|  feat5|    0x03efff| 
|20190506| par5|  feat8|  0x034edc45| 
|20190503| par4|  feat3|0x0c0deffe21| 
|20190501| par3|  feat6|    0x0123fe| 
|20190501| par7|  feat4|   0x00000d0| 
+--------+-----+-------+------------+
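The filter pattern can be sanity-checked outside Spark. For this simple expression, Python's `re` module behaves the same as the Java regex engine behind rlike (note that rlike does an unanchored search, which is why the explicit ^ and $ anchors matter). A small standalone sketch:

```python
import re

# ^0x0+$: the whole string must be "0x" followed by one or more zeros.
pattern = re.compile(r"^0x0+$")

values = ["0x0", "0x00", "0x01", "0x0f32", "0x0000000000", "0x00000d0"]

# Mirrors df.filter(~ df["value"].rlike(expr)): keep the non-matches.
kept = [v for v in values if not pattern.match(v)]
print(kept)   # ['0x01', '0x0f32', '0x00000d0']
```

Values such as 0x00000d0 survive because they contain a nonzero digit, so the pattern fails to match the full string.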