I have a DataFrame of dns (string) and ip-address (string). I want to use a UDF to apply a Python function I created that searches for the distinct/unique dns values and correlates each with the number of IPs matching it. Eventually it outputs that information into a list, so the end result is a UDF that takes a DataFrame and returns a list.
#creating sample data
from pyspark.sql import Row
l = [('pipe.skype.com','172.25.132.26'),('management.azure.com','172.25.24.57'),('pipe.skype.com','172.11.128.10'),('management.azure.com','172.16.12.22'),('www.google.com','172.26.51.144'),('collector.exceptionless.io','172.22.2.21')]
rdd = sc.parallelize(l)
data = rdd.map(lambda x: Row(dns_host=x[0], src_ipv4=x[1]))
data_df = sqlContext.createDataFrame(data)
def beaconing_aggreagte(df):
    """Loops through unique hostnames and correlates them to unique src IPs. If an individual hostname has less than 5 unique source IP connections, it moves to the next step"""
    dns_host = df.select("dns_host").distinct().rdd.flatMap(lambda x: x).collect()
    HIT_THRESHOLD = 5
    data = []
    for dns in dns_host:
        dns_data = []
        testing = df.where((f.col("dns_host") == dns)).select("src_ipv4").distinct().rdd.flatMap(lambda x: x).collect()
        if 0 < len(testing) <= 5: #must have less than 5 unique src ip for significance
            dns_data.append(dns)
            data.append([testing, dns_data])
            print([testing, dns_data])
    return data
I think my schema might be incorrect:
#Expected return from function: [[['172.25.24.57','172.16.12.22'],[management.azure.com]],..]
from pyspark.sql.types import StructType, StructField, ArrayType, StringType
from pyspark.sql.functions import udf, array

array_schema = StructType([
    StructField('ip', ArrayType(StringType()), nullable=False),
    StructField('hostname', ArrayType(StringType()), nullable=False)
])
testing_udf_beaconing_aggreagte = udf(lambda z: beaconing_aggreagte(z), array_schema)
df_testing = testing_df.select('*',testing_udf_beaconing_aggreagte(array('dns_host','src_ipv4')))
df_testing.show()
The error shown is:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1248.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1248.0 (TID 3846823, 10.139.64.23, executor 13): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
My end goal is to take the df and return a list in the format [[[list of ips],[dns_host]],...]. I am trying to use a UDF to help parallelize the operation across the cluster instead of running it on a single executor.
Answer 0 (score: 1)
A groupBy should be able to achieve this. Use an aggregation to collect all the IPs, then compute the size of the list. You can then filter out the rows whose size is greater than 5.
from pyspark.sql import functions as F
from pyspark.sql import Row
l = [('pipe.skype.com','172.25.132.26'),('management.azure.com','172.25.24.57'),('pipe.skype.com','172.11.128.10'),('management.azure.com','172.16.12.22'),('www.google.com','172.26.51.144'),('collector.exceptionless.io','172.22.2.21')]
rdd = sc.parallelize(l)
data = rdd.map(lambda x: Row(dns_host=x[0], src_ipv4=x[1]))
data_df = sqlContext.createDataFrame(data)
data_df2 = data_df.groupby("dns_host").agg(F.collect_list("src_ipv4").alias("src_ipv4_list"))\
    .withColumn("ip_count", F.size("src_ipv4_list"))\
    .filter(F.col("ip_count") <= 5)\
    .drop("ip_count")
data_df2.show(20,False)
Output:
+--------------------------+------------------------------+
|dns_host |src_ipv4_list |
+--------------------------+------------------------------+
|pipe.skype.com |[172.25.132.26, 172.11.128.10]|
|collector.exceptionless.io|[172.22.2.21] |
|www.google.com |[172.26.51.144] |
|management.azure.com |[172.25.24.57, 172.16.12.22] |
+--------------------------+------------------------------+
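If you still need the nested-list shape from the question ([[[list of ips],[dns_host]],...]), you can collect the aggregated result to the driver, since groupBy has already reduced it to one row per distinct hostname. A minimal sketch, assuming the data_df2 from above:

# Convert the aggregated rows into the [[ [ips...], [dns_host] ], ...] shape
# requested in the question; collect() is cheap here because the data is
# already one row per distinct hostname.
result = [[row["src_ipv4_list"], [row["dns_host"]]] for row in data_df2.collect()]
print(result)
# e.g. [[['172.25.132.26', '172.11.128.10'], ['pipe.skype.com']], ...]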