I have a dataframe coming from SQL:
log = hc.sql("""select
      ip
    , url
    , ymd
from log""")
and a function that takes the "ip" value from the dataframe and returns three values:
# Assuming the geolocation objects come from the pysyge (Sypex Geo) package
from pysyge import GeoLocator, MODE_BATCH, MODE_MEMORY

def get_loc(ip):
    # Note: the database file is reloaded on every call, which is slow for many rows
    geodata = GeoLocator('SxGeoCity.dat', MODE_BATCH | MODE_MEMORY)
    location = geodata.get_location(ip, detailed=True)
    city_name_en = str(processValue(location['info']['city']['name_en']))
    region_name_en = str(processValue(location['info']['region']['name_en']))
    country_name_en = str(processValue(location['info']['country']['name_en']))
    return [city_name_en, region_name_en, country_name_en]
I don't know how to pass the values into get_loc() and add the returned values to the existing dataframe as a map column named "property". Using Python 2.7 and PySpark.
Answer 0 (score: 0)
I don't know what get_loc does, but you can use a UDF, like this:
from pyspark.sql import functions as f
from pyspark.sql.types import ArrayType, StringType

def get_loc(ip):
    return str(ip).split('.')

rdd = spark.sparkContext.parallelize([(1, '192.168.0.1'), (2, '192.168.0.1')])
df = spark.createDataFrame(rdd, schema=['idx', 'ip'])

# Wrap the Python function in a UDF that returns an array of strings
My_UDF = f.UserDefinedFunction(get_loc, returnType=ArrayType(StringType()))
df = df.withColumn('loc', My_UDF(df['ip']))
df.show()
# output:
+---+-----------+----------------+
|idx| ip| loc|
+---+-----------+----------------+
| 1|192.168.0.1|[192, 168, 0, 1]|
| 2|192.168.0.1|[192, 168, 0, 1]|
+---+-----------+----------------+
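To get the map column "property" that the question asks for, the same UDF pattern works with a MapType return type. Below is a minimal sketch, not tested against the original setup: it assumes the get_loc function from the question is importable on the executors (with pysyge, processValue and SxGeoCity.dat available there, e.g. shipped via --py-files / SparkContext.addFile), and the names get_loc_map and loc_udf are introduced here only for illustration:

from pyspark.sql import functions as f
from pyspark.sql.types import MapType, StringType

# Hypothetical wrapper: turns the [city, region, country] list returned by
# get_loc (from the question) into a dict so the column can be a MapType
def get_loc_map(ip):
    city, region, country = get_loc(ip)
    return {'city': city, 'region': region, 'country': country}

loc_udf = f.udf(get_loc_map, returnType=MapType(StringType(), StringType()))

# Add the map column "property" to the dataframe built from the SQL query
log = log.withColumn('property', loc_udf(log['ip']))
log.show(truncate=False)

One caveat: creating GeoLocator inside get_loc reloads SxGeoCity.dat for every row, so for large dataframes it is worth initializing it once (for example in a module-level variable, or via mapPartitions) before wrapping the lookup in a UDF.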