{
    "city": "Tempe",
    "state": "AZ",
    ...
    "attributes": [
        "BikeParking: True",
        "BusinessAcceptsBitcoin: False",
        "BusinessAcceptsCreditCards: True",
        "BusinessParking: {'garage': False, 'street': False, 'validated': False, 'lot': True, 'valet': False}",
        "DogsAllowed: False",
        "RestaurantsPriceRange2: 2",
        "WheelchairAccessible: True"
    ],
    ...
}
Hi, I'm using PySpark and I'm trying to output tuples of (state, BusinessAcceptsBitcoin). Currently I'm doing:
csr = (dataset
.filter(lambda e:"city" in e and "BusinessAcceptsBitcoin" in e)
.map(lambda e: (e["city"],e["BusinessAcceptsBitcoin"]))
.collect()
)
But this command fails. How can I get the "BusinessAcceptsBitcoin" and "city" fields?
Answer 0 (score: 1)
You can use a Dataframe and a UDF to parse the 'attributes' string.
Based on the sample data you provided, 'attributes' does not appear to be valid JSON or a dict.
Assuming the entries in 'attributes' are just plain strings, here is sample code using a dataframe and a UDF.
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.sql.types import *
spark = SparkSession \
    .builder \
    .appName("test") \
    .getOrCreate()
# sample data
data = [{
    "city": "Tempe",
    "state": "AZ",
    "attributes": [
        "BikeParking: True",
        "BusinessAcceptsBitcoin: False",
        "BusinessAcceptsCreditCards: True",
        "BusinessParking: {'garage': False, 'street': False, 'validated': False, 'lot': True, 'valet': False}",
        "DogsAllowed: False",
        "RestaurantsPriceRange2: 2",
        "WheelchairAccessible: True"
    ]
}]
df = spark.sparkContext.parallelize(data).toDF()
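If you want to confirm how the attributes were loaded before writing the UDF, printing the schema is a quick check (a sketch; the exact column order depends on how Spark infers the dict keys):
# inspect the inferred schema; 'attributes' should come through as an array of strings
df.printSchema()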
User-defined function to parse the string:
def get_attribute(data, attribute):
    # return the first entry in the attributes list that contains the attribute name
    return [list_item for list_item in data if attribute in list_item][0]
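As a quick sanity check outside Spark, you could call the function on a plain Python list (a hypothetical example, not part of the pipeline above):
sample = ["BikeParking: True", "BusinessAcceptsBitcoin: False"]
print(get_attribute(sample, "BusinessAcceptsBitcoin"))  # prints "BusinessAcceptsBitcoin: False"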
Register the UDF:
udf_get_attribute = udf(get_attribute, StringType())
Apply the UDF to the dataframe:
df.withColumn("BusinessAcceptsBitcoin",udf_get_attribute("attributes",lit("BusinessAcceptsBitcoin"))).select("city","BusinessAcceptsBitcoin").show(truncate=False)
Sample output:
+-----+-----------------------------+
|city |BusinessAcceptsBitcoin |
+-----+-----------------------------+
|Tempe|BusinessAcceptsBitcoin: False|
+-----+-----------------------------+
You can also query any other field with the same UDF, for example:
df.withColumn("DogsAllowed",udf_get_attribute("attributes",lit("DogsAllowed"))).select("city","DogsAllowed").show(truncate=False)