I am using the PySpark DataFrame API in a streaming context. In my Spark Streaming application (I use a Kafka receiver) I convert the RDD into a DataFrame for each DStream batch. This is what I do in my process-RDD function:
rowRdd = data_lined_parameters.map(
    lambda x: Row(SYS=x[0], METRIC='temp', SEN=x[1], OCCURENCE=x[2],
                  THRESHOLD_HIGH=x[3], OSH=x[4], OSM=x[5], OEH=x[6], OEM=x[7],
                  OSD=x[8], OED=x[9], REMOVE_HOLIDAYS=x[10], TS=x[11],
                  VALUE=x[12], DAY=x[13], WEEKDAY=x[14], HOLIDAY=x[15]))
rawDataDF = sqlContext.createDataFrame(rowRdd)
rawDataRequirementsCheckedDF = rawDataDF.filter("WEEKDAY <= OED AND WEEKDAY >= OSD AND HOLIDAY = false AND VALUE > THRESHOLD_HIGH")
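For reference, this is roughly how that function is wired into the stream (a minimal sketch only; the stream name, group, topic, batch interval and process_rdd are placeholders, not my actual configuration):

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
from pyspark.sql import SQLContext, Row

sc = SparkContext(appName="SensorStreaming")
ssc = StreamingContext(sc, batchDuration=10)   # 10s micro-batches (placeholder)
sqlContext = SQLContext(sc)

# Receiver-based Kafka stream; quorum, group and topic are placeholders.
kafkaStream = KafkaUtils.createStream(ssc, "clustdev1:2181", "sensor-group", {"sensor-topic": 1})

def process_rdd(time, rdd):
    if rdd.isEmpty():
        return
    # ... parse the messages into data_lined_parameters, then run the
    # Row mapping / createDataFrame / filter shown above ...

kafkaStream.map(lambda kv: kv[1]).foreachRDD(process_rdd)
ssc.start()
ssc.awaitTermination()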
My next step is to enrich each row of rawDataRequirementsCheckedDF with additional columns coming from an HBase table. My question is: what is the most efficient way to fetch that data from HBase (Phoenix) and join it with my original DataFrame?
+--------------------+-------+------+---------+---+---+---+---+---+---+---------------+---+----------------+--------------+--------------------+-------+-------+
| DAY|HOLIDAY|METRIC|OCCURENCE|OED|OEH|OEM|OSD|OSH|OSM|REMOVE_HOLIDAYS|SEN| SYS|THRESHOLD_HIGH| TS| VALUE|WEEKDAY|
+--------------------+-------+------+---------+---+---+---+---+---+---+---------------+---+----------------+--------------+--------------------+-------+-------+
|2017-08-03 00:00:...| false| temp| 3| 4| 19| 59| 0| 8| 0| TRUE| 1|0201| 26|2017-08-03 16:22:...|28.4375| 3|
|2017-08-03 00:00:...| false| temp| 3| 4| 19| 59| 0| 8| 0| TRUE| 1|0201| 26|2017-08-03 16:22:...|29.4375| 3|
+--------------------+-------+------+---------+---+---+---+---+---+---+---------------+---+----------------+--------------+--------------------+-------+-------+
The HBase table's primary key is DAY, SYS, SEN, so the lookup will produce a DataFrame with the same kind of layout.
EDIT:
Here is what I have tried so far:
from pyspark.sql.functions import col

# Collect the distinct SYS keys of this batch and push them into the Phoenix query.
sysList = rawDataRequirementsCheckedDF.rdd.map(lambda x: "'" + x['SYS'] + "'").collect()
df_sensor = sqlContext.read.format("jdbc") \
    .option("dbtable", "(select DATE,SYSTEMUID,SENSORUID,OCCURENCE from ANOMALY where SYSTEMUID in (" + ','.join(sysList) + "))") \
    .option("url", "jdbc:phoenix:clustdev1:2181:/hbase-unsecure") \
    .option("driver", "org.apache.phoenix.jdbc.PhoenixDriver") \
    .load()
df_anomaly = rawDataRequirementsCheckedDF.join(df_sensor, col("SYS") == col("SYSTEMUID"), 'outer')
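A variant I am also considering is to disambiguate the join columns through their own DataFrames and to broadcast the Phoenix result, so the large streaming DataFrame is not shuffled (a sketch, under the assumption that df_sensor is small enough to broadcast):

from pyspark.sql.functions import broadcast

# Reference each column through its own DataFrame so Spark does not have to
# guess which side SYS / SYSTEMUID belongs to.
join_cond = rawDataRequirementsCheckedDF["SYS"] == df_sensor["SYSTEMUID"]

# broadcast() hints that the Phoenix lookup side fits in memory; a left outer
# join keeps rows that have no match in ANOMALY.
df_anomaly = rawDataRequirementsCheckedDF.join(broadcast(df_sensor), join_cond, "left_outer")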
Answer 0 (score: 1)
A simple way I bring data in from HBase is to create the table in Phoenix and then load it into Spark. This is from the Apache Spark plugin section of the Apache Phoenix documentation:
df = sqlContext.read \
    .format("org.apache.phoenix.spark") \
    .option("table", "TABLE1") \
    .option("zkUrl", "localhost:2181") \
    .load()
Link to the Apache Spark plugin: https://phoenix.apache.org/phoenix_spark.html
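From there, the Phoenix-backed DataFrame can be joined onto your filtered streaming DataFrame on the key columns. A rough sketch, assuming your ANOMALY table exposes SYSTEMUID and SENSORUID as in your edit and is reachable at the same ZooKeeper quorum:

from pyspark.sql.functions import col

# Load the Phoenix table (table name and zkUrl taken from the question's edit).
df_sensor = sqlContext.read \
    .format("org.apache.phoenix.spark") \
    .option("table", "ANOMALY") \
    .option("zkUrl", "clustdev1:2181:/hbase-unsecure") \
    .load()

# Enrich the streaming rows by joining on the HBase/Phoenix key columns.
df_anomaly = rawDataRequirementsCheckedDF.join(
    df_sensor,
    (col("SYS") == col("SYSTEMUID")) & (col("SEN") == col("SENSORUID")),
    "left_outer")

This avoids building the IN (...) list by hand; Phoenix can push the join keys down as predicates when the plugin is used instead of plain JDBC.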