I have a Spark Dataset in Java that looks like this:
+-------+-------------------+---------------+----------+--------------------+-----+
|item_id| date_time|horizon_minutes|last_value| values|label|
+-------+-------------------+---------------+----------+--------------------+-----+
| 8|2019-04-30 09:55:00| 15| 0.0|[0.0,0.0,0.0,0.0,...| 0.0|
| 8|2019-04-30 10:00:00| 15| 0.0|[0.0,0.0,0.0,0.0,...| 0.0|
| 8|2019-04-30 10:05:00| 15| 0.0|[0.0,0.0,0.0,0.0,...| 0.0|
I want to filter the DataFrame to keep only the rows whose month is in a list of integers (e.g. 1, 2, 5, 12).
I tried the string-based filter function:
rowsDS.filter("month(date_time)" ???)
but I don't know how to express the "is in list" condition on integers.
I also tried filtering with a lambda function, but with no luck:
rowsDS.filter(row -> listofints.contains(row.getDate(1).getMonth()))
Evaluation failed. Reason(s):
Lambda expressions cannot be used in an evaluation expression
Is there a simple way to do this? I would prefer a lambda function, since I don't much like the string-based SparkSQL filter of the first example.
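For reference, the core logic needed here, parsing the timestamp and testing month membership, can be sketched in plain Java with `java.time`, independent of Spark (the `months` list and the sample timestamp below are illustrative, taken from the question):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Arrays;
import java.util.List;

public class MonthCheck {
    public static void main(String[] args) {
        // Months to keep, as in the question (1, 2, 5, 12)
        List<Integer> months = Arrays.asList(1, 2, 5, 12);
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        // A sample date_time value from the Dataset
        int month = LocalDateTime.parse("2019-04-30 09:55:00", fmt).getMonthValue();
        System.out.println(month);                  // 4
        System.out.println(months.contains(month)); // false: April is filtered out
    }
}
```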
Answer 0 (score: 1)
For DataFrames, in Scala:
val result = df.where(month($"date_time").isin(2, 3, 4))
In Java, with the `col` and `month` functions imported statically from `org.apache.spark.sql.functions`:
Dataset<Row> result = df.where(month(col("date_time")).isin(2, 3, 4));
Answer 1 (score: 0)
My example:
val seq1 = Seq(
("A", "abc", 0.1, 0.0, 0),
("B", "def", 0.15, 0.5, 0),
("C", "ghi", 0.2, 0.2, 1),
("D", "jkl", 1.1, 0.1, 0),
("E", "mno", 0.1, 0.1, 0)
)
val ls = List("A", "B")
// `ss` is the SparkSession; `toDF` requires `import ss.implicits._`
val df1 = ss.sparkContext.makeRDD(seq1).toDF("cA", "cB", "cC", "cD", "cE")
def rawFilterFunc(r: String) = ls.contains(r)
ss.udf.register("ff", rawFilterFunc _)
// `callUDF` comes from `org.apache.spark.sql.functions`
df1.filter(callUDF("ff", df1("cA"))).show()
which gives the output:
+---+---+----+---+---+
| cA| cB| cC| cD| cE|
+---+---+----+---+---+
| A|abc| 0.1|0.0| 0|
| B|def|0.15|0.5| 0|
+---+---+----+---+---+
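Outside Spark, the same predicate the UDF applies, membership of the `cA` value in `ls`, can be exercised with plain Java streams over the sample keys (a sketch for illustration only, not Spark code):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FilterSketch {
    public static void main(String[] args) {
        List<String> ls = Arrays.asList("A", "B");
        // The cA column values from the example rows above
        List<String> cA = Arrays.asList("A", "B", "C", "D", "E");
        // Keep only the keys contained in ls, mirroring the UDF's contains check
        List<String> kept = cA.stream().filter(ls::contains).collect(Collectors.toList());
        System.out.println(kept); // [A, B]
    }
}
```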