Best way to filter the last 30 days in Spark and improve performance

Date: 2016-06-15 04:17:02

Tags: performance scala hadoop apache-spark statistics

I have an RDD of records that I convert to a DataFrame. I want to filter it by day (using the timestamp) and compute daily statistics for the last 30 days, applying further column filters and computing results for each day.

The Spark application runs fast until it reaches the for loop below, so I wonder whether this per-day loop is an anti-pattern, and how I could achieve the same result differently, for example with a Spark cartesian?

//FILTER PROJECT RECORDS
val clientRecordsDF = recordsDF.filter($"rowkey".contains(""+client_id))
client_records_total = clientRecordsDF.count().toLong

This is the content of clientRecordsDF:

root
 |-- rowkey: string (nullable = true) //CLIENT_ID-RECORD_ID
 |-- record_type: string (nullable = true)
 |-- device: string (nullable = true)
 |-- timestamp: long (nullable = false) // MILLISECOND
 |-- datestring: string (nullable = true) // yyyyMMdd

[1-575e7f80673a0,login,desktop,1465810816424,20160613]
[1-575e95fc34568,login,desktop,1465816572216,20160613]
[1-575ef88324eb7,registration,desktop,1465841795153,20160613]
[1-575efe444d2be,registration,desktop,1465843268317,20160613]
[1-575e6b6f46e26,login,desktop,1465805679292,20160613]
[1-575e960ee340f,login,desktop,1465816590932,20160613]
[1-575f1128670e7,action,mobile-phone,1465848104423,20160613]
[1-575c9a01b67fb,registration,mobile-phone,1465686529750,20160612]
[1-575dcfbb109d2,registration,mobile-phone,1465765819069,20160612]
[1-575dcbcb9021c,registration,desktop,1465764811593,20160612] 
...


The for loop with bad performance:

// gmt is assumed to be a java.util.TimeZone, e.g. TimeZone.getTimeZone("GMT")
for (dayCounter <- 1 to 30) {
    // LAST 30 DAYS

    // CREATE DAY TIMESTAMP (start and end of the day, dayCounter days ago)
    val cal = Calendar.getInstance(gmt)

    cal.add(Calendar.DATE, -dayCounter)
    cal.set(Calendar.HOUR_OF_DAY, 0)
    cal.set(Calendar.MINUTE, 0)
    cal.set(Calendar.SECOND, 0)
    cal.set(Calendar.MILLISECOND, 0)
    val calTime = cal.getTime()
    val dayTime = cal.getTimeInMillis()

    cal.set(Calendar.HOUR_OF_DAY, 23)
    cal.set(Calendar.MINUTE, 59)
    cal.set(Calendar.SECOND, 59)
    cal.set(Calendar.MILLISECOND, 999)
    val dayTimeEnd = cal.getTimeInMillis()

    // FILTER PROJECT RECORDS
    val dailyClientRecordsDF = clientRecordsDF.filter(
      $"timestamp" >= dayTime && $"timestamp" <= dayTimeEnd
    )
    val daily_client_records = dailyClientRecordsDF.count().toLong

    println("dayCounter " + dayCounter + " records = " + daily_client_records)

    // perform other filters on dailyClientRecordsDF
    // save daily statistics to hbase
}

3 Answers:

Answer 0 (score: 2)

This approach uses SQL. First, register the DataFrame as a table so it can be queried. Then define a UDF (user-defined function) that maps a timestamp to its day. Finally, filter as you would in SQL and group by the resulting day over the desired date range.

    def mk(timestamp: Long): Long = {
      // the question's timestamps are in milliseconds, so the block size is in milliseconds too
      val blockTime: Long = 3600L * 24 * 1000 // daily
      // val blockTime: Long = 3600L * 1000   // hourly
      timestamp - timestamp % blockTime
    }

    recordsDF.registerTempTable("client") // register the DataFrame as a table
    sqlContext.udf.register("makeDaily", (timestamp: Long) => mk(timestamp)) // register the UDF

    val res = sqlContext.sql("""select makeDaily(timestamp) as date, count(*) as count
                                from client
                                where timestamp between 111111 and 222222
                                group by makeDaily(timestamp)""").collect()
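
A minimal usage sketch, assuming the 111111 and 222222 placeholders stand for the start of the 30-day window and "now" in epoch milliseconds (matching the question's schema); the bounds could be computed like this:

    import java.util.{Calendar, TimeZone}

    // Compute the window bounds in epoch milliseconds (assumption: GMT days, as in the question)
    val cal = Calendar.getInstance(TimeZone.getTimeZone("GMT"))
    val windowEnd = cal.getTimeInMillis()          // "now"
    cal.add(Calendar.DATE, -30)
    cal.set(Calendar.HOUR_OF_DAY, 0)
    cal.set(Calendar.MINUTE, 0)
    cal.set(Calendar.SECOND, 0)
    cal.set(Calendar.MILLISECOND, 0)
    val windowStart = cal.getTimeInMillis()        // midnight 30 days ago

    // Substitute the computed bounds for the hard-coded placeholders
    val res = sqlContext.sql(
      s"""select makeDaily(timestamp) as date, count(*) as count
         |from client
         |where timestamp between $windowStart and $windowEnd
         |group by makeDaily(timestamp)""".stripMargin).collect()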

Addendum: for example, to count all records whose record_type is 'registration' within the 30-day window:

sqlContext.sql("select count(*) 
                from client 
                where record_type='registration' and timestamp between 1111 and 2222")

Answer 1 (score: 1)

Creating UDFs should be avoided in almost every case, because it prevents the Catalyst optimizer from handling the query properly.

Instead, use the built-in SQL functions:

(
  spark.read.table("table_1")
  .join(
    spark.read.table("table_2"), 
    "user_id"
  )
  .where("p_eventdate > current_date() - 30")
)
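
Applied to the question's clientRecordsDF, a sketch of the same idea without a UDF (assumptions: the timestamp column holds epoch milliseconds, and the built-in to_date/from_unixtime/date_sub functions are available, i.e. Spark 1.5+):

    import org.apache.spark.sql.functions.{col, current_date, date_sub, from_unixtime, to_date}

    // Derive the day from the millisecond timestamp with built-in functions only,
    // keep the last 30 days, and count records per day in a single job.
    val dailyCounts = clientRecordsDF
      .withColumn("day", to_date(from_unixtime((col("timestamp") / 1000).cast("long"))))
      .where(col("day") >= date_sub(current_date(), 30))
      .groupBy("day")
      .count()
      .orderBy("day")

    dailyCounts.show()

Since the data already carries a datestring column (yyyyMMdd), grouping directly on that column would avoid the timestamp conversion entirely.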

Answer 2 (score: 0)

date_sub(current_date(), 30) is available since Spark 1.5.0.
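
A brief SQL sketch using date_sub against the table registered in answer 0 (an assumption: timestamp holds epoch milliseconds, hence the cast before from_unixtime):

    // Count records per day for the last 30 days using only built-in SQL functions (no UDF)
    val last30 = sqlContext.sql(
      """select datestring, count(*) as records
        |from client
        |where to_date(from_unixtime(cast(timestamp / 1000 as bigint))) >= date_sub(current_date(), 30)
        |group by datestring
        |order by datestring""".stripMargin)

    last30.show()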