BigQuery JavaScript UDF fails with "resources exceeded"

Date: 2016-04-04 13:31:28

Tags: google-bigquery, udf

This question may be another example of

BigQuery UDF memory exceeded error on multiple rows but works fine on single row

but it was suggested that I post it as a new question rather than as an answer there.

I am using JavaScript to parse a log file into a table. The JavaScript parsing function is relatively simple. It works on 1M rows but fails on 3M rows. The log files can be much bigger than 3M rows, so the failure is a problem.

The function is as follows:

function parseLogRow(row, emit) {

    // Reassemble the full log line; l2/l3 are only populated when the
    // tab-delimited CSV import split a line containing stray tabs.
    var r = (row.logrow ? row.logrow : "") + (row.l2 ? " " + row.l2 : "") + (row.l3 ? " " + row.l3 : "");
    var ts = null;
    var category = null;
    var user = null;
    var message = null;
    var db = null;
    var seconds = null;
    var found = false;
    if (r) {
        // Expected shape: "YYYY-MM-DD hh:mm:ss.mmm +ZZZZ [category|user|db] :: (N.NNNs)? message"
        var m = r.match(/^(\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\.\d\d\d (\+|\-)\d\d\d\d) \[([^|]*)\|([^|]*)\|([^\]]*)\] ::( \(([\d\.]+)s\))? (.*)/);
        if (m) {
          ts = new Date(m[1]) * 1;   // epoch milliseconds
          category = m[3] || null;
          user = m[4] || null;
          db = m[5] || null;
          seconds = m[7] || null;
          message = m[8] || null;
          found = true;
        }
        else {
          message = r;               // keep unparsed lines intact
          found = false;
        }
    }

    emit({
      ts: ts,
      category: category,
      user: user,
      db: db,
      seconds: seconds * 1.0,
      message: message,
      found: found
    });
  }


  bigquery.defineFunction(
    'parseLogRow',                           // Name of the function exported to SQL
    ['logrow',"l2","l3"],                    // Names of input columns
    [
      {'name': 'ts', 'type': 'float'},  // Output schema
      {'name': 'category', 'type': 'string'},
      {'name': 'user', 'type': 'string'},
      {'name': 'db', 'type': 'string'},
      {'name': 'seconds', 'type': 'float'},
      {'name': 'message', 'type': 'string'},
      {'name': 'found', 'type': 'boolean'},
    ],
    parseLogRow                          // Reference to JavaScript UDF
  );
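For what it's worth, the parser can be sanity-checked outside BigQuery. Below is a minimal sketch, assuming parseLogRow above is pasted into a plain Node.js script; the emit callback here just prints the record, and the input is one of the sample lines from the table further down:

// Stand-in for BigQuery's emit: print each parsed record as JSON.
function emit(record) {
  console.log(JSON.stringify(record));
}

// One of the sample lines from the 'today' table below; l2/l3 empty.
parseLogRow({
  logrow: "2015-12-29 00:00:09.262 -0800 [INFO|7aaa0|] :: Running scheduled job: confirm running gulp process",
  l2: "",
  l3: ""
}, emit);
// Expect category "INFO", user "7aaa0", db null, found true; the exact ts
// value depends on how the JS engine parses this non-ISO date string.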

I reference the function with this query:

SELECT
    ROW_NUMBER() OVER() AS row_num,
    ts, category, user,
    db, seconds, message, found
FROM parseLogRow((SELECT * FROM [#{dataset}.today]
      LIMIT 1000000
    ))

Some sample data in the 'today' table looks like this (as CSV):

logrow,l2,l3
# Logfile created on 2015-12-29 00:00:09 -0800 by logger.rb/v1.2.7,,
2015-12-29 00:00:09.262 -0800 [INFO|7aaa0|] :: Running scheduled job: confirm running gulp process,,
2015-12-29 00:00:09.277 -0800 [DEBUG|7aaa0|] :: Restarted gulp process,,
2015-12-29 00:00:09.278 -0800 [INFO|7aaa0|] :: Completed scheduled job: confirm running gulp process,,
2015-12-29 00:00:14.343 -0800 [DEBUG|7aaa2|scheduler] :: Polling for pending tasks (master: true),,
2015-12-29 00:00:19.396 -0800 [INFO|7aaa4|] :: Running scheduled job: confirm running gulp process,,
2015-12-29 00:00:19.409 -0800 [DEBUG|7aaa4|] :: Restarted gulp process,,
2015-12-29 00:00:19.410 -0800 [INFO|7aaa4|] :: Completed scheduled job: confirm running gulp process,,
2015-12-29 00:00:29.487 -0800 [INFO|7aaa6|] :: Running scheduled job: confirm running gulp process,,
2015-12-29 00:00:29.500 -0800 [DEBUG|7aaa6|] :: Restarted gulp process,,
2015-12-29 00:00:29.500 -0800 [INFO|7aaa6|] :: Completed scheduled job: confirm running gulp process,,
2015-12-29 00:00:39.597 -0800 [INFO|7aaa8|] :: Running scheduled job: confirm running gulp process,,
2015-12-29 00:00:39.610 -0800 [DEBUG|7aaa8|] :: Restarted gulp process,,
2015-12-29 00:00:39.611 -0800 [INFO|7aaa8|] :: Completed scheduled job: confirm running gulp process,,
2015-12-29 00:00:44.659 -0800 [DEBUG|7aaaa|scheduler] :: Polling for pending tasks (master: true),,
2015-12-29 00:00:49.687 -0800 [INFO|7aaac|] :: Running scheduled job: confirm running gulp process,,
2015-12-29 00:00:49.689 -0800 [DEBUG|7aaac|] :: Restarted gulp process,,
2015-12-29 00:00:49.689 -0800 [INFO|7aaac|] :: Completed scheduled job: confirm running gulp process,,
2015-12-29 00:00:59.869 -0800 [INFO|7aaae|] :: Running scheduled job: confirm running gulp process,,
2015-12-29 00:00:59.871 -0800 [DEBUG|7aaae|] :: Restarted gulp process,,

This is a bit of a hack: I import the log as a three-column table (it really has only one column) by loading it as CSV with the delimiter set to tab (our log files generally don't contain any tabs), and then use a query to convert it into the table I actually want.
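For context, that import step can be scripted. Here is a rough sketch using the Node.js client library (@google-cloud/bigquery); the dataset name and file path are placeholders, and this illustrates the approach rather than the exact commands used:

// Sketch: load a raw log file as tab-delimited "CSV" so that each whole
// line lands in logrow (l2/l3 only fill if a line contains stray tabs).
// Dataset, table, and file names below are placeholders.
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function loadLog() {
  await bigquery
    .dataset('my_dataset')
    .table('today')
    .load('today.log', {
      sourceFormat: 'CSV',
      fieldDelimiter: '\t',
      schema: {
        fields: [
          {name: 'logrow', type: 'STRING'},
          {name: 'l2', type: 'STRING'},
          {name: 'l3', type: 'STRING'},
        ],
      },
    });
}

loadLog().catch(console.error);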

I like this pattern because the parsing is fast and well distributed (when it works).

The failed job is: bigquery-looker:bquijob_260be029_153dd96cfdb. Please contact me if you need a reproducible case.

Any help or suggestions would be appreciated.

1 Answer:

Answer 0 (score: 1)

Point #1
I don't see an issue with the UDF itself; it worked for me even on 10M rows.
I think the issue is rather the use of ROW_NUMBER() OVER(): with no PARTITION BY, all rows have to be numbered in a single sequence, which does not parallelize and can hit the "resources exceeded" limit. Remove it and it should work:

SELECT
  ts, category, user,
  db, seconds, message, found
FROM parseLogRow((SELECT * FROM [#{dataset}.today]
))

Point #2
From a performance perspective, the query below should run faster (I think), and in general I would suggest avoiding UDFs in cases where "plain" BigQuery SQL works just as well:

SELECT 
  REGEXP_EXTRACT(logrow, r'^(\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\.\d\d\d (?:\+|\-)\d\d\d\d) \[[^|]*\|[^|]*\|[^\]]*\] :: .*') AS ts,
  REGEXP_EXTRACT(logrow, r'^(?:\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\.\d\d\d (?:\+|\-)\d\d\d\d) \[([^|]*)\|(?:[^|]*)\|(?:[^\]]*)\] :: (?:.*)') AS category,
  REGEXP_EXTRACT(logrow, r'^(?:\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\.\d\d\d (?:\+|\-)\d\d\d\d) \[(?:[^|]*)\|([^|]*)\|(?:[^\]]*)\] :: (?:.*)') AS user,
  REGEXP_EXTRACT(logrow, r'^(?:\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\.\d\d\d (?:\+|\-)\d\d\d\d) \[(?:[^|]*)\|(?:[^|]*)\|([^\]]*)\] :: (?:.*)') AS db,
  REGEXP_EXTRACT(logrow, r'^(?:\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\.\d\d\d (?:\+|\-)\d\d\d\d) \[(?:[^|]*)\|(?:[^|]*)\|(?:[^\]]*)\] :: (.*)') AS message,
  REGEXP_MATCH(logrow, r'^((?:\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\.\d\d\d (?:\+|\-)\d\d\d\d) \[(?:[^|]*)\|(?:[^|]*)\|(?:[^\]]*)\] :: (?:.*))') AS found
FROM (
  SELECT logrow + IFNULL(' ' + l2, '') + IFNULL(' ' + l3, '') AS logrow
  FROM YourTable    
)